
Data Science Meets A/B and Multi-Arm Bandit Testing in SAS® Customer Intelligence 360


Watch this Ask the Expert session to learn how smarter testing yields dynamic, real-time results with SAS Customer Intelligence 360. 

 

Image 1: Data Science Meets A/B and Multi-Arm Bandit Testing

Watch the Webinar

 

You will learn:

 

  • How to choose the right testing strategy — from classic A/B to adaptive multi-armed bandits.
  • Ways to integrate SAS CI360 insights into your existing marketing workflows with minimal disruption.
  • Best practices for using advanced analytics and machine learning to optimize campaigns in real time.
  • How to measure success and turn test results into actionable business value.

 

The questions from the Q&A segment held at the end of the webinar are listed below, and the slides from the webinar are attached.

 

Q&A

 

Does a multi-arm bandit test replace a multivariate test?

No. A multi-arm bandit test is an analytically enhanced way of optimizing the performance of a single element, similar in scope to an A/B test. A multivariate test, by contrast, varies several elements at once and attempts to find the ideal recipe across them. So they serve different purposes. In 2025, we are observing much higher demand from our clients for A/B and multi-arm bandit testing features in martech use cases.

 

Why is A/B testing important in validating analytical impact on customer experiences?

For those of you who do data analysis or have built models before, you're probably familiar with the concept of data partitioning. Before running your modeling activity, you divide your input data into training and validation sets, or perhaps into training, validation, and test sets. You might also use k-fold cross-validation or other techniques. The purpose of these methods is to ensure that when you run your model and predict or estimate outcomes, it does so accurately. We seek higher precision, greater accuracy, and lower error rates. Since we're always trying to anticipate or predict, some mistakes are inevitable. By partitioning data, we can evaluate how well our predicted scores generalize to new data.
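As a purely illustrative sketch (assuming Python with scikit-learn, not SAS CI360 functionality), the partitioning and k-fold ideas described above look like this; the dataset and model are hypothetical stand-ins:

# Minimal sketch of data partitioning; assumes scikit-learn is available.
# The dataset and model are hypothetical illustrations, not SAS CI360 code.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# Hold out a final test set, then carve a validation set from the remainder.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
print(f"validation accuracy: {model.score(X_val, y_val):.3f}")
print(f"test accuracy:       {model.score(X_test, y_test):.3f}")

# Alternatively, k-fold cross-validation estimates generalization
# by rotating the held-out fold across the training data.
scores = cross_val_score(LogisticRegression(max_iter=1000), X_train, y_train, cv=5)
print(f"5-fold CV accuracy:  {scores.mean():.3f} +/- {scores.std():.3f}")

If the validation, test, and cross-validated scores agree, the model is generalizing rather than memorizing one partition.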

 

As we deploy models for marketing use cases, we want those predictive scores to perform with the same precision and accuracy as during model training. Partitioning allows us to stress-test our models across different data segments, confirming that prediction accuracy is consistent. If our model generalizes well—neither underfitting nor overfitting—we gain confidence in its reliability.

 

When moving from training to production use cases (or inference), testing provides an additional layer of confirmation. As we activate insights and influence everyday customer interactions, we need to ensure that the analytical scores behave as expected. Consumer behaviors can shift rapidly, so it's important that the model's performance in production aligns with our expectations. Data partitioning and thorough testing play crucial roles in validating and sustaining the analytical impact on marketing strategies.

 

How are Multi-Arm Bandit tests different from A/B testing approaches?

Let me provide both a non-technical and a moderately technical explanation. With an A/B test, if you have three variants and a sample size of 75,000, each variant will be served to 25,000 individuals during the test period. When the required sample size is reached, the software automatically runs a statistical test to determine if there is a significant difference in performance between the variants, based on your objective—such as conversions or engagement. If there is a clear difference, one variant is identified as the winner, while the others are considered defeated challengers.
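To make that end-of-test check concrete, here is a hedged Python sketch (assuming scipy; this is not the CI360 engine) of a chi-square significance test on conversion counts across three equally allocated variants. The counts are invented for illustration:

# Sketch of the end-of-test significance check for a three-variant A/B test.
# Conversion counts are invented; this is not the CI360 implementation.
import numpy as np
from scipy.stats import chi2_contingency

impressions = np.array([25_000, 25_000, 25_000])  # equal allocation per variant
conversions = np.array([1_150, 1_240, 1_375])     # hypothetical outcomes

# Build a conversions / non-conversions contingency table and test it.
table = np.array([conversions, impressions - conversions])
chi2, p_value, dof, _ = chi2_contingency(table)

print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    winner = int(np.argmax(conversions / impressions))
    print(f"Significant difference; variant {winner} wins.")
else:
    print("No significant difference detected.")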

 

In a Multi-arm Bandit test, you do not have to wait until the end of the test to determine the winning variant. Instead, a Thompson Sampling Monte Carlo simulation engine runs autonomously within the software, starting in the early stages of the test. As confidence in the estimates grows, this process identifies the likely winning variant and proactively allocates more impressions to it as the test continues. In the end, you gain more conversions, improved efficiency, and the same level of learning as you would from an A/B test.
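The Thompson Sampling idea can be sketched in a few lines of Python (an illustration with simulated conversion rates, not the CI360 engine): each variant keeps a Beta posterior over its conversion rate, each impression goes to the variant with the highest posterior draw, and traffic shifts toward the likely winner as evidence accumulates.

# Minimal Thompson Sampling sketch for a 3-variant bandit.
# True conversion rates are simulated; this is not the CI360 engine.
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.046, 0.050, 0.055]  # hypothetical variant conversion rates
successes = np.ones(3)              # Beta(1, 1) uniform priors
failures = np.ones(3)

for _ in range(75_000):
    # Draw one sample from each variant's Beta posterior and serve the best.
    draws = rng.beta(successes, failures)
    arm = int(np.argmax(draws))
    converted = rng.random() < true_rates[arm]
    successes[arm] += converted
    failures[arm] += 1 - converted

served = successes + failures - 2   # impressions per variant
print("impressions per variant:", served.astype(int))
print("posterior mean rates:", np.round(successes / (successes + failures), 4))

Run with these hypothetical rates, the loop funnels most of the 75,000 impressions to the strongest variant while still exploring the others early on, which is the efficiency gain described above.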

 

Can you explain why SAS is developing a purpose-driven Marketing AI software offering for users in the future?

That was the Ask the Expert webinar we did a few weeks ago. SAS is building a purpose-driven Marketing AI software application because we recognize increasing complexity in every aspect of how business, consumer behavior, and customer journeys come together. Historically, SAS and other third-party vendor technologies provided general data management and analytics platforms, integrating with marketing vendor tools but lacking domain specificity. To operate these platforms effectively, users needed formal training in analytics, data science, or data engineering.

 

While there remains a place for expertise and customization, where data scientists and engineers can work on high- and low-code projects using technologies like SAS Viya, we also see an acute opportunity in the marketing industry. If we can accelerate, scale, and improve the velocity of activating analytical scores for recurring marketing use cases, such as segmentation, acquisition, upsell, cross-sell, recommendations, retention, churn, next best action, next best experience, and customer lifetime value, we can efficiently address themes we hear repeatedly from brands.

 

Our approach is to create marketing domain-specific software that speaks the language of marketers rather than data scientists. For instance, a marketer wishing to produce a churn score for a campaign will find a churn recipe in the software, which automates many steps used in best-practice data science workflows. The software will not present technical jargon like support vector machines, gradient boosting, or neural networks—that is the language of data scientists and analysts.

 

In essence, SAS is developing unique software solutions to appeal to different personas. For this Marketing AI application, we are focusing on marketers who are eager for data-driven insights. Our goal is to enable them to address repetitive, straightforward use cases so they can accelerate the launch of performant campaigns and targeting tactics, while data science teams concentrate on more complex, innovative projects.

 

 

Recommended Resources

SAS Thought Leadership & SAS Community Articles on Data Science & Martech

SAS Learning Subscription for CI

SAS Support Community

 

For additional resources, please see the slide deck attached to this article.

 

Want more tips? Be sure to subscribe to the Ask the Expert board to receive follow-up Q&A, slides and recordings from other SAS Ask the Expert webinars.
