Introduction
If you are here, you are probably already familiar with A/B testing. Instead of defining what it is, let’s skip to the good part.
Tech Design
We wanted to build a microservice for A/B experiments that can run at scale and be used by other backend services at WinZO as well as our Android/iOS apps.
The choice of database was easy: since we didn’t require any transactional support, we went with MongoDB.
Most of our backend services run on NodeJS, which scales pretty well for us, but we wanted to test all the hype around Go. So we ran some benchmarks comparing the two, and the results for high throughput and concurrency made the hype easy to understand: a single Go application instance can handle a lot more traffic than a NodeJS instance. Choosing Go can therefore save a few 💰 on infra cost.
We went ahead with Go, and it proved to be the wiser choice thanks to the application’s spectacular performance in production. We run two application instances that handle ~35K RPM at an average latency of <2ms ⚡.
Database Schema
A simple A/B framework can be designed using three collections.
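As a minimal sketch, the three collections might look like the following Go structs. The collection and field names here are illustrative assumptions, not our exact production schema; only UserExperiments is referenced by name later in this post.

// Experiments: one document per experiment.
type Experiment struct {
	ID     string `bson:"_id"`
	Name   string `bson:"name"`             // e.g. "home_page_design"
	Status string `bson:"status"`           // e.g. "running" or "closed"
	Winner string `bson:"winner,omitempty"` // winning variant once closed
}

// Variants: one document per variant of an experiment.
type Variant struct {
	ExperimentID string `bson:"experimentId"`
	Name         string `bson:"name"`    // e.g. "old" or "new"
	Rollout      int    `bson:"rollout"` // percentage weight, e.g. 95
}

// UserExperiments: the sticky experiment-variant pair saved per user.
type UserExperiment struct {
	UserID       string `bson:"userId"`
	ExperimentID string `bson:"experimentId"`
	Variant      string `bson:"variant"`
}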
User Sampling Logic
Suppose E1 is an experiment with 2 variants, A (WA%) and B (WB%); users are then assigned variant A or B in a weighted round-robin fashion. So on average, for a set of 100 API calls for variant assignment, WA users will be assigned variant A and WB will get variant B. To achieve this, API servers maintain an in-memory list of variants of an experiment.
- Insert A and B into a list WA and WB times respectively and then randomize the list. Let’s call this list L (a sketch of this step appears after this list).
- Initialize currentIndex=0 for this list.
- Whenever there is a request to assign a new variant for the experiment, the system assigns the variant at L[currentIndex] and increments currentIndex:
currentIndex = (currentIndex + 1) % 100
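For illustration, here is a sketch of how that list could be built and shuffled in Go. buildVariantList is a hypothetical helper, and the rollout weights are assumed to sum to 100.

import "math/rand"

// buildVariantList inserts each variant name into the list as many
// times as its rollout weight, then randomizes the order.
func buildVariantList(weights map[string]int) []string {
	list := make([]string, 0, 100)
	for variant, weight := range weights {
		for i := 0; i < weight; i++ {
			list = append(list, variant)
		}
	}
	rand.Shuffle(len(list), func(i, j int) {
		list[i], list[j] = list[j], list[i]
	})
	return list
}

// For experiment E1: L := buildVariantList(map[string]int{"A": 60, "B": 40})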
The index increment is synchronized per experiment; we used Go’s sync.Mutex to achieve this.
import "sync"var once sync.Once
var RoundRobinIndexCounter *RoundRobinIndexCounterTypetype RoundRobinIndexCounterType struct {
mu sync.Mutex
v map[string]int
len int
}// return the current index and increment index. key will be experimentID
func (counter *RoundRobinIndexCounterType) GetIndex(key string) int {
// Lock so only one goroutine at a time can access the map c.v.
counter.mu.Lock()
defer counter.mu.Unlock()
currentIdx := counter.v[key]
// reset counter if it crosses max length length
if nextIdx := counter.v[key] + 1; nextIdx >= counter.len {
counter.ResetCounter(key)
} else {
counter.v[key] = nextIdx
}
return currentIdx
}func (counter *RoundRobinIndexCounterType) ResetCounter(key string) {
counter.v[key] = 0
}func InitializeRoundRobinIndexCounter() {
once.Do(func() {
c := RoundRobinIndexCounterType{v: make(map[string]int), len: 100}
RoundRobinIndexCounter = &c
})
}
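Putting the pieces together, assigning a variant reduces to an index lookup. A sketch, where variantLists is a hypothetical in-memory map from experimentID to its shuffled list L:

// AssignVariant picks the next variant for an experiment in
// weighted round-robin order.
func AssignVariant(experimentID string, variantLists map[string][]string) string {
	list := variantLists[experimentID]
	return list[RoundRobinIndexCounter.GetIndex(experimentID)]
}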
Food for thought: sync.Mutex will ensure that variants are assigned in the correct rollout distribution within one application server. But most production applications run on multiple servers; will this approach ensure that variants are correctly distributed across servers?
One important point to note here is that the variant assignment logic doesn’t depend on the userId, so the same user might get a different variant on each request, which would be a bad user experience and would also corrupt the experiment data. To resolve this, we have a UserExperiments collection where we save the experiment-variant pair for a user when they are assigned to the experiment the very first time. On every subsequent request, the user will always get the same variant.
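A sketch of that read-through flow is below; findUserVariant and saveUserVariant are hypothetical helpers standing in for the MongoDB queries on the UserExperiments collection.

// GetVariantForUser returns the stored variant if the user was already
// sampled into the experiment; otherwise it assigns one via weighted
// round-robin and persists the pair so future requests are sticky.
func GetVariantForUser(userID, experimentID string, variantLists map[string][]string) (string, error) {
	if variant, found := findUserVariant(userID, experimentID); found {
		return variant, nil
	}
	variant := AssignVariant(experimentID, variantLists)
	if err := saveUserVariant(userID, experimentID, variant); err != nil {
		return "", err
	}
	return variant, nil
}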
Lifecycle of an Experiment
Let’s assume we have a new app home page design and we want to test it against the old one before rolling it out to all users. To run both designs in an A/B experiment, we can create a new experiment, home_page_design, with two variants: old (control group with 95% rollout) and new (5% rollout). With this configuration, 95% of users on the app will see the old home page and 5% will see the new one. We can run this experiment for some time and gradually increase the rollout of the new page while monitoring business metrics to see which of the two versions performs better. Once we have enough data to say with conviction which variant is better, we declare the better variant the winner and mark the experiment as closed. Once the experiment is closed, all users get to see the winner variant.
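In code, resolving a variant over this lifecycle might look like the following sketch, reusing the assumed Status and Winner fields from the schema sketch above.

// ResolveVariant returns the winner for a closed experiment; while the
// experiment is running, it falls back to the sticky per-user assignment.
func ResolveVariant(userID string, exp Experiment, variantLists map[string][]string) (string, error) {
	if exp.Status == "closed" {
		return exp.Winner, nil
	}
	return GetVariantForUser(userID, exp.ID, variantLists)
}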
How WinZO uses A/B testing
A/B testing is currently used by product and revenue managers at WinZO to take data-driven, customer-centric approaches when pushing different product and business initiatives on the application. In addition to running A/B experiments on visual and experience elements in the app, we also run experiments on the backend, testing different algorithms to understand which approach elevates the overall user experience the most.
Business Impact of A/B:
1. Faster time to conclusion: With the help of the A/B testing module, WinZO PMs are able to assess the impact of different variations of features at a faster rate by creating control groups, and close on the best-performing versions sooner. Thanks to integrated analytics tracking, the time taken to conclude an experiment with statistically significant results has been reduced by 60%.
2. Capability to measure the impact of individual features in an Android release: With A/B testing, the majority of new experience/visual elements are pushed to users via the experiment approach. This helps gauge the impact of each release feature by feature rather than the release as a whole, providing a more in-depth view into the performance of the release and allowing appropriate measures to optimize it.
3. Reducing risks while launching new features: The ability to create control groups and test new developments on a small, focused audience has helped us get early insights from users and reduces the risk of pushing out a “bad” product to the entire user base. It helps close feedback loops at the early stage of user adoption and push out better products.
As India’s Largest Interactive Entertainment company, WinZO envisions establishing India as the next Gaming Superpower of the world, fuelled by revolutionary innovations and path-breaking engineering. Resonate with our vision? Visit our Careers page ( https://winzogames.hire.trakstar.com/) and reserve a seat on our Rocketship.