Updated: Aug 13, 2019
According to the Harvard Business Review, acquiring a new customer is 5 to 25 times more expensive than retaining an existing one. Research also shows that increasing customer retention by just 5% can increase profits by 25% to 95%.
Customers of consumer businesses such as banking, telecom, utilities, and insurance are constantly influenced by factors such as competitors' rates and promotions. These influences shape whether a customer stays or switches. A customer may call to express their doubts, but their mind is probably already made up by the time they speak to a customer service representative. The business is left reacting to the situation rather than getting ahead of it.
Imagine a world where you can predict which customer is likely to leave and why. A company’s efforts in retention could be proactive and focused.
This is where AI can help. Unlike traditional analytics, AI can learn such behavioural characteristics and patterns from your customer data and reliably predict the risk of a customer leaving.
But before we start, let's define two important terms: data and prediction.
“Data” in the context of a telecom company could be demographic data such as home address, work address, occupation, income, age, and gender. It could be purchase data such as the type of service, the value and date of purchase, the payment type, how it was purchased (e.g. online or in a shop), and the options bought. It may come from the use of services, such as the number of calls (outgoing, incoming, location, international, roaming), text messages, Wi-Fi or network usage, and total minutes. It could come from billing, such as the total bill amount, the amounts for voice calls, text, or other services (e.g. data usage or 1-800 calls), and the number of days after which payment was made. Further, customer relationship data can be gathered, such as the number of interactions through the call centre, retail shops, website, or mobile application, total complaints, complaint type, and the time taken to resolve each issue. Data can arrive as a file, from an online source, a database, or a machine controller, in formats such as text, numbers, images, audio, and video.
For the purpose of this note, an open public dataset was downloaded from Kaggle and used.
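To make the shape of such a dataset concrete, here is a minimal sketch in Python, parsed with only the standard library. The column names are hypothetical, loosely modelled on public telecom churn datasets on Kaggle; the real file has many more columns and thousands of rows.

```python
import csv
import io

# A few illustrative rows in the shape of a public telecom churn dataset.
# Column names and values are hypothetical.
raw = """customerID,gender,tenure,Contract,MonthlyCharges,Churn
0001,Female,1,Month-to-month,29.85,Yes
0002,Male,34,One year,56.95,No
0003,Male,2,Month-to-month,53.85,Yes
"""

# Each row becomes a dict keyed by column name
rows = list(csv.DictReader(io.StringIO(raw)))
print(len(rows), rows[0]["Churn"])  # 3 rows; the first customer churned
```

The "Churn" column is the historical outcome the model will learn to predict; the other columns are candidate features.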
“Prediction” is the output of a machine learning algorithm that has been trained on historical data. The algorithm builds a model which, when applied to new data, can reliably forecast the chance of an outcome, for example: what is the chance of an existing customer leaving in the next 30 days? The model learns patterns among the known variables in the historical data and uses them to estimate the unknown variable for each record in the new data.
Step 1 - Login and create a project.
The user starts by authenticating to our AI platform.
Creates a project.
Names it customer_churn_prediction.
The project can be shared with team members to collaborate.
The following steps illustrate how data is ingested, cleaned, and transformed to train a machine-learning algorithm and then deployed to production.
This can be done in fully automated mode using the AutoPilot feature, where the system does everything for you. But since the goal is to demystify AI development, the Visual ML mode was chosen, as it walks through the complete workflow step by step.
Step 2 - Ingest, clean, and transform data for machine learning.
The user selects the data file (in this case a .csv) from their desktop.
Once uploaded, the user can view the raw data. The user can also view statistics of the data, already auto-computed by the system, by clicking on the Stat icon.
The user starts defining a dataset.
Selects Target (Output) as “Churn”.
Selects the remaining variables as the Features (Input).
Think of it this way: the target is the variable the user wants to predict, and the features are the variables from which the user wants the AI to learn a pattern.
The user proceeds to the next page.
The data may have missing values, which need to be handled before training.
The user checks for any missing values in the raw data using the Missing Value Handler feature.
The system has prescribed a transformation step.
The user follows the recommendation.
The data has to be transformed into a numeric form that algorithms can understand and compute with.
The user clicks on Feature Pre-processor.
The system has prescribed all transformations. The user follows the recommendation.
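The platform applies these transformations for you, but the same ideas can be sketched in a few lines of scikit-learn (an illustration with made-up values, not the platform's internals): a median imputer fills missing numeric values, and a one-hot encoder turns categories into numbers.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder

# Hypothetical columns: tenure (numeric, with a gap) and contract type (categorical)
tenure = np.array([[1.0], [34.0], [np.nan], [45.0]])
contract = np.array([["Month-to-month"], ["One year"],
                     ["Month-to-month"], ["Two year"]])

# Fill the missing tenure with the column median
tenure_filled = SimpleImputer(strategy="median").fit_transform(tenure)

# One-hot encode the contract type so the algorithm sees numbers, not strings
contract_encoded = OneHotEncoder().fit_transform(contract).toarray()

print(tenure_filled.ravel())   # nan replaced by 34.0 (median of 1, 34, 45)
print(contract_encoded.shape)  # (4, 3): one column per contract type
```

After these two steps every value in the dataset is numeric, which is what Step 3's algorithms require.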
The user gives this transformed dataset a name, e.g. ds_all_features.
The data will be split into training and test sets.
The system recommends an 80/20 train/test split.
The user follows the recommendation.
The dataset is now ready for machine learning.
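Under the hood, this split resembles scikit-learn's `train_test_split`. A minimal sketch with toy data (not the platform's actual code):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy feature matrix and churn labels for 10 customers;
# real data has many more rows and columns
X = np.arange(20).reshape(10, 2)
y = np.array([0, 1, 0, 0, 1, 0, 1, 0, 0, 1])

# 80/20 split, as the platform recommends; stratify keeps the
# churn ratio similar in both halves
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print(len(X_train), len(X_test))  # 8 2
```

The model only ever trains on the 80% portion; the held-out 20% is reserved for the accuracy comparison in Step 3.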
Step 3 - Create and train ML models. Select the best model by testing them.
Many machine learning algorithms are available for modelling: some solve classification problems, some solve regression problems, and some use deep learning. Our AI platform ships with many algorithms and also allows you to add or create your own.
The user clicks “Add Base Model” and selects the dataset created in the previous section.
The user then selects a machine learning algorithm.
The system creates the model that the user selected.
The user can create as many models as desired by selecting other algorithms, or use the Auto ML feature, where the system lets you sit back and relax while it automatically generates many models and ranks them for you. For now, let’s create one more version of this model using a neural network classifier.
The user selects the MultiLayerPerceptronNNClassifier.
The system recommends optimal parameters for the algorithm, and the user accepts them.
Version 2 of the model is instantly created, now using the neural network classifier.
The model accuracy statistics are instantly displayed.
The actual vs. predicted results can be seen with a single click.
The user compares the models to pick one that offers the best results.
The Logistic Regression Classifier gave the best results in this case.
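The same compare-and-pick loop can be sketched with scikit-learn on synthetic data. This illustrates the mechanics of Step 3 only; which model wins depends entirely on the data, so don't read the printed scores as the platform's results.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the churn dataset
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Two candidate models, mirroring the walkthrough's versions 1 and 2
models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "MLPClassifier": MLPClassifier(hidden_layer_sizes=(16,),
                                   max_iter=2000, random_state=0),
}

# Train each candidate and compare held-out accuracy
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {acc:.3f}")
```

In practice the comparison would also look at precision, recall, and ROC-AUC, since churners are usually a minority class.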
This completes the machine learning part. The model is ready for deployment so that it can be used by other applications.
Step 4 - Check for biases, validate, and approve the model.
It’s always good practice to have someone review your work, and building AI is no different. The model has to be checked for bias, interpretability, and safe and ethical use.
The user adds comments for deployment, attaches any supporting information, and submits the request for review.
The reviewer now has access to all the documentation that was automatically generated, from the raw data through to the point at which the model was created.
When satisfied, the reviewer accepts it to be deployed to a production environment.
Step 5 - Deploy a model as an API for real-time prediction.
The model is now approved. It’s time to deploy it as an API so that new and legacy applications can use it as a prediction service.
The user selects the model to be deployed.
Clicks Add default app code. Confirms. Clicks Deploy. Confirms.
A Docker container is instantly created and the application starts running.
The user can generate an API key and access token and share them with others so that new and legacy applications can start using the prediction service that was just created.
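A client application would then call the prediction service over HTTPS. Below is a minimal sketch using only the Python standard library; the endpoint URL, key, feature names, and response shape are all hypothetical placeholders, not the platform's actual API.

```python
import json
import urllib.request

# Hypothetical endpoint and credentials; the real values come from the platform
API_URL = "https://example.com/api/v1/customer_churn_prediction/predict"
API_KEY = "your-api-key"

# One customer record, using the same feature names as at training time
payload = {"tenure": 2, "Contract": "Month-to-month", "MonthlyCharges": 53.85}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": f"Bearer {API_KEY}"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment against a live endpoint
# print(json.loads(response.read()))      # e.g. a churn probability per record
print(req.get_method(), req.full_url)
```

Any language that can make an HTTP POST request can consume the service the same way, which is what makes the API route friendly to legacy applications.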
Step 6 - See a dashboard for real-time prediction.
For each model deployed as an API, a real-time dashboard is automatically created.
The user can type input values and hit the predict button to get the model output.
The user can also connect the API results to tools such as Microsoft Power BI or Spotfire to build custom reports.
Further models can be developed to fine-tune your customer retention strategy. For example, not all customers may be worth retaining. To understand that, a Customer Lifetime Value model can be built that learns from data such as charges, margins, demographics, past purchases, and billing type to predict the total value of a customer over their expected lifetime with you.
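As a rough illustration of what such a model estimates, the simplified textbook CLV formula discounts a constant per-period margin over the customer's expected lifetime. The numbers below are made up; a trained model would predict margin and retention per customer from data rather than assume them.

```python
def customer_lifetime_value(margin_per_period, retention_rate, discount_rate):
    """Simplified textbook CLV: expected discounted margin over a customer's lifetime.

    Assumes constant per-period margin and retention rate, which is exactly
    the simplification a learned CLV model avoids.
    """
    return margin_per_period * retention_rate / (1 + discount_rate - retention_rate)

# A customer yielding $40/month margin, 95% monthly retention, 1% discount rate
print(round(customer_lifetime_value(40, 0.95, 0.01), 2))  # 633.33
```

Combining a churn probability with a value estimate like this lets the retention team rank which at-risk customers are worth a save offer.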
Another example is a model that predicts which customers will respond positively if a particular product or service is offered to them. Think of it as how Netflix suggests what you might like by understanding what others like you are watching.
Used together, these models allow your retention teams to become highly effective at what they do.
We believe that AI can help you boost customer loyalty and your profitability at the same time. It does not have to be one or the other.
Start using it to supercharge your sales and marketing: deliver the next-best offer, cross-sell and up-sell, and optimize your campaigns for specific customer characteristics. The possibilities are endless.
AI is not tough. You can do it. We can help.
Do you want to see it in action? Say hi at firstname.lastname@example.org