Leveraging Machine Learning to Analyze Impact of Promotional Campaigns on Sales
- Analytics & Modeling - Machine Learning
- Retail
- Sales & Marketing
- Experimentation Automation
- System Integration
- Training
deepsense.ai, an AI-focused software services company, was engaged by a leading Central and Eastern European food company to use machine learning to analyze the impact of promotional campaigns on sales. The food company runs various promotional campaigns for different products and wanted a model that predicts daily sales of a given product during a promotional campaign. The challenge lay in the complexity of the data: a large corpus of data sources, hundreds of different products, many contractors, thousands of contractors’ clients, different promotion types, varying promotion periods, overlapping promotions, and competitors’ actions. It was also difficult to determine whether a sales increase was caused by any single promotion, by synergy between promotions, or whether it would have occurred without any campaign at all.
The customer in this case study is a leading Central and Eastern European food company. They run various promotional campaigns for different products, including jams, juices, and pickles. Some of these campaigns are dedicated to the main contractors, some to contractors’ clients, and some are aimed directly at consumers. The company wanted a model that predicts the number of sales per day for a given product under a promotional campaign, thereby quantifying the campaign’s impact on that product’s sales. They sought the expertise of deepsense.ai to build this model and analyze the effectiveness of their promotional campaigns.
To solve this complex problem, deepsense.ai's team decided to build a separate model for each product, contractor, and, in some cases, client type, leading them to model more than 7,000 separate cases. This required training more than 120,000 models, which presented a challenge in itself. The team needed an efficient solution for experiment tracking, visualization and dashboarding, and metadata storage. They chose Neptune, a tool they were already familiar with, to manage these issues. Neptune was integrated into the project’s codebase and used for experiment tracking, storing model artifacts, and visualizations. This allowed the team to focus on the problem at hand rather than on managing the models.
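The per-segment approach described above can be sketched in a few lines. This is a minimal illustration, not deepsense.ai's actual pipeline: the data records are invented, and each "model" here is just a promo/no-promo mean baseline standing in for the real regressors, with one such model fitted per (product, contractor) segment.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical daily sales records: (product, contractor, promo_active, units_sold)
records = [
    ("jam", "contractor_a", True, 120),
    ("jam", "contractor_a", True, 130),
    ("jam", "contractor_a", False, 80),
    ("juice", "contractor_b", True, 200),
    ("juice", "contractor_b", False, 150),
]

# Group records by (product, contractor) segment.
grouped = defaultdict(list)
for product, contractor, promo, sold in records:
    grouped[(product, contractor)].append((promo, sold))

# Fit one "model" per segment: average sales with and without a promotion.
models = {}
for segment, rows in grouped.items():
    promo_sales = [s for p, s in rows if p]
    base_sales = [s for p, s in rows if not p]
    models[segment] = {
        "promo": mean(promo_sales) if promo_sales else None,
        "baseline": mean(base_sales) if base_sales else None,
    }

def predict(product, contractor, promo_active):
    """Predict daily sales for a segment, conditional on promotion status."""
    m = models[(product, contractor)]
    return m["promo"] if promo_active else m["baseline"]

# Estimated promotional uplift for jam at contractor_a:
uplift = predict("jam", "contractor_a", True) - predict("jam", "contractor_a", False)
print(uplift)
```

In the real project each segment's model would be a proper regressor trained on many features (promotion type, period, overlaps), and each training run would be logged to Neptune for tracking, which is what makes fitting 120,000+ models manageable.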