Case Studies
Our Case Study database tracks 18,926 case studies in the global enterprise technology ecosystem.
Filters allow you to explore case studies quickly and efficiently.
Filters cover 15 Technologies, 42 Industries, 13 Functional Areas, 127 Use Cases, 9 Services, and 737 Suppliers.
Nigerian Bank Reduces Risk, Cost with ML Driving Decisions
DataRobot
Carbon Digital Bank, a financial institution serving the underserved African market, needed a way to quickly determine credit risk for individuals with no prior credit history. The bank also wanted to empower its data science team to take on additional business challenges. The bank had committed to a data-first strategy and looked to AI as an integral part of its decision-making. However, assessing customers' creditworthiness was a major challenge: the bank needed to expedite decisions on hundreds of thousands of loan applications every month.
Continuous Compliance in CI/CD
A leading US-based fintech company with a development center in India was facing difficulties in monitoring process compliance across its numerous ongoing projects. The company lacked centralized visibility to assess compliance across enterprise projects. Manual tracking of every commit, pull request (PR), and peer approval was untenable. It was also challenging to track whether developers used the predefined tools and procedures for version control, source code management, peer reviews, and so on.
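The core of such a compliance check can be automated against the source-control provider's API. The sketch below is illustrative only (it is not the company's actual tooling) and assumes GitHub-hosted repositories; the repository name and token are placeholders. It flags merged pull requests that never received a peer approval:

```python
# Illustrative compliance check: find merged PRs with no approving review,
# via the public GitHub REST API. Repo and token are placeholders.
import requests

API = "https://api.github.com"
REPO = "example-org/example-repo"                 # hypothetical repository
HEADERS = {"Authorization": "token YOUR_TOKEN"}   # placeholder credential

def merged_prs_without_approval(repo: str) -> list[int]:
    """Return the numbers of merged PRs that have no APPROVED review."""
    unapproved = []
    prs = requests.get(f"{API}/repos/{repo}/pulls",
                       params={"state": "closed", "per_page": 50},
                       headers=HEADERS).json()
    for pr in prs:
        if not pr.get("merged_at"):
            continue  # closed without being merged; not a compliance event
        reviews = requests.get(
            f"{API}/repos/{repo}/pulls/{pr['number']}/reviews",
            headers=HEADERS).json()
        if not any(r["state"] == "APPROVED" for r in reviews):
            unapproved.append(pr["number"])
    return unapproved

if __name__ == "__main__":
    print("Merged without approval:", merged_prs_without_approval(REPO))
```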
DORA Metrics: Ensuring DevOps Success
The company, a leading media and entertainment entity with a presence in over 150 countries, was facing challenges in managing its applications, including a newly launched subscription-based streaming application. The company's internal DevOps team was responsible for managing these applications, but the company wanted to improve visibility into performance, identify areas for improvement, and gauge customer experience. However, they lacked a standard framework to measure DevOps success and relied on monthly manual reports to understand the team's health and performance. This approach had limitations in analyzing DevOps data and metrics. Furthermore, frequent bugs and a longer time to resolve issues led to a poor customer experience.
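For context, the DORA framework distills DevOps success into four metrics: deployment frequency, lead time for changes, change failure rate, and mean time to restore. A minimal sketch of how these could be computed from deployment records (hypothetical data, not the company's pipeline):

```python
# Minimal sketch of the four DORA metrics over sample deployment records.
# The records and observation window are made up for illustration.
from datetime import datetime, timedelta
from statistics import mean

deployments = [  # (commit_time, deploy_time, caused_failure)
    (datetime(2023, 5, 1, 9), datetime(2023, 5, 1, 17), False),
    (datetime(2023, 5, 2, 10), datetime(2023, 5, 3, 12), True),
    (datetime(2023, 5, 4, 8), datetime(2023, 5, 4, 11), False),
]
restore_times = [timedelta(hours=4)]  # time to recover from each failure

days_observed = 7
deployment_frequency = len(deployments) / days_observed        # deploys/day
lead_time = mean((d - c).total_seconds() / 3600                # hours
                 for c, d, _ in deployments)
change_failure_rate = sum(f for *_, f in deployments) / len(deployments)
mttr = mean(t.total_seconds() / 3600 for t in restore_times)   # hours

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Lead time for changes: {lead_time:.1f} h")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Mean time to restore: {mttr:.1f} h")
```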
A Leading Wireless & Telecom Services Provider Reduced Annual Call Center Cost by $5 Million
A leading U.S.-based wireless and telecommunications service provider wanted to improve call center performance, increase customer satisfaction, and gain greater insight into the activities of its call center representatives. To achieve this, the Fortune 50 company wanted to analyze the desktop activities of its call center representatives around the clock, monitoring them in real time while representatives were on duty. From an operational perspective, this meant creating a centralized system where operations personnel could track idle time, which websites were used and for how long, Outlook usage, and the various applications used on the desktop. The client also wanted to track desktop activities when agents were on a call, off a call, and on a call with the customer on hold.
Fortune 100 telecommunications company seamlessly migrates from Teradata to Amazon Redshift
The customer, a US-based Fortune 100 broadband connectivity company and cable operator serving more than 30 million customers, was facing several technical and business challenges with their existing data workflow. They received data from multiple sources that was fed into an SFTP server. After ETL was performed, the data was read by an Informatica workload and persisted to their Teradata data warehouse. Business analysts then accessed this data and ran queries to gather insights. The client wanted to make a strategic shift to the cloud to enhance scalability, reduce costs, improve query performance, realize a unified view, simplify management, seamlessly integrate with other cloud-native services, and automate workflows for CI/CD.
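A common pattern for this kind of migration, and one way to picture the target workflow, is to stage extracted tables in Amazon S3 and bulk-load them into Redshift with the COPY command. The sketch below assumes Parquet-formatted exports; the cluster endpoint, bucket, and IAM role are placeholders, and this is not the client's actual migration code:

```python
# Illustrative lift-and-shift load step: bulk-copy staged S3 files into
# Amazon Redshift. All connection details and names are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="admin", password="...")

copy_sql = """
    COPY analytics.call_records
    FROM 's3://example-bucket/teradata-export/call_records/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # Redshift ingests the staged files in parallel
print("Load complete")
```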
DevOps 360
A leading media and entertainment company with a presence in over 150 countries and a headcount of over 3,000 employees faced several challenges in managing their applications. They had recently launched a subscription-based streaming application, in addition to their existing apps that required frequent updates. Their internal DevOps team was responsible for managing these applications, but the company wanted to improve visibility into performance, identify areas for improvement, optimize costs, and assess customer experience. They lacked a standard framework to measure DevOps success and relied on monthly manual reports to understand the health and performance of the team. They also faced limitations in analyzing the DevOps data and metrics. Frequent bugs and a longer time to resolve issues led to a poor customer experience.
Real-time Multi-lingual Classification and Sentiment Analysis of Text
The client, a major telecom company providing nationwide telecom services, needed a system that could perform real-time, multi-lingual classification and sentiment analysis of text data. They were looking for a solution that could store, index, and query petabytes (PB) of data at very high throughput. The critical requirements included the ability to ingest and parse a high volume of data, 250 million records (15 TB) per day, of varied types such as weblogs, email, chat, and files. They also needed to apply real-time multi-lingual classification and sentiment analysis with very high accuracy (four nines), store metadata and raw binary data for querying, and meet a query SLA of 5 seconds on cold data.
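Those figures imply a demanding sustained ingest rate; a quick back-of-the-envelope calculation makes the scale concrete:

```python
# Sustained throughput implied by the stated volume:
# 250 million records (15 TB) per day.
records_per_day = 250_000_000
terabytes_per_day = 15
seconds_per_day = 86_400

print(f"{records_per_day / seconds_per_day:,.0f} records/s sustained")
print(f"{terabytes_per_day * 1e6 / seconds_per_day:,.0f} MB/s sustained")
# Roughly 2,894 records/s and ~174 MB/s, around the clock.
```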
Hygiene technologies leader Ecolab brings data science to production with Microsoft Azure and Iguazio
Iguazio
Ecolab, a global leader in water, hygiene, and infection prevention solutions, wanted to develop predictive risk models for water systems, industrial machinery, and other applications. The company's machine learning journey began in 2016 with a project to develop bacterial growth risk models using existing sensor data. However, the process of building, deploying, and maintaining machine learning models in production was complex and challenging. The company needed a data science collaboration platform that would bring together its large, geographically dispersed team while efficiently using cloud computing resources. The deployment of machine learning models at Ecolab followed a 'rewrite-and-deploy' pattern, where model development occurred independently of the application developers. This approach led to deployment timelines exceeding 12 months on average.
EMBL Enhances Microbiology Methods with Deep Learning
Researchers at EMBL, Europe’s flagship laboratory for the life sciences, were looking to enhance traditional microbiology methods with Deep Learning. Their goal was to reconstruct the complex biological phenomena that underpin the life cycle of cells. This was a significant challenge due to the complexity of cell life cycles and the limitations of traditional microbiology methods. EMBL operates across six sites in Europe and has more than 80 independent research groups covering the spectrum of molecular biology. The challenge was to develop a solution that could accurately model the lifecycle of cells and provide insights into complex biological processes.
Epona Science: Revolutionizing Horse Racing with Pachyderm
Epona Science is a company that specializes in buying, breeding, and identifying the best racehorses in the world. The racehorse business is a traditional industry where buyers often rely on pedigree or trusted breeders' instincts to choose horses. However, Epona Science believes that these are not the best predictors of success. They aim to revolutionize the industry by using machine learning, statistical analysis, and science. They have discovered that factors such as the horse's entire genetic profile and lineage, its height and gait, and even the size of its heart can make a significant difference in its performance. However, gathering all this data, cleaning it, standardizing it, and getting it into a consistent format that their machine learning models can train on is a significant challenge. The data comes from various sources worldwide, including x-rays, genetic profiles, and track records from previous races.
RTL Nederlands Relies on Pachyderm’s Scalable, Data-Driven Machine Learning Pipeline to Make Broadcast Video Content More Discoverable
RTL Nederlands, part of Europe’s largest broadcast group, wanted to use artificial intelligence (AI) to make video content more valuable and discoverable for millions of subscribers. The company broadcasts to millions of daily TV viewers, along with delivering streaming content that garners hundreds of millions of monthly views online. One of the key growth metrics for RTL Nederlands is viewership, but optimizing the value and discoverability of video assets is an extremely labor-intensive endeavor. That makes it ripe for automation, and the team applied machine learning to optimize key aspects of its video platform, like creating thumbnails and trailers, picking the right thumbnail for those trailers, and inserting ad content into video streams.
Top Healthcare Provider Derives Actionable Medical Insights from Terabytes of Clinical Data Using Pachyderm’s Scalable, Data-Driven Machine Learning Pipelines
One of the top for-profit managed healthcare providers in the U.S., with affiliate plans covering one in eight Americans for medical care, was looking to leverage artificial intelligence (AI) to harvest long-term insights and make much more detailed health predictions from claims and electronic health record data. The data store is massive, with more than 50 terabytes of data covering the company’s tens of millions of members across the U.S. They were mining this data to determine treatment efficacy based on past outcomes given particular patient characteristics. However, getting these potential insights into the hands of healthcare providers was a challenge. It’s one thing to have small-scale implementations working in a lab; it’s another to deliver machine learning at scale. When the engineering lead joined the AI team, they had a very complicated data delivery pipeline based on Apache Airflow. While it worked, it wouldn’t scale beyond a single pipeline or container instance at a time.
How Pachyderm Is Used to Support Adarga in Analyzing Huge Volumes of Information
Adarga is an AI software development company that provides organizations with the capability to build and maintain a dynamic intelligence picture. Its AI analytics platform processes huge volumes of unstructured data, such as reports, global news feeds, presentations, videos, audio files, etc., at a speed unachievable by humans alone. The software extracts the essential facts in context and presents them in a comprehensible manner to unlock actionable insights at speed and enable more confident decision-making. However, the company faced challenges in developing, training, productionalizing, and scaling the necessary data models. They needed a solution that could drive data consistency, understand lineage, and enable model scaling.
How SeerAI Delivers Spatiotemporal Data and Analytics with Pachyderm
SeerAI’s flagship offering, Geodesic, is the world’s first decentralized platform optimized for deriving insights and analytics from planetary-scale spatiotemporal data. Working with spatiotemporal data is a challenge. Because it concerns planet-wide questions, the data sets are massive in scale, often entailing petabytes of imagery. The data itself can come from different sources, requiring the ability to load and manage it under a decentralized data model. Finally, that data is generally heterogeneous and unstructured, and thus notoriously complex and difficult to deal with. SeerAI designed Geodesic to constantly grow in knowledge and data relationships so that it can eventually answer almost any question. Controlling the data ingest, ML job scheduling, model interaction, and data versioning can be extremely complex at this scale.
Risk Thinking: How Riskthinking.AI Uses Machine Learning to Bring Certainty to an Uncertain World
Riskthinking.AI, a company specializing in measuring the financial risk of climate change, was in the early phases of ramping up their internal AI infrastructure when they took on the CovidWisdom project. The project was a response to a call from the Canadian government to assess the economic impact of major pandemic policies. The challenge was to predict the best way to implement societal-level responses like lockdowns with the minimum amount of damage to daily life and the economy. However, the team realized they had experts in predicting the future but not in building AI architecture. They had data scientists working on laptops, pulling and pushing data over VPNs to remote work spots, and even building their own Docker containers. They needed to move from ad hoc workflows to MLOps.
Autonomous Vehicle Company Wayve Ends GPU Scheduling ‘Horror’
Wayve, a London-based company developing artificial intelligence software for self-driving cars, was facing a significant challenge with their GPU resources. Their Fleet Learning Loop, a continuous cycle of data collection, curation, training of models, re-simulation, and licensing of models before deployment into the fleet, was consuming a large amount of GPU resources. However, despite nearly 100 percent of GPU resources being allocated to researchers, less than 45 percent of those resources were actually utilized. This was because GPUs were statically assigned to researchers: when researchers were not using their assigned GPUs, others could not access them. This created the illusion that GPUs for model training were at capacity even as many GPUs sat idle.
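The gap between allocation and utilization is the textbook argument for pooled GPU scheduling. The toy simulation below (not Wayve's workload or Run:ai's actual scheduler, with made-up demand figures) contrasts statically owned GPUs with a shared pool under bursty demand:

```python
# Toy comparison of static GPU ownership vs. a shared pool.
# Demand figures are invented purely for illustration.
import random

random.seed(0)
GPUS, RESEARCHERS, HOURS = 8, 8, 1000
# Bursty demand: each hour a researcher wants 0, 1, or 3 GPUs.
demand = [[random.choice([0, 0, 1, 3]) for _ in range(RESEARCHERS)]
          for _ in range(HOURS)]

# Static: each researcher owns exactly one GPU; excess demand is dropped,
# and an owner's idle GPU cannot be borrowed by anyone else.
static_busy = sum(sum(min(k, 1) for k in hour) for hour in demand)

# Pooled: a scheduler hands any free GPU to any pending request.
pooled_busy = sum(min(sum(hour), GPUS) for hour in demand)

print(f"Static utilization: {static_busy / (GPUS * HOURS):.0%}")
print(f"Pooled utilization: {pooled_busy / (GPUS * HOURS):.0%}")
# Same total hardware and demand; pooling serves the bursts that
# static ownership turns away while other GPUs sit idle.
```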
How one company went from 28% GPU utilization to 73% with Run:ai
The company, a world leader in facial recognition technologies, was facing several challenges with their GPU utilization. They were unable to share resources across teams and projects due to static allocation of GPU resources, which led to bottlenecks and inaccessible infrastructure. The lack of visibility into and management of available resources was slowing down their jobs. Despite the low utilization of existing hardware, visibility issues and bottlenecks made it seem like additional hardware was necessary, leading to increased costs. The company was considering an additional GPU investment with a planned hardware purchase cost of over $1 million.
London Medical Imaging & AI Centre Speeds Up Research with Run:ai
The London Medical Imaging & AI Centre for Value Based Healthcare was facing several challenges with its AI hardware. Total GPU utilization was below 30%, with significant idle periods for some GPUs despite demand from researchers. The system was overloaded on multiple occasions, with more GPUs needed for running jobs than were available. Poor visibility and scheduling led to delays and waste: larger experiments requiring many GPUs were sometimes unable to begin because smaller jobs using only a few GPUs blocked them from acquiring the resources they required.
How Exscientia reduced the time it takes to monitor and prepare models from days to hours
Exscientia plc is an AI-powered drug discovery organization that relies heavily on the accuracy and stability of its models. The company's model deployment process is unusual in that it is entirely automated, with thousands of models being delivered, monitored, and retrained without human interaction. However, as Exscientia expanded its reach and goals, it needed an enterprise-grade, scalable solution. The team was looking for additional operational efficiencies and other ways to debug and stabilize models. The existing open-source deployment solution and inference platform were no longer sufficient for their growing needs.
How Capital One reduced model deployment time from months to minutes
Capital One, a leading US retail bank, was facing significant delays in their machine learning (ML) deployment pipeline. The data science teams were heavily reliant on the engineering department to test, deploy, or upgrade models. This resulted in month-long lag times and the need to redeploy entire applications for updates to existing models. Scaling up projects was only possible by using more developer resources and people power, which further strained the already overstretched teams. The bank needed a robust, scalable, and flexible approach to the deployment of ML models to support its millions of customers and users of their mobile banking app.
Noitso accelerates model deployment from days to hours
Noitso, a company based in Copenhagen, Denmark, specializes in data science, data collection, and predictive analysis. They provide their customers with credit ratings, scorecards, and risk profiles using data science and AI. However, they faced challenges in deploying their models. The models took a long time to get to production and lacked explainability and monitoring. Unable to determine when models needed to be retrained, they retrained after a fixed period of time rather than when necessary, as this was their only way to maintain accurate predictions and guard against issues such as data drift.
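Drift-triggered retraining, the general technique implied here, can be sketched in a few lines. The example below uses a two-sample Kolmogorov-Smirnov test on synthetic score distributions; it is not Noitso's actual setup:

```python
# Minimal sketch of drift-triggered retraining: compare recent production
# feature values to the training distribution and retrain on drift.
# The distributions below are synthetic stand-ins.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_scores = rng.normal(600, 50, 10_000)  # scores seen at training time
live_scores = rng.normal(630, 50, 2_000)    # recent production traffic

stat, p_value = ks_2samp(train_scores, live_scores)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}): retrain model")
else:
    print("No significant drift: keep current model")
```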
How Covéa plan to save £1 million detecting fraudulent insurance policies
Covéa Insurance Plc, the UK underwriting business of leading French mutual insurance group Covéa, serves two million policyholders and generated over £725.7 million in premiums in 2020. The company is facing a significant challenge in the form of insurance fraud, which is costing the industry over £1bn a year. One of the most complex and hard-to-detect types of fraud they face is ghost broking, in which an intermediary purchases a policy for a customer using false or stolen information to reduce the premiums. In the event of a claim, these policies would be legal and Covéa would have to pay out. As Covéa is mainly an underwriter, they often do not deal with the policyholder directly, so they had less data to work with to detect fraud. The call handling team were doing manual searches and checks on over two million new quotes per day, a scale far too large to deal with in an efficient timeframe.
GroundLink uses Appsee to increase revenues and deliver better UX
GroundLink, a global provider of ground travel services, was seeking ways to enhance the customer experience and improve the features and functionality of its mobile application. The company wanted to gain a deeper understanding of user behavior within the app to make informed decisions about development strategy. The challenge was to find a solution that could provide detailed analytics and insights into user behavior, enabling GroundLink to detect shifts in user experience and adjust their strategy accordingly.
JoyTunes uses Appsee to improve retention, usability and conversions
JoyTunes is an innovative company that uses gamification and audio technology to revolutionize the way people learn and practice music. The company wanted to better understand their users, eliminate usability issues within the app, and increase user retention and in-app conversions. They were looking for a solution that could provide them with real insights into their users' in-app behavior and help them measure, understand, and improve the user experience.
Aviso’s Conversational Intelligence Provided Robust X-Ray For NetApp Sellers To Increase Customer Engagement
In 2020, NetApp was seeking a tool to streamline the input, review, and reporting of forecast calls at both the individual and team level. The company wanted an AI solution to enhance customer engagement and foster the internal training and growth of their sales reps. The challenges faced included a lack of insights into customer conversations, over-reliance on spreadsheets, ad-hoc deal reviews, ineffective CRM use leading to poor customer engagement, and no AI in the training of sales teams.
Aviso Drives Digital Sales Transformation At Honeywell With Conversational Intelligence, Persona Based Nudges, And Custom Solution To Build An Integrated CRM Platform
In 2018, Honeywell embarked on a strategic initiative to implement a global design model (GDM) for its CRM solution. This initiative was a result of a high-level blueprint of recommendations, which were gathered from over a hundred Honeywell employees across various functions. The next step for Honeywell was to find a suitable sales forecasting tool. The goal was to improve sales forecast accuracy across business units, enable informed decision-making, and predict short and long-term performance. However, Honeywell faced several challenges. These included disconnected CRM instances maintained across business units, low accuracy in predicting short and long-term deals and opportunity performance, lack of real-time deal insights, and overspending on underutilized CRM licenses and ineffective call recording tools.
Aviso Enabled Ivanti With “Single Pane of Glass” For Deal Intelligence And Fueled Organic Growth And M&A
Ivanti, a software company, was facing several challenges with its sales business processes. The company had multiple disconnected instances of Salesforce CRM, which made it difficult to streamline its operations. The manual forecast rollup with MS Excel and PowerPoint was time-consuming and inefficient. The data was scattered across CRM and Excel spreadsheets, making it hard to get a comprehensive view of the business. The company also lacked insights into opportunities and activities, and had complex hierarchy requirements. These challenges were hindering Ivanti's growth and efficiency.
Mass Appeal
Massdiscounters, one of the largest value retailers in South Africa, is expanding its presence in sub-Saharan Africa, including Namibia, Botswana, Zambia, and Mozambique. As part of its aggressive corporate growth strategy, Massdiscounters recognized the need to re-evaluate its best-of-breed technology strategy. The company recently added food to its merchandise assortment, increasing market share but also adding to the complexity of Massdiscounters’ supply chain model. To support this move, Massdiscounters invested heavily in new physical infrastructure, including three large distribution centers, and decided to move from a best-of-breed to a best-of-suite strategy, selecting JDA’s demand management, replenishment, and promotions optimization solutions.
Recipe for Success
Butterball, a leading producer of turkey products, faced significant supply chain challenges due to the seasonal and promotion-driven nature of many of its products, as well as the fact that every product is date-sensitive. The company needed to forecast with a very high level of accuracy and to improve its date-sensitive inventory management. Meeting retailers’ differing service-level expectations and product freshness requirements added a layer of planning complexity to Butterball’s supply chain that, if not managed well, could lead to obsolete inventory and unhappy customers.
Driving Out the Competition
NFT, a leading provider of time-sensitive, chilled food and drink logistics services in the UK, was facing operational challenges that were preventing it from optimizing performance. The company's traditional pick-face approach was leading to underutilization of its 220,000 square-foot warehouse, immense SKU proliferation, and almost unmanageable replenishments and put-backs. Additionally, the way manufacturers produced their products was creating issues with sell-by dates and misrotation, leading to waste, extra administrative burden, fines, claims, penalties, returns, and rejections. NFT wanted to protect its business reputation, preserve customer sales, and improve performance.