AI-Driven Ecommerce Growth: Dianthus Case Study
Dianthus, a leader in scaling Direct-to-Consumer (D2C) brands, faced a significant challenge in creating unique visual marketing assets for ecommerce product marketing. The existing process was cumbersome and manual, making it difficult to produce assets suitable for social sharing.

The proposed solution was a sophisticated AI computer-vision system that generates unique photographs featuring computer-generated human or animal models against naturalistic backgrounds, with the D2C product incorporated into the scene. Producing believable AI-generated product shots with digital influencers, however, required a purpose-built machine learning and data pipeline spanning multiple processes: background generation, identity generation, 3D rendering, human positioning, product positioning, and harmonization of all elements. The pipeline also needed to carry results through successive stages of refinement to deliver a finished photo attractive and natural-looking enough to share on social media.
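The staged pipeline described above can be sketched as a simple chain of refinement steps. This is a minimal illustration only: the stage names follow the text, but the function bodies, the scene values, and the `run_pipeline` helper are hypothetical placeholders, not Dianthus's implementation.

```python
# Hypothetical sketch of a staged generation pipeline: each stage takes the
# working "scene" dict and returns a refined copy. Stage names mirror the
# processes named in the text; the bodies are illustrative placeholders.

def generate_background(scene):
    return {**scene, "background": "naturalistic-outdoor"}

def generate_identity(scene):
    return {**scene, "model": "cg-human-01"}

def position_human(scene):
    return {**scene, "model_pose": (120, 340)}

def position_product(scene):
    return {**scene, "product_pos": (150, 360)}

def harmonize(scene):
    return {**scene, "harmonized": True}

# Stages run in order, so each refinement builds on the previous one.
PIPELINE = [generate_background, generate_identity,
            position_human, position_product, harmonize]

def run_pipeline(product, stages=PIPELINE):
    """Carry a scene through each refinement stage in sequence."""
    scene = {"product": product}
    for stage in stages:
        scene = stage(scene)
    return scene

result = run_pipeline("sku-123")
```

Structuring the stages as a list of functions makes it easy to insert additional refinement passes, or to stop early and inspect intermediate results, which matches the text's requirement that results progress through stages of refinement.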
Dianthus is the world's only AI-first ecommerce company, specializing in applying cutting-edge AI technology to grow Direct-to-Consumer (D2C) brands. Dianthus leverages predictive models to identify customer needs and behaviors at scale, and implements customer-experience and service solutions that feed real-time customer data into design, content, and marketing. The company is so confident in its ability to grow D2C brands that it acquires, or takes an ownership stake in, every brand it works with.
Dianthus partnered with Neu.ro to develop AI-generated 'influencer' product shots for social sharing. The engagement began with research into techniques for dynamically composing unique images of people, products, and backgrounds. Approaches tested during the experimental phase included:
- CLIP-guided optimization
- ESRGAN-based refinement
- CycleGAN
- MUNIT
- Diffusion model reprojection
- StyleGAN3 reprojection
- StyleGAN3 face recovery
- Face swap
- Contrastive Unpaired Translation

Tooling included the Neu.ro MLOps Platform, PyTorch, Tensorboard, MLFlow, and the Numpy/Scikit libraries. Real image datasets from commercially available sources were combined with synthetic images generated via Synthesis API using different camera views and configurations.

The resulting solution incorporates technologies for human positioning, camera positioning, appropriate sizing of foreground and background elements, and improved product harmonization, along with numerous optimizations that speed up the pipeline and limit the human input required.
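As one illustration of what harmonizing a composited product can involve, the toy sketch below nudges a pasted foreground's mean brightness toward the background's so the insert looks less out of place. The `harmonize_foreground` function and its `strength` parameter are assumptions for the sake of the example, not the technique Dianthus and Neu.ro actually used; pixels here are plain 0-255 grayscale values in Python lists.

```python
# Toy harmonization sketch (illustrative only): shift the composited
# foreground's mean brightness part-way toward the background's mean,
# clamping each pixel to the valid 0-255 range.

def mean(values):
    return sum(values) / len(values)

def harmonize_foreground(fg_pixels, bg_pixels, strength=0.5):
    """Move foreground pixels a fraction of the way toward the
    background's mean brightness. strength=0 leaves them unchanged;
    strength=1 matches the means exactly."""
    shift = (mean(bg_pixels) - mean(fg_pixels)) * strength
    return [min(255, max(0, p + shift)) for p in fg_pixels]

fg = [200, 210, 220]   # bright product crop
bg = [60, 70, 80]      # darker background scene
adjusted = harmonize_foreground(fg, bg, strength=0.5)  # pixels darkened toward bg
```

Real harmonization networks operate per-pixel in color space and learn the adjustment from data, but the underlying goal is the same: make the pasted element's statistics consistent with its surroundings.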