Ulta Beauty Taps NVIDIA StyleGAN2 GenAI for Virtual Try-on

The company is exploring how the hairstyle virtual try-ons could be connected to in-store styling services.


A new Ulta Beauty AI app uses selfies to show near-instant, realistic previews of desired hairstyles.

GLAMlab Hair Try On lets users take a photo, upload a headshot or use a model’s picture to experiment with different hair colours and styles. The experience is powered by the NVIDIA StyleGAN2 genAI model.

The hair colour try-ons link to Ulta Beauty products so shoppers can achieve the look in real life. Ulta Beauty, which has more than 1,400 stores across the US, has found that people who use the virtual tool are more likely to purchase a product than those who don’t.

“Shoppers need to try out hair and makeup styles before they purchase,” said Juan Cardelino, Director of the Computer Vision and Digital Innovation Department at Ulta Beauty. “As one of the first cosmetics companies to integrate makeup testers in stores, offering try-ons is part of Ulta Beauty’s DNA – whether in physical or digital retail environments.”

GLAMlab is Ulta Beauty’s first genAI application, developed by its digital innovation team.

To build its AI pipeline, the team turned to StyleGAN2, a style-based neural network architecture for generative adversarial networks, or GANs. StyleGAN2, developed by NVIDIA Research, can be adapted through transfer learning to generate a virtually unlimited number of images in a variety of styles.
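The article doesn’t include any code, but for a sense of what image generation with StyleGAN2 looks like in practice, here is a minimal sketch following the usage pattern documented in NVIDIA’s public stylegan2-ada-pytorch repository. The checkpoint filename is a placeholder, a commercial deployment like Ulta Beauty’s would use its own retrained and licensed weights, and the repository’s dnnlib/torch_utils modules must be importable for the pickle to load.

```python
import pickle
import torch
import PIL.Image

# Load a pretrained generator (usage pattern from NVIDIA's
# stylegan2-ada-pytorch README). 'ffhq.pkl' is a placeholder checkpoint;
# the repo's dnnlib/torch_utils packages must be on the Python path.
with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()    # torch.nn.Module

z = torch.randn([1, G.z_dim]).cuda()       # random latent code
c = None                                   # class labels (unused here)

# Map z into the intermediate W space, then synthesize an image.
w = G.mapping(z, c, truncation_psi=0.7)
img = G.synthesis(w, noise_mode='const')   # NCHW float32 in [-1, +1]

# Convert to 8-bit RGB and save.
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save('sample.png')
```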

“StyleGAN2 is one of the most well-regarded models in the tech community, and, since the source code was available for experimentation, it was the right choice for our application,” Cardelino said. “For our hairstyle try-on use case, we had to license the model for commercial use, retrain it and put guardrails around it to ensure the AI was only modifying pixels related to hair – not distorting any feature of the user’s face.”
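The article doesn’t describe how those guardrails are built. One common way to enforce a “hair pixels only” constraint is to composite the generated result back onto the original photo through a hair segmentation mask, so that everything outside the mask is guaranteed to come from the untouched selfie. The sketch below is illustrative only and assumes the mask comes from a separate segmentation model that is not shown here.

```python
import numpy as np

def composite_hairstyle(original, generated, hair_mask):
    """Blend a generated hairstyle onto the original photo.

    original, generated: float arrays in [0, 1], shape (H, W, 3).
    hair_mask: float array in [0, 1], shape (H, W); 1.0 where a separate
        segmentation model (not shown) labels the pixel as hair.
    """
    mask = hair_mask[..., None]                   # broadcast over RGB channels
    # Pixels outside the hair mask are copied verbatim from the original,
    # so the generator cannot alter the user's facial features.
    return mask * generated + (1.0 - mask) * original
```

Softening the mask edge before blending, for example with a small Gaussian blur, helps hide the seam between the generated hair and the original photo.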

Available on the Ulta Beauty website and mobile app, the hairstyle and colour try-ons rely on NVIDIA Tensor Core GPUs in the cloud to run AI inference, which takes around five seconds to compute the first style and about a second for each subsequent style.
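The article doesn’t explain why the first style is slower than the rest, but one expensive request followed by fast follow-ups is the typical signature of a pipeline that runs a one-time, per-photo preprocessing step, for example inverting the selfie into the generator’s latent space, and then reuses that cached result for every additional style. Purely to illustrate that caching idea, with every name below hypothetical rather than taken from Ulta Beauty’s system:

```python
class HairstyleTryOnSession:
    """Illustrative only: cache the expensive per-photo step so that just
    the first requested style pays its cost (hypothetical API)."""

    def __init__(self, generator, projector):
        self.generator = generator    # callable: latent -> image (fast, ~1 s)
        self.projector = projector    # callable: photo -> latent (slow, ~seconds)
        self._cached_latent = None

    def try_style(self, selfie, style_direction, strength=1.0):
        if self._cached_latent is None:
            # First style for this selfie: run the slow inversion once.
            self._cached_latent = self.projector(selfie)
        # Every subsequent style: shift the cached latent and re-synthesize.
        edited_latent = self._cached_latent + strength * style_direction
        return self.generator(edited_latent)
```

Keeping this session state on the server side would mean that repeated style requests for the same selfie only ever hit the fast path.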

The company next plans to incorporate virtual trials for additional hair categories like wigs and is exploring how the virtual hairstyle try-ons could be connected to in-store styling services.

“Stylists could use the tool to show our guests how certain hairstyles will look on them, giving them more confidence to try new looks,” Cardelino said. “Hair and makeup are playful categories. Virtual try-ons are a way to explore options that may be out of a customer’s comfort zone without needing to commit to a physical change.”

