Industry News

From Labs to Launch: The Journey of Google's Duet AI

Updated on February 22, 2024

Recently, Google announced an expansion of its beta testing program for Duet AI (formerly Workspace AI), a generative AI tool designed to streamline productivity across the Google Workspace suite, including Docs, Sheets, and Slides. As AI-driven products become more mainstream, it’s important to re-examine beta testing processes to ensure they’re designed to help teams achieve their testing goals. In the case of AI, gathering usage and user interaction data during testing is vital to further developing these products.

Let’s explore what this expansion to the Duet AI tester program means and how beta testing is evolving to support AI-driven products. 

A New Focus on AI

In February 2023, Alphabet faced backlash over the early release of Google's Bard AI, whose buggy public demo led to increased scrutiny of the quality and stability of Google's AI-driven products. As a result, Google is putting Duet AI through rigorous, iterative beta testing to ensure it meets the high expectations of its user base and avoids repeating past mistakes.

From 2002 to 2011, Google used its Labs incubator to give its more adventurous users an opportunity to test and review new projects. Google has recently revived the pseudo-public Labs program with a focus on AI-driven products. Through Labs, Google’s team can gather valuable user feedback and identify potential issues at an increasingly wide scale as Duet AI becomes more polished, ensuring a smoother user experience at launch.

The Need for More Data in AI

The expansion of Google's trusted tester program for Duet AI to more than 10 times its current size underscores the need for more data to refine and optimize AI-driven tools. Because AI systems like generative AI tools rely on massive amounts of data to learn and make accurate predictions, increasing the pool of testers is essential to improving the AI's overall quality and effectiveness.

A larger testing pool not only provides more data to train and optimize the AI but also exposes the product to a wider variety of users and use cases. This helps identify potential edge cases or issues that may not have surfaced during earlier testing phases, whether internal or with smaller groups of testers.

Taking Cues From Google on Beta Testing

While the larger testing pool provides benefits for data gathering, the more-than-10x increase in testers for Duet AI may also signal that Google is not seeing the engagement it expected from this pseudo-public beta. Centercode's previous research indicates that true public beta tests generally see participation rates below 50%, so increasing the group size may be a way to boost user activity in Duet AI and generate enough data to meet feedback and product usage targets.
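
To put rough numbers on that reasoning, here's a minimal sketch in Python. The function name and figures are hypothetical illustrations, not Centercode's or Google's actual planning model; it simply shows how an expected participation rate inflates the number of invitations needed to hit an active-tester target:

```python
import math

def required_invites(target_active: int, participation_rate: float) -> int:
    """Estimate how many testers to invite to reach a target number of
    active participants, given an expected participation rate."""
    if not 0 < participation_rate <= 1:
        raise ValueError("participation_rate must be in (0, 1]")
    return math.ceil(target_active / participation_rate)

# Hypothetical figures: with participation below 50%, reaching 5,000 active
# testers requires inviting more than twice that many.
print(required_invites(target_active=5000, participation_rate=0.45))  # -> 11112
```

Under these assumptions, a 10x expansion of the invited pool is less dramatic than it sounds: roughly half of those invitations may never convert into active, data-generating testers.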

For important product launches, Centercode always recommends starting with at least one round of beta testing with a small group of testers. This is especially important when introducing external users to the product, since doing so begins to introduce risk. This practice ensures that:

  • Any catastrophic failures are experienced by only a limited number of external testers, reducing the chance of damaging leaks
  • If the feedback submitted is surprising or unexpected, it’s easier to dig deeper with individual testers to understand their experience and course-correct
  • Costs stay low when the risk of show-stopping issues is highest (the product is likely still buggy at this stage)

As the product becomes more stable and less buggy, more testers can be invited in stages (as we’ve seen here with Google and Duet AI) to place increasingly heavy loads on back-end infrastructure and to gather more data.
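
As a rough illustration of that staged approach (a hypothetical sketch; the phase names, growth factor, and counts are assumptions, not Google's actual plan), each phase might simply multiply the previous phase's invited pool as stability improves:

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    invited: int

def staged_rollout(initial_invites: int, growth_factor: int, names: list[str]) -> list[Phase]:
    """Build a simple staged-rollout plan where each phase multiplies
    the previous phase's invited pool by a fixed growth factor."""
    plan, invited = [], initial_invites
    for name in names:
        plan.append(Phase(name, invited))
        invited *= growth_factor
    return plan

# Hypothetical plan: start small while show-stopping bugs are likely,
# then expand 10x per phase as the product stabilizes.
for phase in staged_rollout(100, 10, ["closed alpha", "trusted testers", "expanded beta"]):
    print(f"{phase.name}: {phase.invited} invited")
```

The fixed multiplier is just one design choice; in practice, each expansion would be gated on exit criteria such as open-defect counts and infrastructure headroom rather than a schedule.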

Commitment to High Quality

The very public announcement of this program expansion suggests that Google is learning from the Bard AI backlash and re-committing to the high quality standards it has been known for. Just as ChatGPT opened its doors to the public in December 2022 to improve its capabilities through mass user interaction, it’s clear that AI-driven products require extensive pre- and post-release testing to ensure high-quality user experiences throughout the product’s lifecycle.

In February 2024, Google rebranded Duet AI as Gemini for Workspace, highlighting its rapid evolution and the introduction of new features, including deeper integration into the Google Workspace ecosystem. The change reflects the significant updates made based on extensive beta testing feedback and underscores Google’s commitment to refining its AI-driven products through continuous testing and user feedback across the product lifecycle.

Learn How Centercode Makes Beta Testing Easy