Ads Guide

Revolutionizing Google Ads Support with the "Ads Guide" AI Agent

As the lead Conversation Designer for the Ads Guide, an AI-powered support agent for Google Ads, I spearheaded the design and implementation of a conversational experience that demonstrably improved support quality and reduced the need for human intervention. I achieved this by building a comprehensive conversation design system, a novel rubric-based autorater for quality assurance, and a rigorous human evaluation program.

The Challenge: Google Ads is a complex platform, and users frequently require support on a wide range of issues. The goal was to create an AI agent that could provide accurate, efficient, and user-friendly support, thereby improving customer satisfaction and reducing the burden on human support teams. The agent needed to handle a vast array of queries with consistently high quality.

My Role & Approach: As the Sr. Conversation Designer, I was at the center of this initiative, collaborating with a cross-functional team of product managers, software engineers, UX designers, and researchers. My primary responsibilities included:

  • Conversation Design System Development: I created a comprehensive conversation design system from the ground up. This system established the architectural and stylistic guidelines for all agent interactions, ensuring a consistent and high-quality user experience. It included defining the agent's persona, tone of voice, and interaction patterns, as well as creating a library of reusable conversational components.

  • Rubric-Based Autorater Design: To systematically evaluate and iterate on the agent's performance, I designed a detailed, rubric-based autorater. This tool automatically assessed conversation quality against a range of criteria, including politeness, empathy, accuracy, and task completion. This allowed for rapid, large-scale evaluation of the agent's responses and provided quantitative data to guide our development efforts.

  • Human Evaluation Leadership: Recognizing the limitations of purely automated evaluation, I led the human evaluation program to supplement and validate the autorater's findings. This involved recruiting and training human evaluators, developing evaluation guidelines, and analyzing their feedback to gain deeper insights into the nuances of the user experience.

  • System Instruction Design and Iteration: I was responsible for crafting the core system instructions that powered the Ads Guide agent. These instructions were meticulously designed and continuously refined based on feedback from both the autorater and our human evaluation team, leading to significant improvements in the agent's performance and capabilities.
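As a concrete illustration of the first responsibility above, a conversation design system can be encoded as data so that every interaction draws from the same persona, tone rules, and reusable components. This is a hypothetical sketch only: all names, traits, and strings below are invented, not Ads Guide's actual design system.

```python
# Hypothetical sketch: a conversation design system encoded as data, so
# persona, tone, and reusable components stay consistent across surfaces.
# Every name and string here is invented for illustration.

DESIGN_SYSTEM = {
    "persona": {
        "name": "Ads Guide",
        "traits": ["knowledgeable", "calm", "concise"],
    },
    "tone": {
        "do": ["use plain language", "acknowledge the user's goal first"],
        "avoid": ["undefined jargon", "blame", "over-apologizing"],
    },
    "components": {
        # Reusable conversational building blocks with fillable slots.
        "greeting": "Hi, I'm here to help with your Google Ads question.",
        "clarify": "Just to make sure I help with the right thing: {question}",
        "handoff": "I'll connect you with a specialist who can take this further.",
    },
}

def render(component: str, **slots) -> str:
    """Fill a reusable component's slots and return the final utterance."""
    return DESIGN_SYSTEM["components"][component].format(**slots)

print(render("clarify", question="which campaign is affected?"))
```

Keeping the components in one place means a wording change propagates everywhere the component is used, which is part of what makes a design system faster to iterate on than hand-written copy.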
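The rubric-based autorater described above can be sketched in miniature. The criterion names (politeness, empathy, accuracy, task completion) come from the text; the weights, rating questions, and scoring logic below are invented for illustration and stand in for whatever rating model produced the per-criterion scores.

```python
from dataclasses import dataclass

# Hypothetical sketch of a rubric-based autorater. Criterion names follow
# the rubric described in the text; weights and questions are invented.

@dataclass(frozen=True)
class Criterion:
    name: str
    weight: float   # contribution to the overall score
    question: str   # what the rating model is asked about each reply

RUBRIC = [
    Criterion("politeness",      0.20, "Is the reply courteous and respectful?"),
    Criterion("empathy",         0.20, "Does the reply acknowledge the user's situation?"),
    Criterion("accuracy",        0.35, "Is the Google Ads guidance factually correct?"),
    Criterion("task_completion", 0.25, "Did the reply resolve the user's request?"),
]

def overall_score(ratings: dict[str, float]) -> float:
    """Fold per-criterion ratings (each 0.0-1.0) into one weighted score."""
    assert abs(sum(c.weight for c in RUBRIC) - 1.0) < 1e-9  # weights sum to 1
    return sum(c.weight * ratings[c.name] for c in RUBRIC)

ratings = {"politeness": 1.0, "empathy": 0.8, "accuracy": 0.9, "task_completion": 1.0}
print(round(overall_score(ratings), 3))  # → 0.925
```

Because every conversation reduces to the same weighted score, thousands of transcripts can be rated and compared automatically, which is what makes rapid, large-scale evaluation possible.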

The Solution: The Ads Guide is more than just a chatbot; it is a sophisticated support tool built on a foundation of strong conversation design principles. The key components of the solution include:

  • A Robust Conversation Design System: This system provided a clear framework for all agent interactions, ensuring a consistent and intuitive user experience. By standardizing design elements, we accelerated development and maintained a high bar for quality.

  • A Multi-Faceted Evaluation Framework: The combination of the rubric-based autorater and a human evaluation program created a powerful feedback loop. The autorater provided immediate, scalable data on agent performance, while human evaluation offered nuanced qualitative insights. This dual approach allowed us to identify and address issues quickly and effectively.

  • Data-Driven Iteration: The insights generated from our evaluation framework were used to continuously refine the agent's system instructions. This iterative process of design, evaluation, and refinement was crucial to the project's success, enabling us to steadily improve the agent's quality and effectiveness over time.
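One common way to close the loop between the two evaluation tracks is to check how well autorater scores track human judgments on the same conversations. The sketch below does this with a Pearson correlation; all score values are invented for illustration, not real Ads Guide data.

```python
# Hypothetical sketch: validating an autorater against human evaluation by
# correlating the two sets of scores over the same conversations. All
# score values below are invented.

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

auto_scores  = [0.92, 0.61, 0.78, 0.85, 0.40]  # autorater, one score per conversation
human_scores = [0.90, 0.55, 0.80, 0.88, 0.35]  # mean human rating, same conversations

r = pearson(auto_scores, human_scores)
print(f"autorater vs. human agreement: r = {r:.2f}")
```

A high correlation on a held-out sample supports using the autorater for rapid, large-scale checks while reserving human review for the nuances it cannot capture.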

The Impact: The launch of the Ads Guide resulted in a significant improvement in the quality of Google Ads support. Key outcomes include:

  • Improved Quality Scores: The agent consistently achieved high scores on our quality rubrics, demonstrating its ability to provide accurate and helpful support.

  • Reduced Escalations: By successfully resolving a high volume of user queries, the Ads Guide significantly reduced the number of cases that needed to be escalated to human support agents, leading to increased efficiency and cost savings.

This project showcases my ability to lead complex, cross-functional initiatives and to leverage innovative design and evaluation methodologies to deliver a high-impact AI product. The success of the Ads Guide demonstrates the power of a well-designed conversational experience to improve customer satisfaction and drive business results.