Today—and even more in the future—an increasing amount of the content that customers interact with will be AI-generated. To get high-quality AI outputs, prompt designers need to understand what “good” looks like. This includes understanding conversational structures so you know what to prompt for.
Disambiguation
Front door and disambiguation responses are used to discover more about the customer’s intent, especially when their request is just a string of nouns (for example, “taxes” or “address”).
A front door response asks an open-ended question about what the customer wants to do.
A disambiguation response gives the customer 2–3 action options. These are used when the bot is confident the customer will choose one.
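As a rough sketch of how a bot might choose between these two response types (the confidence threshold, intent labels, and wording here are hypothetical, not Intuit's actual logic):

```python
# Hypothetical sketch: choose a front door or a disambiguation response
# based on how confidently candidate intents were matched.

FRONT_DOOR = "What would you like to do today?"

def choose_response(candidate_intents):
    """candidate_intents: list of (label, confidence) pairs from a classifier."""
    confident = [(label, score) for label, score in candidate_intents
                 if score >= 0.6]  # illustrative threshold
    if 2 <= len(confident) <= 3:
        # Disambiguation: offer 2-3 likely actions as options.
        options = ", ".join(label for label, _ in confident)
        return f"Do you want help with one of these: {options}?"
    # Front door: ask an open-ended question instead of guessing.
    return FRONT_DOOR

print(choose_response([("file taxes", 0.8), ("estimate taxes", 0.7)]))
print(choose_response([("file taxes", 0.3)]))
```

The key design choice is that disambiguation is only shown when there are a few strong candidates; with zero or too many, guessing would erode trust, so the open-ended front door is safer.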
Disclaimers
Disclaimers caution customers about the limits of what AI can do, and encourage customers to review any generated content. These patterns help set consistent expectations and help protect Intuit as a brand. Make sure to work with a legal partner to ensure the right disclaimers are included near any AI-generated content you work on.
Errors
At some point, something is going to go wrong. An API fails to get data, or the system fails to commit changes for an automation. The generative model is down. A plug-in is down. The model timed out. All of these situations should be handled with transparency and empathy.
Responses should:
- Be transparent that something went wrong
- Protect the Intuit brand
- Offer a path to another solution, such as self-service with an article or escalation
For more, see our guidance on delivering bad news and writing errors.
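The three requirements above can be sketched as a single response builder. This is a minimal illustration, assuming a hypothetical `article_url` self-service link; it is not a prescribed implementation:

```python
# Hypothetical sketch: an error response that is transparent,
# on-brand, and always offers a path forward.

def error_response(failure, article_url=None):
    """failure: a short, plain-language description of what went wrong."""
    # Be transparent that something went wrong, without technical jargon.
    message = f"Sorry, {failure}. "
    # Offer a path to another solution: self-service first, then escalation.
    if article_url:
        message += f"In the meantime, this article may help: {article_url}. "
    message += "Or I can connect you with a human."
    return message

print(error_response("I couldn't save your changes"))
```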
Escalation
Escalation means connecting the customer to one of our human agents or experts. Be sure to address the customer's question and explain why the bot can’t answer it, then offer the path to escalation.
Good escalation paths:
- Show up as an option when something is wrong, when the situation is complex and difficult to navigate (such as reconciliation), or when the customer asks for it.
- Are paired with an offer for self-service unless the risk is high to the customer or company. Example: “I don’t have an answer for that, but I found this article. Or I can connect you with a human.”
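These two rules can be sketched as follows (the `high_risk` flag and wording are hypothetical, just to show the pairing logic):

```python
# Hypothetical sketch: offer escalation, paired with self-service
# unless the situation is high risk for the customer or company.

def escalation_offer(reason, article_url=None, high_risk=False):
    """reason: why the bot can't answer (shown to the customer)."""
    message = f"{reason} "
    if article_url and not high_risk:
        # Low risk: pair the escalation with a self-service option.
        message += f"I found this article that may help: {article_url}. "
        message += "Or I can connect you with a human."
    else:
        # High risk (or nothing to self-serve): go straight to a human.
        message += "Let me connect you with a human who can help."
    return message
```

Note that the customer always gets the escalation path; `high_risk` only removes the self-service detour, matching the guidance above.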
Fallback
We use fallback responses to get customers back on track when a model hasn’t been trained on a good answer.
Fallbacks use other technologies, like search, to:
- Help customers rephrase their question
- Present FAQ search results
- Connect the customer to other help, including human support
Good fallbacks should:
- Be transparent and tell customers what happened, such as, “I couldn’t find anything about that.”
- Be empathetic. The customer is going to be disappointed that there’s no answer.
- Offer a way to self-serve or get help another way.
- Keep small talk to a minimum. The customer still needs help.
- "Fall upwards" and suggest other things that might be close to the customer's query.
- Set appropriate expectations to prepare customers for what they're seeing. For example, "I found this answer."
- Always provide options for customers to indicate whether their question has been answered.
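Putting the checklist above together, a fallback response might be assembled like this (the search results, copy, and Yes/No mechanism are illustrative assumptions):

```python
# Hypothetical sketch: a fallback response that is transparent, "falls
# upwards" to related results, and checks whether the customer is helped.

def fallback_response(query, related_results):
    """related_results: titles of near-match FAQ articles from search."""
    # Be transparent about what happened, with minimal small talk.
    lines = [f'I couldn\'t find anything about "{query}".']
    if related_results:
        # "Fall upwards": suggest close matches, and set expectations.
        lines.append("Here's what I found that might be close:")
        lines.extend(f"- {title}" for title in related_results[:3])
    # Always offer another path, and a way to say it didn't help.
    lines.append("I can also connect you with a human.")
    lines.append("Did this answer your question? (Yes / No)")
    return "\n".join(lines)

print(fallback_response("carryback", ["Net operating loss basics"]))
```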
How-to instructions
How-to instructions provide steps to achieve a goal. They should:
- Be brief enough to cover the key info (additional info should be a follow-up or a link)
- Consider the real estate available for that platform (mobile vs. desktop)
- Be specific to the product the customer is using
- Be a multi-answer workflow when steps are lengthy
- Set expectations as to how long the instructions will take if it’s a multi-response flow
- Provide a way for customers to track progress or come back later to finish if it takes longer than a few minutes
- Provide warnings that help the customer be successful, like “Once you start a transfer, it can't be canceled.”
Small talk
LLMs are generally pretty good at small talk out of the box. Small talk answers should respond to a customer’s question, but get them back on track with what they need.
Status
Trust and accuracy are important to customers when it comes to AI experiences. Building customer trust hinges on being clear and transparent about what the model is doing, when it’s doing it, and how outputs are generated.
When customers need to review AI’s work
Give customers explicit notice when work has been done by AI. Give them a chance to review before an action is taken by using the pattern “Review before [action].” This reminder should appear contextually near AI-generated content or calls to action. See also: Disclaimers
Processing steps
People tend to value things more when they think more effort or work is being done on their behalf. This labor illusion, whether visual or written, offers a glimpse into what AI is doing. It keeps customers occupied and lets them know the model is processing on the backend.
Use the "Planning, Doing, Done" structure
- Start by describing what AI is going to do. For example, “Tailoring suggestions for your review…”
- Indicate progression and that something is being done, whether that’s visually or with content. For example, a content animation that cycles through messages like “Doing a team inventory…”, “Reviewing their pay types…”, “Optimizing for your pay schedule…”
- Finish by reiterating the task AI has completed.
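The three-phase structure above can be sketched as a simple message generator (the message text reuses the examples from this section; everything else is an illustrative assumption, not a real status API):

```python
# Hypothetical sketch: generate "Planning, Doing, Done" status messages
# for a long-running AI task, in order.

def status_messages(plan, doing_steps, done):
    """Yield the status line for each phase of the task."""
    yield plan                   # Planning: what AI is going to do
    for step in doing_steps:     # Doing: indicate progression
        yield f"{step}…"
    yield done                   # Done: reiterate the finished task

messages = list(status_messages(
    "Tailoring suggestions for your review…",
    ["Doing a team inventory", "Reviewing their pay types"],
    "Your tailored suggestions are ready.",
))
print("\n".join(messages))
```

In a real interface the "Doing" messages would cycle as an animation while the model works; the point of the sketch is only the ordering of the three phases.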
Welcome
For conversational interfaces, a welcome message should break the ice and explain how customers can use the agent or digital assistant. It not only greets the customer, but sets the friendly, conversational tone for the entire interaction.
A well-designed welcome should:
- Have different first-time and return welcome messages to build trust
- Be personalized to the customer
- Be transparent that the customer is talking to a bot
- Contain a brief statement about what the bot can do
- Provide proactive support for the most common questions or issues
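A minimal sketch of how those requirements might combine (the greeting copy and `top_topics` parameter are hypothetical):

```python
# Hypothetical sketch: vary the welcome for first-time vs. returning
# customers, be transparent it's a bot, and offer proactive help.

def welcome(name, is_first_visit, top_topics):
    """top_topics: the most common questions, offered proactively."""
    if is_first_visit:
        # First visit: introduce the bot and what it can do.
        greeting = (f"Hi {name}, I'm a digital assistant. "
                    "I can answer questions and help with common tasks.")
    else:
        # Return visit: a shorter, familiar greeting builds trust.
        greeting = f"Welcome back, {name}!"
    prompts = " ".join(f"[{topic}]" for topic in top_topics)
    return f"{greeting} Here are some things I can help with: {prompts}"

print(welcome("Sam", True, ["Taxes", "Payroll"]))
```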