Prompt library
The prompt library is organized by use case. Each page lists prompts you can run directly in your connected AI assistant (Claude, ChatGPT, Gemini, etc.), along with the insights you can expect in return.
Use cases
- Improvement recommendations — ask for actionable changes to improve performance.
- Quick health checks — one-shot views of performance and trends.
- Create visualizations — generate charts and diagrams from conversation data.
- Diagnose performance issues — root-cause analysis for sudden metric changes.
- Identify optimization opportunities — find areas with the largest improvement potential.
- Review configuration — audit playbooks, actions, and custom instructions.
- Search knowledge and coaching — check existing coverage before adding content.
- Test agent responses — validate changes before going live.
- Deep-dive analysis — combine multiple tools for comprehensive analysis.
Prompting best practices
Detailed, structured prompts produce the best results. Be specific about the data you want analyzed and the insights you're after, and the answers you get back will be far more actionable.
- Be specific about timeframes. Use phrases like “last 7 days”, “yesterday”, or “this month” rather than open-ended ranges.
- Set a sample size. For qualitative analysis, state how many conversations to review (for example, “Review 50 summaries…”).
- Ask follow-up questions. Dig deeper into initial findings rather than accepting the first answer.
- Use available filters. Filter by CSAT score, resolution status, topic, playbook, language, channel, browser, device, and more. See `get_available_filters` for the full list.
- Refine visualizations. After generating a chart, ask for refinements like "adjust the y-axis to start at 30%".
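Putting these practices together, a single prompt can set a timeframe, a sample size, and filters all at once. A sketch (the topic and filter values here are illustrative, not from your data):

```text
Review 50 conversation summaries from the last 7 days where CSAT is 3 or
below and the channel is live chat. Group them by topic, list the three
most common causes of low ratings, and suggest one improvement for each.
```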
Example: vague vs. specific
Vague: “How is my Agent doing?”
Specific: “Compare my automated resolution rate and CSAT for the last 7 days vs. the previous 7 days. Flag any topic with a drop of more than 5 points week-over-week, and pull 5 transcripts from each to explain the drop.”
The second prompt gives the AI assistant a clear scope, target metrics, and expected output format, leading to consistently better results.