Prompt library

The prompt library is organized by use case. Each page lists prompts you can run directly in your connected AI assistant (Claude, ChatGPT, Gemini, etc.), along with the insights you can expect in return.

Prompting best practices

Detailed, structured prompts produce the best results. Be specific about the data you want analyzed and the insights you’re after, and the answers you get back will be far more actionable.

  • Be specific about timeframes. Use phrases like “last 7 days”, “yesterday”, or “this month” rather than open-ended ranges.
  • Set a sample size. For qualitative analysis, state how many conversations to review (for example, “Review 50 summaries…”).
  • Ask follow-up questions. Dig deeper into initial findings rather than accepting the first answer.
  • Use available filters, such as CSAT score, resolution status, topic, playbook, language, channel, browser, and device. See get_available_filters for the full list.
  • Refine visualizations. After generating a chart, ask for refinements like “adjust the y-axis to start at 30%”.
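
The checklist above can be sketched as a small prompt builder that assembles the same elements (timeframe, sample size, filters, expected output) into one structured request. This is a hypothetical helper for illustration only — `build_prompt` and its parameters are not part of any product API:

```python
def build_prompt(question, timeframe, sample_size=None, filters=None, output_ask=None):
    """Assemble a structured analytics prompt from best-practice elements.

    Hypothetical sketch: the parameter names mirror the checklist above,
    not a real SDK.
    """
    parts = [question, f"Timeframe: {timeframe}."]
    if sample_size:
        # Set an explicit sample size for qualitative analysis.
        parts.append(f"Review {sample_size} conversation summaries.")
    if filters:
        # Name the filters explicitly rather than leaving scope open-ended.
        parts.append("Filter by " + ", ".join(f"{k}={v}" for k, v in filters.items()) + ".")
    if output_ask:
        # State the expected output so results come back in a usable shape.
        parts.append(output_ask)
    return " ".join(parts)

prompt = build_prompt(
    "Compare my automated resolution rate and CSAT.",
    timeframe="last 7 days vs. the previous 7 days",
    sample_size=50,
    filters={"channel": "chat", "language": "en"},
    output_ask="Flag any topic that dropped more than 5 points week-over-week.",
)
print(prompt)
```

Keeping the elements separate like this makes it easy to reuse one question with different timeframes or filters, rather than rewriting the whole prompt each time.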

Example: vague vs. specific

Vague: “How is my Agent doing?”

Specific: “Compare my automated resolution rate and CSAT for the last 7 days vs. the previous 7 days. Flag any topic with a drop of more than 5 points week-over-week, and pull 5 transcripts from each to explain the drop.”

The second prompt gives the AI assistant a clear scope, target metrics, and an expected output format, which leads to consistently better results.