
Phase 6 – Optimization & Continuous Improvement

Keep your AI effective after launch by reviewing conversations, tracking KPIs, and updating knowledge regularly. Continuous improvements ensure long-term success.

Written by Dmitry
Updated over 3 months ago

This phase is about keeping your AI support agent effective after launch. By monitoring conversations, tracking key metrics, and updating knowledge regularly, you ensure the agent continues to deliver value as your product and support needs evolve.

Think of this phase as ongoing maintenance — small, steady improvements that keep your AI sharp and reliable over time.


Steps

1) Monitor escalated conversations

Review chats where the AI handed off to a human.
Why it matters: Escalations highlight the gaps that need fixing first.

  • Look for missing content or unclear workflows

  • Identify incomplete or misleading answers

  • Spot patterns where escalation rules may need adjusting
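
One practical way to run this review at scale is to export your chat logs and filter for handed-off conversations. Below is a minimal Python sketch, assuming a hypothetical CSV export with "escalated" and "topic" columns; your platform's actual export fields will differ:

import csv
from collections import Counter

def escalation_report(path):
    # Count escalated conversations by topic so the biggest gaps surface first
    reasons = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("escalated", "").strip().lower() == "true":
                reasons[row.get("topic") or "unknown"] += 1
    for topic, count in reasons.most_common(5):
        print(f"{topic}: {count} escalations")

escalation_report("chatlogs_export.csv")  # hypothetical export file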


2) Review successful AI interactions

Audit conversations resolved by the AI.
Why it matters: Even correct-looking answers may be incomplete or confusing.

  • Confirm answers were accurate and helpful

  • Check workflows ran as intended

  • Spot partial answers that need improvement

Example Chatlogs view showing a customer query, the AI’s response, sources used, and conversation details with filters applied.


3) Track key performance metrics

Focus on a few KPIs during your free trial.
Why it matters: A few simple metrics let you measure progress without being overwhelmed by data.

Recommended KPIs:

  • Deflection rate = AI resolved without human ÷ total inbound

  • Resolution rate = AI resolved ÷ total conversations

  • Fallback rate = AI fallback replies ÷ total AI replies

  • CSAT = positive ratings ÷ AI-handled chats
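
All four ratios come from raw counts you can pull from your chat logs. Here is a minimal sketch of the arithmetic; the count names are illustrative, not fields from any specific product:

def compute_kpis(counts):
    # Each KPI is a simple ratio of two raw totals
    return {
        "deflection_rate": counts["ai_resolved_no_human"] / counts["total_inbound"],
        "resolution_rate": counts["ai_resolved"] / counts["total_conversations"],
        "fallback_rate": counts["fallback_replies"] / counts["total_ai_replies"],
        # CSAT as a ratio; your tool may report a 1-5 score instead
        "csat": counts["positive_ratings"] / counts["ai_handled_chats"],
    }

# Made-up totals that reproduce the example values in the tracker below
example = {
    "ai_resolved_no_human": 42, "total_inbound": 100,
    "ai_resolved": 65, "total_conversations": 100,
    "fallback_replies": 30, "total_ai_replies": 300,
    "positive_ratings": 90, "ai_handled_chats": 100,
}
for name, value in compute_kpis(example).items():
    print(f"{name}: {value:.0%}")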

Example KPI tracker:

KPI | Target | Current value (example) | Notes
Deflection rate | 40–50% of tickets handled | 42% | AI fully resolved without human intervention
Resolution rate | 60%+ resolved without escalation | 65% | Measures AI's effectiveness end-to-end
Fallback rate | <15% of replies | 10% | High = missing content or weak source connections
CSAT | Track if collected | 4.5 / 5 | Helps verify quality of AI-handled conversations

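
To turn the tracker into something actionable, you can encode each target as a check and flag anything off track. A minimal sketch using the thresholds from the table above (example trial targets, not product defaults):

# (value, check) pairs: check returns True when the KPI is on target
tracker = {
    "deflection_rate": (0.42, lambda v: v >= 0.40),  # target: 40-50% during trial
    "resolution_rate": (0.65, lambda v: v >= 0.60),  # target: 60%+
    "fallback_rate": (0.10, lambda v: v < 0.15),     # target: <15% of replies
}

for kpi, (value, on_target) in tracker.items():
    status = "on target" if on_target(value) else "needs attention"
    print(f"{kpi}: {value:.0%} ({status})")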


4) Set a review cadence

Define how often you’ll review interactions.

Suggested rhythm:

  • Week 1 after launch → daily review

  • Weeks 2–4 → weekly review

  • Ongoing → biweekly or monthly, depending on product changes
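
If it helps, the rhythm above reduces to one small rule. This sketch simply restates the suggested schedule; tune the thresholds to your release pace:

def review_cadence(days_since_launch):
    # Suggested rhythm: daily in week 1, weekly in weeks 2-4, then periodic
    if days_since_launch <= 7:
        return "daily"
    if days_since_launch <= 28:
        return "weekly"
    return "biweekly or monthly"

print(review_cadence(3))    # daily
print(review_cadence(15))   # weekly
print(review_cadence(60))   # biweekly or monthly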


5) Expand automation over time

Gradually move more cases from humans to AI.

Examples of automatable cases:

  • Common FAQs

  • Simple troubleshooting

  • Informational requests

  • Standard workflows (order status, plan details)

Continuous optimization loop — monitor, identify gaps, update knowledge base/workflows, and retest.
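
In practice, expanding automation can start as an explicit allow-list of conversation types. Here is a minimal routing sketch, assuming your platform tags each conversation with an intent label (the labels are illustrative):

AUTOMATABLE = {
    "common_faq", "simple_troubleshooting",
    "informational_request", "order_status", "plan_details",
}

def route(intent):
    # Allow-listed intents go to the AI; everything else stays with a human
    return "ai" if intent in AUTOMATABLE else "human"

print(route("order_status"))    # ai
print(route("refund_dispute"))  # human: judgment-heavy, keep with people

Grow the allow-list gradually, adding a category only after escalation reviews show the AI handling it reliably.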


Best Practices / Tips

  • Start reviews with escalated cases — they’re the richest source of insights.

  • Always spot-check successful AI interactions to confirm quality.

  • Track only 2–3 KPIs at first to avoid data overload.

  • Increase review frequency when launching new features or docs.

  • Treat optimization as continuous, not a one-time setup.


Common Mistakes to Avoid

  • Assuming “resolved” means “satisfied” without checking.

  • Tracking too many KPIs and losing focus.

  • Ignoring documentation updates — stale content = poor answers.

  • Reviewing only once at launch instead of setting a cadence.

  • Automating sensitive or judgment-heavy cases.


Expected outcome: Key metrics are tracked, reviewed regularly, and used to improve performance.
