
Playground

How to test, debug, and understand your AI agent before deployment

Written by Dmitry
Updated over 3 months ago

What is the Playground?

The Playground in the Ordemio Dashboard lets you test your AI agent before deploying it to live chat or your help desk.

It helps you:

  • See why the AI responds the way it does.

  • Check which chunks, documents, and data sources are being used (RAG made transparent).

  • Test workflows, actions, and user data safely without affecting the live agent.

💡 Best practice: Use the Playground to experiment and debug safely. Apply changes only when you’re confident, so your live agent always delivers reliable responses.


How it works

Go to your Ordemio Dashboard → Playground.

The interface has two columns:

  • Left panel → configuration tabs (Prompt, Actions, Workflows, User data).

  • Right panel → Thread where you can chat with your AI agent and see exactly how it responds.


Prompt tab

In the Prompt tab you can customize and test instructions for your AI agent:

  • Communication Tone & Style Guide

  • When to Escalate to Human Agent

  • Additional Guidance & Clarifications
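
To make this concrete, here are purely illustrative examples of what you might enter in each field (the wording is hypothetical, not a recommended default):

    Communication Tone & Style Guide: "Answer in a friendly, concise tone and address the customer by first name when it is available."
    When to Escalate to Human Agent: "Escalate whenever the user asks about refunds or reports a billing error."
    Additional Guidance & Clarifications: "Do not promise delivery dates; point users to the shipping policy instead."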

At the bottom you’ll find the Temperature setting:

  • Reserved → The assistant only answers based on your training data.

  • Moderate → The assistant also uses some general AI knowledge.

  • Creative → The assistant answers broadly to almost any request.

💡 Changes made here only affect the Playground — they will not change your live agent until you click Apply settings at the bottom.


Actions tab

The Actions tab shows the list of available actions your AI can use. Each action has a name and a Description (or Trigger) to indicate when the agent should use it.

💡 If the AI doesn’t call an action when expected, you can edit its description to improve future behavior. You can also enable or disable actions with a toggle to see how it affects the agent’s behavior in a specific case.
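
As a purely hypothetical example (the action name and wording below are illustrative, not a built-in Ordemio action), an entry in this tab might look like:

    Name: lookup_order_status
    Description / Trigger: "Use this action when the user asks where their order is or wants a shipping update. Requires the user's order ID."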


Workflows tab

The Workflows tab lists structured instructions — similar to playbooks for human agents. Each workflow includes a name, a purpose, and triggers (when it should run).

💡 Use workflows to handle recurring cases (e.g., troubleshooting steps). You can enable or disable them with a toggle.
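
A hypothetical workflow entry (the name, purpose, and triggers are invented for illustration) could look like:

    Name: Password reset troubleshooting
    Purpose: Walk the user through resetting their password before escalating to a human agent.
    Triggers: The user reports they cannot log in or has forgotten their password.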


User data tab

Here you can simulate conversations with real user data in valid JSON format. Examples include:

  • User name, email, plan type

  • Number of seats purchased

  • Country or billing period

⚠️ Note: The JSON must be valid for the AI to process it correctly.

This data can come from third-party integrations such as Zendesk, Intercom, or your CRM, helping you emulate real customer conversations more accurately.
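
For example, a minimal payload (all field names and values here are made up for illustration; use whichever fields your integrations actually provide) might look like:

    {
      "name": "Jane Doe",
      "email": "jane@example.com",
      "plan": "Pro",
      "seats": 12,
      "country": "DE",
      "billing_period": "annual"
    }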


Understanding AI responses in the Playground

When you ask a question in the Playground thread, the AI agent may respond in two ways:

Case 1: Retrieve knowledge chunks (RAG)

The agent can call a tool like searchData to look up information in your Knowledge Base.

  • The tool that was called is shown in the Playground (e.g. searchData).

  • The query used is visible (e.g. “WhatsApp integration CloudTalk”).

  • The response includes several chunks, each marked with <source> and </source>.

  • Each chunk contains the article title, URL, and a snippet of content.

  • By default, 5–10 chunks are retrieved.

This helps you see exactly which documents powered the agent’s answer and whether the right data was pulled.
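
As a rough illustration (the title, URL, and snippet are invented, and the exact layout inside the tags may differ), a single retrieved chunk might look like:

    <source>
    Title: How to connect WhatsApp
    URL: https://example.com/help/whatsapp-integration
    Content: To connect WhatsApp, open Settings → Channels and click Connect WhatsApp...
    </source>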

Case 2: Call a workflow or action

In other cases, the agent may trigger a workflow that calls an action.

The Playground shows:

  • Which workflow was called.

  • Which action was executed.

  • The parameters sent to the tool.

  • The tool’s full response (for example, a JSON output).

This lets you debug tool calls in detail — you can check if the right tool was used, confirm the parameters passed, and review the exact response that shaped the agent’s reply.
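
For instance, a hypothetical trace (the workflow, action, parameters, and response values are all invented) might read:

    Workflow: Order status troubleshooting
    Action: lookup_order_status
    Parameters: { "order_id": "12345" }
    Response: { "status": "shipped", "carrier": "DHL", "estimated_delivery": "2026-03-02" }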

💡 Business value: By checking both retrieved chunks and tool calls, you can quickly identify whether wrong data was used, an incorrect tool was triggered, or key information was missing — making it much easier to improve the agent’s performance.


Why use the Playground?

  • Transparency → See exactly which sources and chunks power each answer.

  • Debugging → Trace API calls, parameters, and responses to fix issues faster.

  • Workflow testing → Run troubleshooting guides and confirm actions fire correctly.

  • Simulation → Test with user data to see how the AI responds to real-world cases.

  • Safe experimentation → Make changes without affecting your live agent until you’re ready.
