Introduction to Chat Agents¶
In this tutorial, you'll create an AI-powered math tutor assistant capable of handling a variety of mathematical queries. The agent will be equipped with tools to perform fundamental arithmetic operations and intelligently combine and execute these tools to address user questions. By the end of this tutorial, you'll have built an interactive math assistant that can help users solve problems and provide clear, step-by-step explanations.
Note
This math tutor agent can technically be implemented using just an LLM, without any agent capabilities. However, the purpose of this tutorial is to help you understand the essential concepts required to build an AI agent using WSO2 Integrator: BI. By following this guide, you'll gain hands-on experience with agent creation in WSO2 Integrator: BI, setting the foundation for developing more powerful and tailored AI agents in the future.
Step 1: Create a new integration project¶
- Click on the WSO2 Integrator: BI icon in the sidebar.
- Click on the Create New Integration button.
- Enter the project name as `MathTutor`.
- Select the project directory location by clicking on the Select Location button.
- Click the Create New Integration button to generate the integration project.
Step 2: Create an agent¶
- Click the + button on the WSO2 Integrator: BI side panel or navigate back to the design screen and click on Add Artifact.
- Select AI Chat Agent under the AI Agent artifacts.
- Provide a Name for the agent. It will take a moment to create an agent with the default configuration.
- After creating the agent, you can configure it with a model provider, memory, tools, roles, and instructions.
Step 3: Configure the agent behavior¶
- Click on the AI Agent box to open the agent configuration settings.
- Define the agent's Role and provide Instructions in natural language. These instructions will guide the agent's behavior and tasks.
- Click Save to complete the agent behavior configuration.
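As a concrete (hypothetical) example, you might fill in the Role and Instructions fields along these lines; the exact wording is up to you:

```
Role: Math Tutor

Instructions:
You are a friendly math tutor. Use the available tools to perform
arithmetic operations instead of computing results yourself. Explain
each step of your reasoning clearly and in simple terms, and show
intermediate results so the user can follow along.
```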
Step 4: Configure the agent model¶
By default, the AI agent uses the Default Model Provider (WSO2), which is backed by a WSO2-hosted LLM. To use this provider, you must sign in to BI Copilot; you will be prompted to do so when creating the agent. After signing in, configure the default model provider as follows:
- Press `Ctrl/Cmd+Shift+P` to open the VS Code Command Palette.
- Run the command `Ballerina: Configure default WSO2 model provider`.
If you want to use a different model provider, such as OpenAI, follow the steps below.
- Locate the circle with the WSO2 logo connected to the AI Agent box. This circle represents the LLM used by the agent.
- Click the circle to open the model configuration options.
- Click Create New Model Provider.
- Select OpenAI Model Provider from the list.
- Configure the model provider with the required details.
Note
Since the API key is sensitive, it’s recommended to externalize it as a configurable value. This helps prevent accidentally committing it to your version control system and keeps it out of your source code. To learn more, see Configurations.
- Switch the API Key field from Text mode to Expression mode using the toggle above the field.
- Click the API Key input field to open the Expression Helper.
- In the Expression Helper, select Configurables.
- Click + New Configurable to define a new configurable.
- Set the Name to `openAiApiKey` and the Type to `string`.
- Click Save to create the configurable.
- In the Model Type dropdown, select `gpt-4.1`.
- Click Save to complete the LLM configuration.
Note
If you have used a configurable for the API key, you will be prompted to provide its value the first time you run the integration (see Step 7).
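Under the hood, the configurable you created corresponds to a Ballerina `configurable` variable in the generated code, roughly like the following sketch (the variable name matches what you entered above; `?` means no default is provided, so a value must be supplied at runtime):

```ballerina
// Declared without a default value, so it must be supplied at runtime,
// e.g. when prompted on the first run, or via a Config.toml entry:
//   openAiApiKey = "<your-key>"
configurable string openAiApiKey = ?;
```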
Step 5: Configure agent memory¶
- By default, the agent comes preconfigured with an in-memory implementation.
- For this tutorial, we will keep the default memory configuration and not make any changes.
- If you prefer to configure a different memory type, click on Add Memory and select your desired memory option from the list.
Step 6: Add tools to the agent¶
WSO2 Integrator: BI allows you to create tools using existing functions. It also supports automatically generating tools from connector actions or OpenAPI specifications by leveraging BI's capability to generate local connectors from an OpenAPI spec.
However, in this tutorial, we will create simple functions to perform arithmetic operations and use them as tools.
Create a function
- Click the + button in the WSO2 Integrator: BI side panel under the Functions section.
- Provide the required details to create the function. For this example, use `sum` as the function name, and specify the parameters and return types.
- Implement the function logic in the flow node editor that opens.
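The resulting function in the generated Ballerina code might look like this minimal sketch; the `decimal` parameter and return types are an assumption, so use whatever types you specified in the UI:

```ballerina
# Returns the sum of two numbers.
#
# + a - first operand
# + b - second operand
# + return - the sum of `a` and `b`
function sum(decimal a, decimal b) returns decimal {
    return a + b;
}
```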
Add the created function as a tool
- Go to the agent flow view.
- Click the + button at the bottom-right corner of the AI Agent box.
- Select the Use Function option.
- Select the created function from the Current Integration list, in this case `sum`.
- Provide the Tool Name and Description of the tool.
Follow steps 1 to 3 to create functions named `subtract`, `multiply`, and `divide` to perform subtraction, multiplication, and division, respectively. Define the appropriate parameters and return types, and implement the corresponding logic in the flow node editor. Then repeat steps 4 to 8 to add each of these functions as a tool by selecting it from the Current Integration list and providing a relevant tool name and description.
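The remaining three functions could be sketched as follows, mirroring the `sum` function created earlier; the `decimal` types and the error-returning signature of `divide` are assumptions, not the only valid design:

```ballerina
# Returns the difference of two numbers.
function subtract(decimal a, decimal b) returns decimal {
    return a - b;
}

# Returns the product of two numbers.
function multiply(decimal a, decimal b) returns decimal {
    return a * b;
}

# Returns the quotient of two numbers, or an error for a zero divisor.
# Returning an error lets the agent explain the problem to the user
# instead of failing at runtime.
function divide(decimal a, decimal b) returns decimal|error {
    if b == 0d {
        return error("Division by zero is undefined");
    }
    return a / b;
}
```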
Step 7: Interact with the agent¶
After completing the above steps, your math tutor assistant is now ready to answer questions. WSO2 Integrator: BI provides a built-in chat interface to interact with the agent.
To start chatting with the agent:
- Click the Chat button located at the top-left corner of the interface.
- You will be prompted to run the integration. Click Run Integration.
Step 8: Debug agent responses with tracing¶
To better understand how the agent arrives at its responses, you can enable tracing. Tracing provides a detailed view of the agent's reasoning flow, including tool invocations and intermediate steps used to generate the final answer.
Using the built-in tracing feature
WSO2 Integrator: BI provides a built-in tracing capability that can be enabled directly from the VS Code interface. Once enabled, you can view detailed execution logs for each agent interaction.
- Press `Ctrl/Cmd+Shift+P` to open the VS Code Command Palette.
- Run the command `Ballerina: Enable Tracing`.
- Click the Chat button located at the top-left corner of the interface.
- When prompted, click Run Integration to start the integration with tracing enabled.
- Interact with the agent by asking a question.
- Click the Show Logs button under the agent's response to view the detailed trace, which includes the agent's execution steps, tool calls, and intermediate reasoning details.
Publishing traces to external observability platforms
In addition to the built-in tracing support, WSO2 Integrator: BI allows you to integrate with external observability and tracing platforms. This is useful for advanced monitoring, distributed tracing, and analyzing agent behavior across larger systems and deployments.
For example, you can configure WSO2 Integrator: BI to export traces to Jaeger, a popular open-source distributed tracing platform. To learn how to connect WSO2 Integrator: BI with Jaeger, see Observe tracing using Jaeger.
You can view the list of supported observability and tracing platforms here: Observability tools and platforms supported by Ballerina.
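As a rough sketch, tracing in a Ballerina-based integration is typically switched on through the project's `Config.toml`; the exact keys and any Jaeger endpoint settings may differ between versions, so treat the linked Jaeger guide as authoritative:

```toml
# Enable tracing and select Jaeger as the trace exporter
# (key names are an assumption; verify against the Jaeger guide).
[ballerina.observe]
tracingEnabled = true
tracingProvider = "jaeger"
```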