Video Tutorial
Transcript
All right, so in this video I want to actually test an agent with you and walk you through, step by step, how the AI agent works. To trigger the agent in a live environment, you go to your agent trigger, copy the web URL, and trigger it; I already made another video about that, so feel free to watch it if you haven't. In this case, though, we're going to use something called Test Flow. So let's go ahead and test this flow. It starts a new session, and when that session starts, you'll see this pop-up window that lets you chat with the agent, and you'll also see a blue outline around the node the agent is currently on.
So let's take a peek right here. The agent started and received a message that says "agent initial message"; that's how we trigger the agent. The agent then wakes up and sends its message. We told the AI to say, "Hey, Alisha here, just to confirm, this is @FirstName, right?" and it did exactly what I told it to do. Then you have the different output flows. I'm going to go ahead and respond with, "sorry, Alisha who?" So I'm acting confused, a little shocked and surprised. The agent receives the message and then waits to see if we send any more messages. In the settings we have it wait 3 seconds; in real use I'd highly recommend 25 to 60 seconds.
Then it waits, finishes waiting, the wait message ends, and the agent receives the final message. We'll get into what this means and why the setting exists in a dedicated settings video, so look out for that one; it covers how to dial in your agent settings. But if this were set to, say, 60 seconds, the AI would wait 60 seconds before generating a response. We do that because humans often send multiple messages in a row. If the agent is waiting, it packages all those messages together and sends them to the AI as one input.
After the AI receives all the messages, here's something very, very important to understand if you're going to use this platform: our AI uses a reasoning agent, and we use one so that we can control the flow of conversation with precision.
If you don't know the difference: a regular AI model is basically message in, message out. A reasoning model, on the other hand, thinks within itself before it actually responds; if you've used ChatGPT, that used to be the o-series models, and I believe GPT-5 now has reasoning built in, plus an advanced reasoning mode you can toggle on. That's what's happening here. So, as you see, the goal for this node is to determine whether we're speaking with @FirstName, which for this test contact is Arun.
Then we have the different output flows, which you can see right here: yes, we are speaking with them; the user neither confirmed nor denied. We also have the conversational triggers: the lead became hostile, aggressive, or seems irritated; the user no longer wants to talk to us. Those are these conditions right here. It's very important that you understand this is how it works: every time the agent receives a message, it runs this reasoning process. So again, we have the output flows and the conversational triggers, and then it tells you the selected path; here, "the user neither confirmed nor denied" is what it selected.
Then we show you the actual reasoning and how we've trained the agent. The agent follows our training process; as I've said a few times throughout this series, Agent Kong is not just an AI model with a wrapper of some nodes. It's trained from the ground up, every step of the way, to be a human appointment setter. Step one: analyze the message. It actually analyzes the message, and here you see the emotional tone: irritated and confused. It understood this message as irritated and confused.
Step two: available output flows, listing the different flows it can take. Step three: available conversational triggers. Step four: a semantic matching process, where it analyzes the user's intent against the custom scenarios as well as against the output flows. So it asked: is the user irritated but not addressing confusion about identity? Does not match. Has the user expressed disinterest? No. It said no to both while analyzing user intent against the output flows. And then step five, which is honestly the part I look at the most; the earlier steps are shown for transparency, but I don't study them much and usually go straight to step five.
I look at the final decision ranking analysis. It ranks number one: "the user neither confirmed nor denied," with a match score of 90%, and it tells you why: because the user is asking "Alisha who?" and has given no clear yes or no regarding identity. This is literally the AI thinking to itself in real time to decide where, if anywhere, it needs to send the conversation.
Then number two is this custom scenario. Why? Because the user shows irritation but is primarily seeking identity clarification, which fits that flow better than mere hostility, and so on. Then it tells you its decision logic. Winner: "the user neither confirmed nor denied," with 90% semantic alignment. Why this beats the alternatives: it directly matches the user's request for clarification and the absence of a confirmation or denial, whereas the hostility scenario only partially fits the tone but not the intent. Confidence factors: clear user confusion about identity, and an explicit lack of confirmation. Finally, it tells you the decision it chose, "the user neither confirmed nor denied," which as you can see is that one.
Now, why do I bring this up? If at any point your AI has a conversation and you're not happy with where it went, or you think it took the wrong path, you can always go view the session logs; we have another video on that, so please watch it to see how to pull them up. Right now we're viewing the session logs in a testing environment, but you can also view them in the live production environment, when the agent is actually texting humans. To wrap this point up: any time you're not satisfied with how the AI pushed the conversation forward, look at its decision logic and decision criteria. For example, I once tested this flow and replied "no" expecting it to take the "user specifically said no" path.

But when I looked at the session logs, it hadn't taken that path, which was what I expected it to do. The reason was that the user had said no but added other text alongside it; they didn't only say no, so the AI didn't choose that path. It will always tell you why it chose what it chose. If you find it picked the wrong path, come back and tweak the output flows a little to fine-tune its behavior. Most of the time you'll be fine: maybe 9.5 times out of 10, if you just write in basic, simple English and don't get too fancy, as if you were talking to a 12-year-old, it will follow the output flows properly. But there may be the odd one, or half of one, out of 10 times where it doesn't choose the path you expected, and that's when you come back, tweak the output flows a little, and better set up the agent.
So that is how that works. I think the agent responded like that because I imported settings here, and the person who built this flow trained the agent to talk like Gen Z; I guess that's how you say sorry in Gen Z. So don't worry, the AI doesn't talk like that by default. That was just a flow I imported for testing, where the user specifies in the settings how the agent should talk, and he told it to talk more Gen Z. That's why it said that, so don't trip yourself out; I'd never actually seen that until I looked at the settings.
It said, "Oh sorry, I'm Alisha, Mate's team. My boss Mate wanted me to offer you a free AI sales audit. Can I send you some availability?" As you see, I didn't tell the AI to apologize; I simply said, "see if you can include this." Between nodes, the AI is obviously smart enough to carry the conversation forward in a normal, fluid way: it responds to the user's message ("Oh sorry, I'm Alisha, Mate's team") and then goes into what we told it to do, which is determine if we can send them some availability, along with the extra instructions I gave it here: "Can I send you some availability?"
Next I'll ask, "What's the pricing of this AI agent?" The reason I'm asking is so you can also see what happens when none of the flow conditions are met. As you see right here, we have two conditions: the person said they would like to book an appointment, or the user specifically states they are not interested in booking an appointment. I gave it neither; I asked about pricing. So now, when we peek at the reasoning behind the model's response, you'll see it chose "no event." Looking at step five: no event. Why? The user's query about pricing does not align with any booking flow or with the disinterest/hostility scenarios. Again, the AI is always aware of which scenarios exist, in this case disinterest and hostility, and neither got triggered, which is exactly what it says: the query does not align with the booking flow or with those conversational triggers. It then walks you through the different ranks, et cetera, and states its decision logic: no event, with 100% semantic alignment. Why this beats the alternatives: the user is requesting pricing details, which does not semantically align with any of the appointment-booking flows or the disinterest/hostility scenarios.
And that is why the agent is still in this node and still pushing its goal: "The audit is totally free. We'll chat about pricing if you want to move forward after." I haven't trained it on how to overcome this objection, which is why I'm not happy with this response; I'd have to go into the settings and actually train it, which we'll see in another video. But to explain what's going on: we trained the AI to look at the objection or question, whatever it is, answer it, and then continue to push forward toward the goal, which in this case is to send availability. For context, to keep this short, I'll say "yes," and you'll then see it send me off to the calendar booking node. The AI received the trigger, waited, finished waiting, packaged the "yes" message, and sent it to the AI agent, which does its reasoning as normal and decides where to push the conversation next.
Cool, the final decision: the person said they would like to book an appointment. That's the path it chose, so it sends the message. Now we're inside this node: "Cool, I have availability tomorrow, the 22nd, and Saturday the 23rd; which works better?" I'm going to say, "Can we do next Monday at 3:00 p.m.?" Because we've inherently trained the AI to check calendar availability any time it gets a specific date and time, and then book it in, we now have a clear date and time. What's interesting is that we said "next Monday." Again, this is where our AI agent's training shows.
It knows what day it is today. We always train the AI on what "next" means, what "the week after" means, et cetera. I'll say I'm in Toronto, so it's going to confirm the time zone before it confirms the actual time slot. Let's wrap this up and push it forward; give it a second. Awesome, it generated a response. One thing I also want to show you: we always surface this here. The AI agent used a tool; it grabbed the available date and time slots from the calendar, and we can view that data. This is coming from Go High Level, so our system has nothing to do with this response.
Sometimes users have been upset because the timing is wrong. We have nothing to do with the timing; it all comes from the calendar provider, in this case Go High Level. Go High Level is an extremely buggy platform, and we have seen issues where it sends the wrong time. I'm not going to sugarcoat anything on the Go High Level side; I wish we could write code to fix it, but we literally cannot control what Go High Level presents to us as the calendar times. As you see, it pulled calendar availability; I asked for next Monday, and it grabbed, I guess, Monday and Tuesday. It went through, did its thing, and generated the response, "I have Monday the 25th at 3:00 p.m. available. Should I book that for you?" I'll say, "Yes, please."
And then we wait. Awesome. As you see, the agent booked a call in the workflow through this tool call and replied, "You're booked for Monday the 25th at 3:00 p.m." And don't worry: your leads are never going to see "agent session ended." We show it to you inside the chat, as well as in the conversation view, which we'll get to in a minute, but the end lead will never see that message; it's displayed only to you. You can be confident your leads are never, ever going to see any of these system logging messages.

That, in a nutshell, is how this entire thing works. We went through literally the whole flow: check if we're speaking with the right person, then send availability. You now understand how the AI reasons within itself to move between nodes and control the flow of conversation, and that if none of the output flows are met, the AI stays localized on the node and continues trying to complete its goal. So when we asked, "What's the price?", it dealt with the pricing question the way it was trained to and then continued to push availability, because we never set up a condition routing a pricing question somewhere else. From there it moved to the booking node, as you saw, offered calendar dates and times, and conversationally booked the actual appointment; as you heard, the agent booked the appointment directly in the calendar.

And that's pretty much a quick summary of how your AI agent works. I hope after this video you're a lot clearer on the system. If you ever have questions, or the AI isn't responding, isn't talking the way you want it to, or something's happening that you don't want, you're always going to come into the session logs to understand what's going on and see what happened at any point. The session logs are what we and our support team use as well: any time you come to us for support, we're going to look at the session logs with you. We don't have any extra back-end view; we share everything that happens inside the session logs, so that both you and our team can be fully on board and understand exactly what's happening at each step of the way. And yeah, that is how your AI agent works.
Mastering Your Workflow: Testing and AI Reasoning in Agent Kong
Understanding how your AI agent thinks is the key to building powerful and reliable conversational flows. This guide will take you through the Test Flow feature, providing a deep dive into the Session Logs and the sophisticated AI Reasoning process that powers every decision your agent makes.
Testing Your Agent: A Live Simulation
The "Test Flow" feature provides an interactive window to simulate a conversation and see exactly how your agent responds to different inputs.
Activate the Channel: From the top menu in the Agent Hub, ensure the channel you wish to test (e.g., WhatsApp, SMS) is toggled on.
Initiate the Test: Click the Test Flow button at the bottom of the workflow canvas.
Interact and Observe: A new window will open with two main components:
Chat: A live chat interface where you can type messages as if you were the end-user.
Session Logs: A real-time log detailing every thought and action the AI takes.
As you chat, you will see a blue outline highlight the agent's current node on the main workflow canvas, giving you a live visual representation of its position in the flow.
The Brain of the Operation: Understanding AI Reasoning Through Session Logs
Agent Kong is powered by a proprietary Reasoning Agent. Unlike standard Large Language Models (LLMs) that simply generate a response to an input (message in, message out), our agent first thinks and reasons to determine the best possible action. The Session Logs provide a transparent, step-by-step view of this process.
After receiving a message, the AI follows these critical steps:
Message Reception and Delay:
The agent first registers that it has received a message.
It then respects the Message Delay you've set in the global agent settings. This crucial pause allows the agent to wait and see if a user sends multiple messages in quick succession. It then bundles all recent messages into a single input for a more complete understanding of the user's intent.
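In pseudocode terms, the wait-and-bundle step described above might look roughly like this. This is a minimal sketch, not the platform's real implementation: the `inbox.poll()` shape, the `FakeInbox` stand-in, and the window-reset behavior are all assumptions for illustration.

```python
import time

def collect_messages(inbox, delay_seconds=30):
    """Bundle messages that arrive within the delay window into one input.

    `inbox` is a hypothetical object with a non-blocking poll() that
    returns the next message string or None; the platform's actual API
    is not public, so this shape is assumed.
    """
    messages = []
    deadline = time.monotonic() + delay_seconds
    while time.monotonic() < deadline:
        msg = inbox.poll()
        if msg is not None:
            messages.append(msg)
            # Each new message restarts the wait window, so rapid
            # follow-ups are captured before the agent responds.
            deadline = time.monotonic() + delay_seconds
        else:
            time.sleep(0.05)
    # Everything is joined and handed to the reasoning step as one input.
    return "\n".join(messages)

class FakeInbox:
    """Tiny stand-in for demonstration: pops queued messages one at a time."""
    def __init__(self, items):
        self.items = list(items)
    def poll(self):
        return self.items.pop(0) if self.items else None

bundled = collect_messages(FakeInbox(["sorry...", "Alisha who???"]),
                           delay_seconds=0.2)
print(bundled)  # both messages, packaged as one input
```

The key point is only the last line: however many messages arrive during the window, the reasoning step sees a single combined input.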
AI Reasoning: Analysis of Possibilities:
The AI evaluates its current position in the workflow. It reads the Goal of its active node (e.g., a Milestone like "Determine if we're speaking with @FirstName").
It analyzes all possible paths forward from its current location, which includes all the Output Flows you have created for that node.
Simultaneously, it scans all Conversational Triggers that exist across the entire workflow to see if the user's message matches a global command (e.g., "the user wants to cancel their appointment").
Final Decision Ranking Analysis: This is the most critical part of the process, revealing the AI's decision-making logic.
The AI ranks every potential path (both Output Flows and Conversational Triggers) with a Match Score (e.g., 90%).
It provides a detailed Decision Logic explaining why it chose the winning path and why it discarded the others. For example, it might select "user neither confirmed nor denied" because the user's message was a question ("Alisha who?") rather than a clear "yes" or "no."
Final Decision and Action: The AI executes the highest-ranking path. This could mean moving to the next node in the sequence or jumping to a completely different part of the flow if a Conversational Trigger was activated.
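Conceptually, the ranking and decision steps boil down to scoring every candidate path and executing the highest. A toy sketch, assuming a simple score dictionary: the `rank_paths` helper, the path names, and the numeric scores are hypothetical; the platform's actual semantic-matching model is internal.

```python
def rank_paths(match_scores):
    """Rank every candidate path (output flows plus conversational
    triggers) by semantic match score, highest first, and return the
    winner together with the full ranking shown in the session logs."""
    ranked = sorted(match_scores.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    return winner, ranked

# Illustrative scores mirroring the "Alisha who?" example:
winner, ranking = rank_paths({
    "user neither confirmed nor denied": 0.90,
    "lead became hostile or irritated": 0.40,
    "yes, we are speaking with them": 0.05,
})
print(winner)  # user neither confirmed nor denied
```

The session log's "Decision Logic" section corresponds to the gap between the winner and the runners-up: a clear margin (0.90 vs. 0.40 here) is what makes the choice confident.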
Important Note: Handling Unmatched Messages
What if a user's message doesn't align with any of the Output Flows or Conversational Triggers? The agent will remain localized on its current node. It will not move forward. Instead, it will re-engage the user in an attempt to fulfill its current node's goal. This prevents the conversation from derailing and keeps the agent on task.
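One way to picture the "no event" fallback: if no path clears a confidence bar, the agent stays put and keeps pursuing its current goal. A hedged sketch, where the `dispatch` helper, the threshold value, and the path names are invented for illustration:

```python
def dispatch(match_scores, threshold=0.5):
    """Return the best-matching path, or None ("no event") when nothing
    aligns well enough -- in which case the agent remains on its current
    node, answers the user in place, and re-engages toward the goal."""
    if not match_scores:
        return None
    best = max(match_scores, key=match_scores.get)
    return best if match_scores[best] >= threshold else None

# "What's the pricing?" matches neither booking nor disinterest:
print(dispatch({"wants to book": 0.10, "not interested": 0.00}))  # None
# A plain "yes" clears the bar and advances to the booking node:
print(dispatch({"wants to book": 0.95, "not interested": 0.02}))  # wants to book
```

A `None` result is exactly the pricing-question scenario from the walkthrough: the agent answers the question and then keeps pushing its node's goal.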
A Live Test Walkthrough
Let's follow the example from the video to see this in action.
1. Agent Starts: The agent begins at the "User Verification" node and sends its first message: "Hey, Alisha here! Just to confirm, this is Arun... Right?"
2. User Responds: The user is confused and replies: "sorry... Alisha who???"
3. The AI Thinks: In the Session Logs, we see the reasoning:
Goal: The AI knows its current goal is to "determine if we're speaking with Arun."
Available Output Flows: It analyzes the three potential paths: "yes we are speaking with the right person," "the user neither confirmed nor denied," and "no the user denies that we are speaking with the right person."
Final Decision: The AI selects "the user neither confirmed nor denied" with a 90% match score.
Decision Logic: The log explains: "The user is asking 'Alisha who??' and has given no clear yes or no regarding identity." It correctly identifies that the user's question is a request for clarification, not a confirmation or denial.
4. AI Acts: Based on its decision, the agent moves the conversation to the next connected node, "Availability Check," and proceeds with the instructions defined there.
By reviewing the Session Logs, you gain unparalleled insight into your agent's behavior. This allows you to debug conversations, refine your node instructions, and build truly intelligent, predictable, and effective AI agents.
