Build an AI Chat App with JavaScript and Next.js


Let's build an AI application with the watsonx.ai SDK for JavaScript. Building your own AI application might sound like a challenging task, but in this video I'm going to break it down into simple steps. We'll be using Next.js to build a React frontend application.

We'll be using the watsonx.ai SDK for JavaScript to run inference with models. Then we'll use the same SDK to work with tools. And finally we'll import some community tools from wxflows. So let's dive into VS Code and get started. In VS Code I set up a new project, and here I'm going to use the Next.js CLI to bootstrap my application.

I'm going to run npx create-next-app@latest, making sure I'm using the latest version, and then define my project name, which will be watsonx-chat-app. It will take a few moments to set up my boilerplate application, but first I need to answer some questions. I'm just going to go with all the defaults, but when you build your own app you might want to use different settings. Once finished, it created a new directory called watsonx-chat-app, and in here you can find all the boilerplate for a Next.js application.

I'm going to move into this directory using my terminal, so I run cd watsonx-chat-app. In here I can start the application by running npm run dev, and in my browser I should now be able to see the boilerplate application.

The boilerplate application says to get started by editing src/app/page.tsx, which is a .tsx file, meaning that we're using TypeScript. In this file I can add our chat application. I'm going to go ahead and delete all the code that's already in here because I don't need any of that boilerplate. Instead I'll be copying in new code which will show our chat application. First I'm going to create the canvas, which is a simple div with some settings for the composition of the page. As you can see I'm using Tailwind.

Tailwind is a great library if you don't want to write all the CSS by hand. In this div I can add the header, an input bar which will be used to type in our question, and some components to render messages on the screen. So first let me add the header, and the header code will mention the title of the application. Then in here I can also add the boilerplate for the input bar; the input bar is not a controlled component yet, but we'll be hooking it up to state later on. Finally I'm going to add some code in here to render some placeholder messages. Once I've added the messages, I'm going to format my code and then save this file.
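As a rough sketch, the page could look something like this; the Tailwind class names and the exact markup are illustrative, not the literal code from the video:

```tsx
// src/app/page.tsx -- rough sketch of the layout; class names are illustrative
export default function Home() {
  return (
    <div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
      {/* Header with the application title */}
      <h1 className="text-xl font-bold mb-4">watsonx Chat App</h1>
      {/* Placeholder messages; these become dynamic later */}
      <div className="flex-1 overflow-y-auto space-y-2">
        <p className="text-right">You: Hey, how are you today?</p>
        <p className="text-left">AI: I'm okay, what about you?</p>
      </div>
      {/* Input bar; not a controlled component yet */}
      <div className="flex gap-2 mt-4">
        <input className="flex-1 border rounded p-2" placeholder="Type your question" />
        <button className="border rounded px-4">Send</button>
      </div>
    </div>
  );
}
```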

I'm going to format so it's all nicely structured. Then it's saved and we can find it in our browser. In the browser you can see that we have a simple header and a few placeholder messages, like "Hey, how are you today?", to which the AI replies "I'm okay, what about you?". And then we have an input bar which we can use to type our question. You can type in it and press the button, but it's not hooked up to anything yet.

So that's what we'll be doing next. But first I'm also going to delete some code from globals.css, which is CSS that I won't be using. I'm going to save this and then I can close the global CSS file. I'm also going to kill the process that's running in my terminal because I'm going to install the watsonx.ai SDK. I can install this from npm by running the command npm install @ibm-cloud/watsonx-ai.

This will install the library that I need to work with models available in watsonx.ai. After installing, I first need to set up a file with my environment variables. These environment variables are needed in order to connect to the models that you have in your watsonx.ai account. I create a new file which I call .env, and in here I need to set a couple of environment variables.

I need to set my API key and my project ID, and I also need to set the auth type, for which I'll be using IAM. To get your API key and your project ID, open up the watsonx.ai dashboard; in here you can find a developer access page which has all the details that you need. So I'm going to save this file, but before saving make sure to substitute the API key and project ID with your own details.
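As a sketch, the .env file ends up looking something like this; the exact variable names are an assumption based on the watsonx.ai Node SDK conventions, so check them against the developer access page and the SDK docs:

```
# .env -- variable names are assumptions; verify against the SDK docs
WATSONX_AI_AUTH_TYPE=iam
WATSONX_AI_APIKEY=<your-api-key>
WATSONX_AI_PROJECT_ID=<your-project-id>
```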

I'm not going to add the connection to watsonx.ai in my page.tsx file. Instead I'm going to create a new file which I call actions.ts. This file will have all the functions that run server-side, because with Next.js you can build both client-side code, which is what you see in the browser, and server-side code, which runs in the background. So I'm going to make sure I have a server-side file by defining "use server" at the top, and then in here I'm going to define a function.

And this function I'm going to call message. This is the message function that I will be using to send a message to the large language model. In here I can add the logic to connect to watsonx.ai.

So I'm going to import the connection instance from the SDK: I import WatsonXAI from the library I just installed, and then I can use it inside my message function. I also need to set a type because we are using TypeScript.

So I'm creating a Message type. I'm going to export it as well because I might be using it in different places. This will have two fields: role, which is a string, and the role field is used to define who the message is coming from.

Whether that's you, the large language model, or a system prompt; I'll explain more about system prompts later on. It also has a content field, which is the actual response from the large language model, a tool call, or the message you're sending to the LLM. It is optional, though, because for example tool calls don't have a content field, and it's also a string.
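In code, a minimal version of that type might look like this:

```ts
// actions.ts -- a minimal Message type as described above
export type Message = {
  role: string; // who the message comes from: "user", "assistant", or "system"
  content?: string; // optional, because e.g. tool call messages have no content
};
```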

In my message function I can now set the input to be messages, and messages will be of type Message, but of course an array, because you could have multiple messages. I'm also going to import my environment variables; these are the environment variables that we created for the watsonx.ai SDK, so they should be in the .env file we created a little bit ago. Then let me put some code inside the message function; this code is to connect with watsonx.ai.

You need to set a service URL and this includes your region. So for me it's us-south, for you it might be something else. I've also set max tokens.

So this is the maximum number of tokens the LLM returns to us. I'm setting up a textChat function which connects to watsonx.ai with the model mistral-large, which is a very nice model if you're working with data sets or doing tool calling, which we'll be doing later on. In here I'm also passing all the messages; these are the messages that you create in your front-end application.

And then finally I'm returning the LLM response, the response coming directly from the large language model. Let me format this code and then save it. Make sure you export this function because we're going to need it in our page.tsx file.
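Putting the server side together, a sketch of the message function could look like this; the textChat parameters and response shape follow my reading of the @ibm-cloud/watsonx-ai SDK docs, so verify them (and how the Message type lines up with the SDK's own message type) against the version you installed:

```ts
// actions.ts -- sketch of the server-side message function
// (uses the Message type defined above)
"use server";

import { WatsonXAI } from "@ibm-cloud/watsonx-ai";

export async function message(messages: Message[]) {
  // The service URL includes your region; us-south here, yours may differ
  const service = WatsonXAI.newInstance({
    version: "2024-05-31",
    serviceUrl: "https://us-south.ml.cloud.ibm.com",
  });

  const response = await service.textChat({
    modelId: "mistralai/mistral-large", // works well for tool calling
    projectId: process.env.WATSONX_AI_PROJECT_ID ?? "",
    maxTokens: 200, // maximum number of tokens the LLM returns
    messages,
  });

  // Return the LLM's message directly to the caller
  return response.result.choices?.[0].message;
}
```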

Back in page.tsx, at the very top I need to define that this is a client-side component, so I add "use client" before I start to import the different functions from my actions.ts file. Before doing so, I'm going to set up a state variable to keep track of our input message. This is the message that you type, whether it's your question or maybe something else you want to ask the LLM. So I'm going to import useState from React, which is the UI library we're using.

And then I can start creating my local state variables. I create a local state which I call inputMessage; the first value is the actual value and the second value is the function to update the state, which I call setInputMessage. By default the input message will be an empty string, and I can use this state variable to turn the input bar at the very bottom of the application into a controlled component.

Meaning that whenever we type something in there, it's going to update the state, and we can use this state, for example, to send a message to the large language model. So I'm going to scroll down to my input field, and in here I set the value to be inputMessage, which is the state variable. I also set an onChange function which takes the event, whose target value is the value inside the message box, and uses it to update the input message.

So I'm going to save this, and now I should have a controlled component for the input box.
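As a sketch, the controlled input looks something like this:

```tsx
"use client";

import { useState } from "react";

export default function Home() {
  // Controlled input: the state variable drives the input's value
  const [inputMessage, setInputMessage] = useState("");

  return (
    <input
      value={inputMessage}
      onChange={(e) => setInputMessage(e.target.value)}
      placeholder="Type your question"
    />
  );
}
```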

Of course I'm going to need more, because I want to store messages in state. So I'm going to create a second state variable which I'll be calling messages, and a function to update these messages. Again I'm using the useState hook to create the state variables, and now the value will be an array, because messages are a list and not a single one. I can also import the type definition for this, which I already created in my actions.ts file, and I should import the message function as well, as we're going to use it to send a message to the large language model.

So I'm going to import both the message function and the type, which we call Message. Message will be the type of this state variable as well; I make sure to set it as an array of messages and not a single message.

We can also add a default message in here, which will be present in every conversation. I need to set a role called system. The system prompt is a very important prompt, as you use it to give the LLM additional instructions: your question will be sent to the LLM together with your system prompt. In the system prompt you can ask for details or give the LLM a certain role.

Maybe you're building a chat application for an insurance company or a medical one. You want to use the system prompt to make the LLM aware of the context of your question. So we can set role to system. We can then set content to be your actual system prompt.

In here I can set something like: you are a helpful assistant and you'll be answering questions related to math. That's because we'll be using some tools later on which are related to mathematics.
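Sketched as a fragment inside the Home component, that state might look like this:

```tsx
// page.tsx -- inside the Home component
const [messages, setMessages] = useState<Message[]>([
  {
    // The system prompt gives the LLM its role and context
    role: "system",
    content:
      "You are a helpful assistant and you'll be answering questions related to math.",
  },
]);
```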

After setting the system prompt, I can now start to create the function which will be used to send a message to the large language model. It's going to be an async function, because we need to await the LLM response, and I'm going to call it sendMessage. We don't need any input parameters because we can take the input message directly from state. In here we'll be setting up a shadow message history: I'll create a const which I call messageHistory, which again is an array.

It includes all the previous messages we might have in state, including my system prompt or any other messages sent to the large language model, and then also the most recent message, which is your question coming directly from the state variable inputMessage. The role for this one would be user, because you're sending the message, and content would then be inputMessage. This is a shadow history because we're not using it to update the state.

We're using it to send a message to the large language model. So in here we take the response coming from the model by awaiting the message function that we imported and created earlier on, passing it the shadow message history. Once we get the response, of course, we want to update the state. So if there is a response, we first update the shadow history: we say messageHistory.push,

and what we push is the response. I made a small typo there, so this should be response, and now all the red lines are gone. Finally, once this is all done, we update our messages state to include the complete shadow message history that we created here. So once I save this, the function is done.
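Here's a sketch of the whole sendMessage function as described; the cast on the response is an assumption about the shape the server action returns:

```tsx
// page.tsx -- inside the Home component; sketch of sendMessage
async function sendMessage() {
  // Shadow history: all previous messages plus the newest user question
  const messageHistory: Message[] = [
    ...messages,
    { role: "user", content: inputMessage },
  ];

  // Send the shadow history to the server action and await the LLM response
  const response = await message(messageHistory);

  if (response) {
    messageHistory.push(response as Message); // shape assumption
    // Update state with the full shadow history once the response is in
    setMessages(messageHistory);
  }
}
```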

We still need to hook up our function to the actual button, so I scroll down to our send button and add an onClick handler: whenever you click it, it sends a message using the sendMessage function. I format this code and make sure I start my application again, because we killed the process earlier on, and now I should be able to go back to my browser and see the chat application. You can see it still has the placeholder messages in there. So what I'm going to do next is make sure that all the messages being rendered are actual messages that you sent or that the LLM sent back to us. For this I'm going to go to this middle part, and in here I need to check the messages state.

I can wrap this in curly brackets and check that messages exists, and also make sure messages.length is bigger than zero. Once it's bigger than zero, I can actually start to render these components. Let me put this in the outer loop as well.

So I'm going to put this inside the div which is the div we already had before. And then inside this messages part I can start to map over all the messages that we have in history. So in here I'm going to map over the different messages.

So I'm going to call a .map function, and this .map function will handle the return. Make sure all the parentheses are set up correctly, so the return ends up in the right place.

Let me break up the return into two returns as well, because we have messages coming from the LLM and we have messages coming from ourselves. It's starting to look better. Let me format this code. Then there's a final bit we need to hook up here: we need to check what the role is.

The role of a message determines whether it's rendered on the left or on the right in our application, and it also determines which label is shown next to the message. I can check for the role here: if role is equal to user, I want to return the first type of message; if role is equal to assistant, which is the large language model,

Then I want to render the second type of message. So these roles are pretty important in order to make sure that we return the correct message, and then of course I want these to be dynamic. I don't want the placeholder messages anymore.

Instead I want the actual content either coming from the state or content coming from the large language model. As we're using React we need to make sure that each of the elements we return from a map function has its own key. So we need to set a key here. Otherwise you might be seeing errors in your developer console. And for this we can use the role and the index which is the index of the iteration of the different messages.

And we can use this same key construction for the different message types.
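Put together, the rendering section could be sketched like this, as a fragment inside the component's return; the class names are again illustrative:

```tsx
// page.tsx -- sketch of the message rendering
{messages && messages.length > 0 && (
  <div className="flex-1 overflow-y-auto space-y-2">
    {messages.map((msg, index) =>
      msg.role === "user" ? (
        // Messages you sent render on the right
        <p key={`${msg.role}-${index}`} className="text-right">
          You: {msg.content}
        </p>
      ) : msg.role === "assistant" ? (
        // Messages from the LLM render on the left
        <p key={`${msg.role}-${index}`} className="text-left">
          AI: {msg.content}
        </p>
      ) : null // system (and later tool) messages are not rendered
    )}
  </div>
)}
```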

So now if I go back to the browser, I should see no messages, and we can start typing and should see updates whenever we press the send button. Let me refresh the browser; our screen should be empty now. I can start typing in this box and ask the LLM: "So hey, how are you doing?" And then the LLM should respond with a message. As you can see, the LLM is pretty friendly; it's taking the role of a helpful assistant.

And it's going to ask how it can help with math questions, because we told the large language model it should be helping us with mathematical questions. To work with mathematical questions we can set up tools later on, but first I want to do some housekeeping: whenever we press the send button, I want to show a small loading indicator so you know something's happening in the background. For this I'm going to create a loading state, which I call isLoading.

Then I need a function to update the loading state, setIsLoading, and I can do useState(false), so by default my application isn't loading. Whenever I press send message, I want to set the loading state to true, so I know the application is doing something in the background. And whenever it finishes and has updated my message history, I want to make sure it's no longer in a loading state, so I set this back to false.

Something else I want to do: whenever I finish sending the message, I want to make sure that my input message is emptied out, because this way we don't need to delete it every time we want to send a follow-up question. Let me save this. Let me also scroll down, because whenever I press the send message button, I want the input field to be disabled. So I set a disabled attribute here and make sure it looks at isLoading. And on our button we can also show a small loading indicator so you know something is happening in the background.

If the application is loading, we set the label to "Loading..." with three dots; otherwise the label for the button should still be "Send". I can do more things here as well: for example, I can make sure that pressing Enter also sends the question. We're not going to do this for this application, but it's something you can implement yourself by using the onKeyDown property.
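As a sketch, the loading-state changes look like this:

```tsx
// page.tsx -- inside the Home component; loading state sketch
const [isLoading, setIsLoading] = useState(false);

async function sendMessage() {
  setIsLoading(true); // show that something is happening in the background
  // ...send the message and update the messages state as before...
  setIsLoading(false); // done loading
  setInputMessage(""); // empty the input for the next question
}

// And in the JSX, roughly:
// <input ... disabled={isLoading} />
// <button onClick={sendMessage}>{isLoading ? "Loading..." : "Send"}</button>
```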

Back in the browser, we should now be able to see a small loading indicator whenever we ask a question. I can ask another question: so, what do you know about tool calling? Which is what we'll be implementing next. As you can see, when I finish typing and press the button, the input field is disabled and you can see a small loading state over here. Looking at the LLM's response, you can see it's truncated, meaning we need to increase the maximum number of tokens the LLM generates for us. You can do this in our actions.ts file, which we need to head over to anyway to add our tools. In here you can see we are passing model parameters; I have maxTokens set to 200. You can increase it to 400, for example, and this way it shouldn't truncate any of the responses.

In the same file we're also going to set up our tools now. To set up the tools, you need to define a tool definition, which describes what your tool is and how the LLM should use it. This is really important, because otherwise the LLM might not pick the correct tool, or it might not know it should use your tool to answer certain questions. To do this, I'm going to paste some code in here which defines a tool to add two numbers together.

So I have this tool which I call add, and the add tool adds the values of a and b to get a sum. It takes two input properties, a and b, and adds these numbers together. I'm defining tools as a constant here, and I need to pass it to my chat function right there; so after messages comes tools. I'm also going to set the tool choice option to auto, meaning the LLM can decide for itself which tool to use, rather than me forcing it to use a certain tool.
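A sketch of the tool definition and the updated textChat call; the schema shape follows the OpenAI-style function format the watsonx chat API uses, and the camelCased toolChoiceOption parameter name is my assumption from the SDK docs:

```ts
// actions.ts -- sketch of the add tool definition
const tools = [
  {
    type: "function",
    function: {
      name: "add",
      description: "Adds the values of a and b to get a sum.",
      parameters: {
        type: "object",
        properties: {
          a: { type: "number", description: "First number to add" },
          b: { type: "number", description: "Second number to add" },
        },
        required: ["a", "b"],
      },
    },
  },
];

const response = await service.textChat({
  modelId: "mistralai/mistral-large",
  projectId: process.env.WATSONX_AI_PROJECT_ID ?? "",
  maxTokens: 400,
  messages,
  tools,
  toolChoiceOption: "auto", // let the LLM decide whether to use a tool
});
```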

I'm going to save this, but I still need to implement the logic to call the correct tool when the LLM tells us a certain tool needs to be called. When you work with tool calling, you, the application builder, still need to implement the logic that actually calls the tools; the LLM only proposes which tool you should call and how you should call it. So instead of just returning a message here, we need to check whether the LLM proposes a tool call, and if it does, we need to execute that actual tool call.

For this we need a bit more code. So instead of returning the message generated by the LLM we're going to look for any tool calls in its response. And we can actually console log this to see if it's going to propose a tool call whenever we ask a question that's related to adding two numbers together.

So I'm going to console.log this right here and then head over to my browser. Let me refresh the page so we have a fresh message history, and let me ask a question like: what is the outcome of 6 plus 6? You can see it's not giving me anything back, because the LLM actually proposed to do a tool call. If I go back to VS Code, I can see what tool call it proposed. If you look at the log here, you can see we have a tool with a tool call ID.

We also have a function which is named add, the same as in our tool definition, and there are two arguments, a and b, which are both 6. So we need to implement the logic right here to work with the add tool.

And this is what we're going to do next. For this I'm first going to map over all the different tool calls, taking the tool call response from the LLM and looking at its ID and its function. I check for the name add, because I already know I have a tool called add, and then I look at the different arguments.

The arguments are a and b, which in the previous case were both 6. I'm going to add these together, and this all together will be my tool response. The large language model is going to look at my tool response and, based on it, generate natural language to return to you in the application. Let me format this code. Once we have the tool responses, we should return a message history back, so the LLM can interpret the history and give us natural language in return.

For this I'm going to check the length of toolResponses. If it's bigger than zero, I'm going to call the message function again; so the message function calls itself, but now with an updated message history. It will include all the previous messages, the tool call the LLM proposed, and the response of the tool, and then it goes through this loop again and in the end returns your final message, which is natural language. After saving this, I can head back to the browser and ask the question again: what is the outcome of 6 plus 6? This time it should use the tool, look at the tool response, and finally show you that the outcome of 6 plus 6 is 12. You can easily see these tool calls getting more complex over time; at some point you might find yourself creating tools for all the different mathematical operations, like adding or multiplying or working with pi.
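Before moving on, here's a sketch of the tool-handling logic we just walked through, inside the message function; the tool_calls field names follow what we saw in the console log, but verify them against your SDK version:

```ts
// actions.ts -- sketch: execute proposed tool calls, then ask the LLM again
const llmMessage = response.result.choices?.[0].message;
const toolCalls = llmMessage?.tool_calls ?? [];

const toolResponses = toolCalls.map((toolCall: any) => {
  // Arguments arrive as a JSON string, e.g. '{"a": 6, "b": 6}'
  const args = JSON.parse(toolCall.function.arguments);
  let result: number | undefined;

  if (toolCall.function.name === "add") {
    result = args.a + args.b; // we execute the add tool ourselves
  }

  // The tool response references the tool call id so the LLM can match them up
  return { role: "tool", tool_call_id: toolCall.id, content: String(result) };
});

if (toolResponses.length > 0) {
  // Recurse: send the history plus the proposed tool call and its result back
  return message([...messages, llmMessage, ...toolResponses] as Message[]);
}

return llmMessage;
```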

So instead of defining all your tools by hand, you can also use community tools, for example tools created by different frameworks. Next we're going to use tools created in wxflows, which is a way to share community tools, such as tools to do different mathematical calculations. I'm going to kill the process running in my terminal and create a new directory called wxflows, which is where we'll be adding our tools: mkdir wxflows. I move into this directory, and in here I need to use the wxflows CLI. If you install the wxflows CLI following the documentation on GitHub, you can start to use it to import tools.

First let me check that I have the correct version installed, which validates that the CLI is installed without any issues; then I can use it to import a tool, for example a Wolfram Alpha tool which is used to do mathematical calculations. Let me also create a project configuration, which I can do by running wxflows init. It's going to ask me for a project name, and I call this api/watsonx-chat-app, so the name of my endpoint or project will be the same as the project I created in here.

So this will make things easier for me in the future when I create multiple tool endpoints. And now I can start importing a community tool. The community tool I'm importing from GitHub is a tool to do mathematical calculations.

It might take a few seconds to import this tool, and then you can see that all sorts of files have been created within my wxflows directory. The most important one is this tools.graphql file, where you can see we have a math tool. It's able to perform calculations, date and unit conversions, and all sorts of formulas.

You can also see it's using Wolfram Alpha under the covers, which is a very nice way to work with different mathematical calculations. I'm going to close this file and run the wxflows deploy step. Your tools created or imported using wxflows will be deployed to an endpoint, and this endpoint is what we connect to in our chat application. So I won't be defining all the tools locally; instead I have them deployed to an endpoint from which I can pull them in, and where they will also be executed.

Make sure to remember this endpoint, because we need to add two additional environment variables to our .env file: my wxflows API key and my wxflows endpoint. You can get the endpoint by looking at your terminal, as it's printed right there. You can get your API key by running the command wxflows whoami --apikey.

This should print your API key right here in the terminal, and after that you can paste it inside your environment file. Make sure to save the environment file before continuing.
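The two additions to .env look something like this; the variable names are assumptions based on wxflows examples, so check them against the wxflows docs:

```
# .env -- wxflows additions; names are assumptions
WXFLOWS_APIKEY=<your-wxflows-api-key>
WXFLOWS_ENDPOINT=<your-deployed-endpoint-url>
```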

I'm going to move back into my project root, watsonx-chat-app. Inside the actions.ts file I now need to start importing the wxflows SDK, which of course I need to install first. So I run the command npm install @wxflows/sdk@beta, making sure to install the latest beta version, because this is a community project still in development. After installing the SDK, I can make some additions to my actions.ts file.

In here I need to import the following: I import wxflows from the library I just installed, and I also need to extend my Message type to include a tool_call_id field, which is used by the large language model to tell you which tool to call and how to call it. It is optional, because not every message will have a tool call included, and it's also a string. I can delete the tools I created previously, because we'll be using the tools we just deployed to wxflows, namely the mathematical tool that you just saw.

Inside my message function I can now set up the SDK. I do this right here, passing it my endpoint and API key, which are in the environment file that we just set up. Using this tool client I can now retrieve tools: const tools is an await, because it's an asynchronous call, of toolClient.tools.
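A sketch of pulling in the deployed tools; the import path and property names are assumptions based on the wxflows beta docs, so verify them against the installed version:

```ts
// actions.ts -- sketch of retrieving deployed tools via the wxflows SDK
import wxflows from "@wxflows/sdk/watsonx"; // import path is an assumption

const toolClient = new wxflows({
  endpoint: process.env.WXFLOWS_ENDPOINT ?? "",
  apikey: process.env.WXFLOWS_APIKEY,
});

// Retrieve the deployed tool definitions (our math tool) from the endpoint
const tools = await toolClient.tools;
```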

These tools are the tools I just deployed, which is my mathematical tool. You don't need to change anything in the textChat call yet: the variable is still called tools, and we still want the LLM to decide for itself whether it calls one tool, another, or no tools at all. We do need to make some changes in the if-else statement, though, where we're checking for all the potential tool calls and then executing them.

By using wxflows and the community tools, we don't need to set up any of this tool-calling logic ourselves. Instead we let the wxflows SDK handle it for us. So I'm going to delete all this code and replace it with the following.
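Here's a sketch of that replacement; executeTools is my reading of the wxflows beta SDK surface, so treat the exact method name and signature as assumptions:

```ts
// actions.ts -- sketch: let the wxflows SDK execute the proposed tool calls
const toolResponses = await toolClient.executeTools(response); // method name is an assumption

if (toolResponses && toolResponses.length > 0) {
  // Feed the tool results back so the LLM can answer in natural language
  return message([...messages, ...toolResponses]);
}
```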

So I'm still creating a tool response constant, but this time it uses the tool client to execute the tools based on the entire chat history, and of course I make sure to delete all the previous tool-response logic that we no longer need. Save this and run your application again, because we previously killed the process running our frontend app. It's now available in the browser again, where you should refresh the page so you don't have any old data in there. You can ask another question, like what is the outcome of 6 plus 6, and it should render the response right here in the browser.

But this time it's not using the add tool that we created ourselves; it's using the math community tool from wxflows. You can see it's not actually producing the result, because there was an error while calling the tool. This sometimes happens, and my advice would be to refresh the browser and bear with the large language models, which can be a bit picky from day to day. So I'm going to refresh my browser and ask the same question again. This time you can see it's actually generating the outcome of 6 plus 6, which is 12.

You can see there are also links in there. The Wolfram Alpha tool doesn't only give you the result; it also gives you more information on how it came to the conclusion. This means you can also ask more complex things.

So let's try another question, like the square root of something more complex, something we won't be able to do with our add tool directly. I'm also going to refresh the page, because I don't want old message history to complicate my response. So: what is the square root of the third decimal of pi times 10? This is quite a complex question, and I wouldn't expect the LLM to be able to answer it without doing any tool calls. As you can see, it's not able to do it on the first go, so I always advise you to just try again.

You can see it's generating the tool call, but somehow it doesn't seem to execute the tool call directly. What I advise you to do in this scenario is go back to the application and run npm run dev again, because sometimes the models get confused and we need to restart the entire application.

We go back to our chat application and ask the same question again, and this time we get the response we expect: the answer to what is the square root of the third decimal of pi times 10, which is 17.77. Of course there's much more you can do with the watsonx.ai SDK; for example, you can implement streaming or chat with images. And that's how easy it is to build your AI applications using the watsonx.ai SDK. If you want to continue building, make sure to look at the code, which you can find in the description of this video.
