Build an AI-Powered E-commerce Chatbot with Next.js, Vercel AI SDK, and BigCommerce MCP

In this guide, you'll learn how to build an AI-powered chatbot using Next.js and the Vercel AI SDK. This chatbot will integrate with BigCommerce's Storefront MCP Server to provide product search, cart management, and checkout capabilities.

🚀 Beta access is required for the BigCommerce Storefront MCP Server; however, it isn't required for the majority of this guide. If you're interested in accessing the beta and joining our AI Labs Developer program, you can apply here.

Note: This code is provided for demo and educational purposes only. It is not intended for production use and should not be deployed to live environments without proper security reviews, testing, and production-ready implementations.

Prerequisites

This demo uses the following tech stack:

  • Next.js (App Router)

  • TypeScript

  • Vercel AI SDK

  • OpenAI

  • BigCommerce Storefront MCP Server

To follow this guide, you’ll need to have basic knowledge of React, Next.js, and TypeScript. You’ll also need to have Node.js and npm installed on your machine.

If you’d like to connect to the BigCommerce Storefront MCP Server, you need to create a storefront. If you don’t already have one, follow the Catalyst Getting Started guide for setup details.

Getting started

To follow along with this guide, you'll need to clone the sample repository and switch to the start branch:
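
The repository URL below is a placeholder; substitute the sample repository linked in this guide:

```bash
# Clone the sample repository (replace the placeholder with the actual URL)
git clone <sample-repo-url>
cd <sample-repo-directory>

# Switch to the branch containing the starting point for this guide
git checkout start
```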

The start branch contains a basic Next.js application with the foundation for our chatbot. This setup is based on the Next.js App Router Quickstart for Vercel AI SDK, which provides an excellent foundation for building AI-powered applications.

Environment Variables

Create a .env.local file in your project root with the following environment variables:
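
The exact variable names in the starter may differ; this is a minimal example, assuming the standard OPENAI_API_KEY read by the @ai-sdk/openai provider and an illustrative name for the MCP server URL you'll add later in this guide:

```bash
# API key used by the OpenAI provider (@ai-sdk/openai reads OPENAI_API_KEY)
OPENAI_API_KEY=sk-...

# BigCommerce Storefront MCP server URL (added later; variable name is illustrative)
BIGCOMMERCE_MCP_SERVER_URL=
```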

Running the Starter Project

To get started with the project, install your dependencies and then run the development server:
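
```bash
npm install
npm run dev
```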

Then open http://localhost:3000 in your browser. You should see a simple home page with a chat component that isn't functional yet. That's what you'll work on next.

Configuring the Vercel AI SDK

The Vercel AI SDK provides some useful React hooks for building AI-powered applications. Let's start by setting up the basic chat functionality by using the useChat hook from @ai-sdk/react. This hook provides several things:

  • messages: Array of all messages in the conversation

  • sendMessage: Function to send a new message

  • status: Current status of the request (ready, submitted, streaming, or error)

  • error: Any errors that occur during the chat

Update the main chat component, src/components/chat/index.tsx, to import and use the useChat hook and destructure the following properties: messages, sendMessage, status, and error. You can then delete the hard-coded properties for messages, status, and error that were defined in the starter code.

Then, update the onSubmit handler to call sendMessage({text: input}). The result will look like this:
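
A condensed sketch (your starter's markup and styling will differ; the local input state and form structure are assumed from the starter code):

```tsx
'use client';

import { useState } from 'react';
import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const [input, setInput] = useState('');

  // useChat manages conversation state and posts to /api/chat by default
  const { messages, sendMessage, status, error } = useChat();

  return (
    <form
      onSubmit={(event) => {
        event.preventDefault();
        if (!input.trim()) return;

        // Send the typed text as a new user message
        sendMessage({ text: input });
        setInput('');
      }}
    >
      <input
        value={input}
        onChange={(event) => setInput(event.target.value)}
        placeholder="Ask about our products..."
      />
    </form>
  );
}
```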

At this point, if you type a message in the chat and submit, you'll get an error. That's because the message is sent to a backend API endpoint that isn't implemented yet. Let's work on that next.

Creating the Chat Backend API Endpoint

Now let's implement the API endpoint that will handle chat requests. By default, the useChat hook sends messages to /api/chat. This endpoint already exists in the starter code. Now you need to update it to use the Vercel AI SDK's streamText function to generate responses.

In this example, we'll be using OpenAI as the LLM. You can easily swap in a different provider if you'd like; that flexibility is one of the benefits of the provider packages Vercel offers.

Basic API Route Implementation

Start by adding the following imports from @ai-sdk/openai and the ai packages to the route file located at src/app/api/chat/route.ts.
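
Assuming AI SDK 5, the imports are:

```ts
import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, type UIMessage } from 'ai';
```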

Note: you’ll see the completed version of this file at the end of this section. Next, extract the conversation history from the incoming chat request inside your POST route handler:
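
```ts
export async function POST(req: Request) {
  // The full UI message history sent by the useChat hook
  const { messages }: { messages: UIMessage[] } = await req.json();

  // ...
}
```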

The messages array contains the entire chat history, which provides context for the AI to generate appropriate responses.

Now, make a call to streamText and choose a model from OpenAI. Here, we’ll use gpt-4.1-mini. You'll also need to convert the incoming messages to the appropriate format by calling convertToModelMessages like so:
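
```ts
const result = streamText({
  // Any OpenAI chat model works here; this guide uses gpt-4.1-mini
  model: openai('gpt-4.1-mini'),
  // Convert UI messages (with their parts) into model messages
  messages: convertToModelMessages(messages),
});
```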

Finally, return the streaming response:
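
```ts
return result.toUIMessageStreamResponse();
```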

This converts the AI response to a format that the frontend can consume and stream to the user in real-time. Here's the complete API route code up until this point:
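
```ts
import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, type UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4.1-mini'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```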

Displaying Basic Chat Responses

With the basic setup complete, you can test your chat again in the browser by sending a message. At this point you shouldn't get an error, but you'll notice that no messages are displayed. However, if you add a console.log(messages) to your chat component, you should see the message data in the console.

If you dig in, you'll notice that you have an array of messages, each of which has a role property indicating whether the message came from the user or the bot. Each message also includes a parts array, and each part includes a type property as well as additional information that we'll cover later. For more details, you can reference the UIMessage Interface.

With a basic understanding of the underlying message structure, we can now handle displaying messages from the user and the bot. To display these messages, you’ll have to iterate through each message and the parts within each message and render the text appropriately.

Here's the basic logic the Vercel AI SDK documentation provides for displaying messages, which uses a switch statement to render different types of message parts:
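
```tsx
{messages.map((message) => (
  <div key={message.id}>
    {message.role === 'user' ? 'User: ' : 'AI: '}
    {message.parts.map((part, index) => {
      switch (part.type) {
        case 'text':
          // Plain text parts render directly
          return <span key={index}>{part.text}</span>;
        default:
          // Other part types (like tool output) are covered later
          return null;
      }
    })}
  </div>
))}
```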

There are quite a few more details to this when you get into it, but for simplicity's sake, the starter code comes with a <MessageRenderer> component that handles all of this logic for us. Inside your Chat component, uncomment the following snippet to render messages:
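
The exact commented-out snippet in the starter may differ slightly; conceptually, it passes the hook's state into the component:

```tsx
<MessageRenderer messages={messages} isLoading={false} />
```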

Now, you should be able to send a message such as "Hello World" and view the response in the chat window.

Showing a Loading State

You may have noticed that the <MessageRenderer> component accepts an isLoading prop which is hard-coded to false. We can calculate that loading state based on the status variable that comes from the useChat hook. We want isLoading to be set to true if the status is either "streaming" or "submitted".

Update isLoading like so:
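
```tsx
// Loading while a request is in flight or the response is still streaming
const isLoading = status === 'streaming' || status === 'submitted';
```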

You should now see a loading state while you're waiting for a response from the bot.

MCP Servers

MCP (Model Context Protocol) is a protocol that allows LLMs to connect to external data sources and tools. By definition of the protocol, MCP servers expose a list of tools that AI applications can call. Each tool includes details on what it does as well as type definitions for its parameters and return value. This gives LLMs the ability to decide autonomously which tools to call.

One of the beautiful things about creating your own chatbot is the ability to add tools that the LLM can decide to call. You can hard-code tools yourself, as shown in the Next.js App Router Getting Started example, but in this case, we're going to integrate with tools from the BigCommerce Storefront MCP Server.

Connecting to the BigCommerce Storefront MCP Server

NOTE: The BigCommerce MCP Server is currently in private Beta. If you'd like access, you can apply to join here.

The BigCommerce MCP server provides tools for:

  • Product search

  • Cart management (add, delete, update)

  • Checkout

For connecting to the Storefront MCP Server, you'll need the MCP server URL. For details on getting this URL, follow the BigCommerce MCP Server documentation. You'll then need to add this URL to your .env.local file.
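
For example (the variable name is illustrative; match whatever src/lib/mcpClient.ts reads):

```bash
BIGCOMMERCE_MCP_SERVER_URL=https://your-storefront-mcp-server-url
```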

The majority of the logic for connecting to the MCP server is already taken care of in the src/lib/mcpClient.ts file. You don't have to make any changes here, but below is a simplified version of that file which:

  • creates an instance of StreamableHTTPClientTransport with the MCP server URL

  • creates an MCP client instance from the transport

  • returns the client and the session id from the transport
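
A minimal sketch of that helper, assuming the AI SDK's experimental MCP client and the illustrative BIGCOMMERCE_MCP_SERVER_URL environment variable from earlier:

```ts
import { experimental_createMCPClient } from 'ai';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

export async function getClient() {
  // Transport pointed at the BigCommerce Storefront MCP server
  const transport = new StreamableHTTPClientTransport(
    new URL(process.env.BIGCOMMERCE_MCP_SERVER_URL!),
  );

  // MCP client instance created from the transport
  const client = await experimental_createMCPClient({ transport });

  // The transport exposes the session id once the connection is established
  return { client, sessionId: transport.sessionId };
}
```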

Next, you will need to register the tools from the MCP client with the bot itself. This happens in the API route.

Adding Tools to streamText

In your API Route handler, src/app/api/chat/route.ts, you'll need to call the getClient function we just looked at and retrieve its tools like so:
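
```ts
const { client } = await getClient();

// tools() returns the server's tools in a format streamText can use
const tools = await client.tools();
```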

Then, you can pass those tools into the config object in the call to streamText:
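
```ts
const result = streamText({
  model: openai('gpt-4.1-mini'),
  messages: convertToModelMessages(messages),
  // The LLM can now decide when to call the MCP server's tools
  tools,
});
```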

The full chat route will look like this:
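
A sketch of the assembled route, assuming the getClient helper lives at @/lib/mcpClient:

```ts
import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, type UIMessage } from 'ai';

import { getClient } from '@/lib/mcpClient';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // Connect to the BigCommerce Storefront MCP server and fetch its tools
  const { client } = await getClient();
  const tools = await client.tools();

  const result = streamText({
    model: openai('gpt-4.1-mini'),
    messages: convertToModelMessages(messages),
    tools,
    // Close the MCP client once the response has finished streaming
    onFinish: () => client.close(),
  });

  return result.toUIMessageStreamResponse();
}
```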

Testing it out

You should now be able to interact with your product data through the chat. This demo uses the sample plant data that comes with a fresh install of Catalyst, so you could ask a question like, "What plant products do you have?", and you should see a list of products returned.

You can then ask to add an item to your cart like "Add the snake plant to my cart". This should display a message saying the cart was successfully created and the item added.

Lastly, you can ask to checkout by saying "I'd like to check out", and you should see a checkout URL generated for you.

Displaying Different Message Types

You may have noticed that different types of chat messages have resulted in different types of UI responses being displayed. For example, a product query displays a grid of product results, but updating your cart shows a different component. How does that work?

Each part of a given message includes a type property. The most basic type is "text". However, the type can also be "dynamic-tool", which signifies that the part contains the output from a call to an MCP server tool. In addition to the type value, each tool call also includes a toolName property that identifies exactly which tool was called.

In our use case, we're handling a couple of different tool calls:

  • "search_products"

  • "add_item_to_cart"

  • "create_checkout_url"

You can probably imagine that those align directly with the capabilities of the Storefront MCP Server we outlined earlier. There's no action for you to take here other than understanding that these are the properties used to determine which UI to show to the user. For more information, you can reference the MessageRenderer.tsx, UserMessage.tsx, and BotMessage.tsx files.
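
To make that concrete, here's a hypothetical sketch of the branching (the component names are invented for illustration; the real logic lives in MessageRenderer.tsx):

```tsx
// Inside message.parts.map((part, index) => { ... })
switch (part.type) {
  case 'text':
    return <span key={index}>{part.text}</span>;
  case 'dynamic-tool':
    // part.toolName identifies which MCP tool produced this output
    switch (part.toolName) {
      case 'search_products':
        return <ProductGrid key={index} output={part.output} />;
      case 'add_item_to_cart':
        return <CartConfirmation key={index} output={part.output} />;
      case 'create_checkout_url':
        return <CheckoutLink key={index} output={part.output} />;
      default:
        return null;
    }
  default:
    return null;
}
```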

Conclusion

I hope this gives you a general understanding of the different components of building a chatbot that also integrates with an MCP server. From here, you should have the tools to start building custom experiences for your customers. That said, we’re eager to hear from you on what you’re building, what exciting use cases you have, and how we can help.

One additional reminder: if you’d like access to the BigCommerce Storefront MCP Server, you can apply to join here.