
    How to Build a LangGraph and Composio-Powered Discord Bot

    June 24, 2025

    With the rise of AI tools over the past couple of years, most of us are learning how to use them in our projects. In this article, I’ll teach you how to build a quick Discord bot with LangGraph and Composio.

    You’ll use LangGraph nodes to build a branching flow that processes incoming messages and detects intent, like chat, support, or tool usage. It’ll then route each message to the right logic based on what the user says.

    I know it may sound a bit weird to use LangGraph for a Discord bot, but you’ll soon see that this project is a pretty fun way to visualize how node-based AI workflows actually run.

    For now, the workflow is simple: you’ll figure out if the user is just chatting, asking a support question, or requesting that the bot perform an action, and respond based on that.

    What you will learn: 👀

    • How to use LangGraph to create an AI-driven workflow that powers your bot’s logic.

    • How you can integrate Composio to let your bot take real-world actions using external tools.

    • How you can use Discord.js and handle different message types like replies, threads, and embeds.

    • How you can maintain per-channel context using message history and pass it into AI.

    By the end of this article, you’ll have a decent, functional Discord bot that you can add to your server. It replies to users based on message context and even has tool-calling support! (And there’s a small challenge for you to implement something yourself.) 😉

    Prerequisites

    Make sure you have Discord installed on your machine so you can test the bot easily.

    This project is designed to demonstrate how you can build a bot powered by LangGraph and Composio. Before proceeding, it’s helpful to have a basic understanding of:

    • How to work with Node.js

    • What LangGraph is and roughly how it works

    • How to work with Discord.js

    • What AI Agents are

    If you’re not confident about any of these, try following along anyway. You might pick things up just fine. And if it ever gets confusing, you can always check out the full source code here.

    Table of Contents

    • How to Set Up the Environment

      • Initialize the Project

      • Install Dependencies

      • Configure Composio

      • Configure Discord Integration

      • Add Environment Variables

    • Build the Application Logic

      • Define Types and Utility Helpers

      • Implement LangGraph Workflow

      • Set Up Discord.js Client

    • Wrapping Up

    How to Set Up the Environment

    In this section, we will get everything set up for building the project.

    Initialize the Project

    Initialize a Node.js application with the following command:

    💁 Here I’m using Bun, but you can choose any package manager of your choice.

    mkdir discord-bot-langgraph && cd discord-bot-langgraph && bun init -y

    Now that our Node.js application is ready, let’s install some dependencies.

    Install Dependencies

    We’ll be using the following main packages and some other helper packages:

    • discord.js: Interacts with the Discord API

    • composio-core: Adds tool integration support to the bot

    • openai: Enables AI-powered responses

    • langchain: Manages LLM workflows

    • zod: Validates and parses data safely

    bun add discord.js openai @langchain/core @langchain/langgraph langchain composio-core dotenv zod uuid
    

    Configure Composio

    💁 You’ll use Composio to add integrations to your application. You can pick any integration you like, but here I’m using Google Sheets.

    First, you’ll need a Composio API key.

    Go ahead and create an account on Composio, grab your API key, and paste it into a .env file in the root of the project:

    Composio dashboard

    COMPOSIO_API_KEY=<your_composio_api_key>
    

    Authenticate yourself with the following command:

    composio login
    

    Once that’s done, run the composio whoami command, and if you see something like the below, you’re successfully logged in.

    Output of the `composio whoami` command

    You’re almost there: now you just need to set up integrations. Here, I’ll use Google Sheets, but again, you can set up any integration you like.

    Run the following command to set up the Google Sheets integration:

    composio add googlesheets
    

    You should see an output similar to this:

    Add Composio Google Sheets integration

    Head over to the URL that’s shown, and you should be authenticated like so:

    Composio authentication success

    That’s it. You’ve successfully added the Google Sheets integration and can access all its tools in your application.

    Once finished, run the composio integrations command to verify that it worked. You should see a list of all your integrations:

    Composio list of integrations

    Configure Discord Integration

    This is a bit off topic for this tutorial, but basically, you’ll create an application/bot on Discord and add it to your server.

    You can find a guide on how to create and add a bot to your server in the Discord.js documentation.

    And yes, it’s free if you’re wondering whether any step here requires a pro account or anything. 😉

    Make sure you populate these three environment variables:

    DISCORD_BOT_TOKEN=<YOUR_DISCORD_BOT_TOKEN>
    DISCORD_BOT_GUILD_ID=<YOUR_DISCORD_BOT_GUILD_ID>
    DISCORD_BOT_CHANNEL_ID=<YOUR_DISCORD_BOT_CHANNEL_ID>
    

    Add Environment Variables

    You’ll need a few other environment variables, including an OpenAI API key, for the bot to work.

    Your final .env file should look something like this:

    OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>

    COMPOSIO_API_KEY=<YOUR_COMPOSIO_API_KEY>

    DISCORD_BOT_TOKEN=<YOUR_DISCORD_BOT_TOKEN>
    DISCORD_BOT_GUILD_ID=<YOUR_DISCORD_BOT_GUILD_ID>
    DISCORD_BOT_CHANNEL_ID=<YOUR_DISCORD_BOT_CHANNEL_ID>
    

    Build the Application Logic

    Now that you’ve laid all the groundwork, you can finally start coding the project.

    Define Types and Utility Helpers

    Let’s start by writing some helper functions and defining the types of data you’ll be working with.

    In any application, and especially one like this, where multiple API calls give things plenty of chances to fail, it’s important to set up decent logging so we know when and how things go wrong.

    Create a new file named logger.ts inside the utils directory and add the following lines of code:

    // 👇 discord-bot-langgraph/utils/logger.ts

    export const DEBUG = "DEBUG";
    export const INFO = "INFO";
    export const WARN = "WARN";
    export const ERROR = "ERROR";

    export type LogLevel = typeof DEBUG | typeof INFO | typeof WARN | typeof ERROR;

    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    export function log(level: LogLevel, message: string, ...data: any[]) {
      const timestamp = new Date().toLocaleString();
      const prefix = `[${timestamp}] [${level}]`;

      switch (level) {
        case ERROR:
          console.error(prefix, message, ...data);
          break;
        case WARN:
          console.warn(prefix, message, ...data);
          break;
        default:
          console.log(prefix, message, ...data);
      }
    }
    
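Here’s a quick sketch of the logger in use. The function is repeated inline so the snippet runs on its own; in the project you’d import it from utils/logger.ts instead, and the example messages are just placeholders:

```typescript
// Inlined copy of the logger from utils/logger.ts so this snippet is
// self-contained; in the real project, import it instead.
const DEBUG = "DEBUG";
const INFO = "INFO";
const WARN = "WARN";
const ERROR = "ERROR";

type LogLevel = typeof DEBUG | typeof INFO | typeof WARN | typeof ERROR;

function log(level: LogLevel, message: string, ...data: unknown[]) {
  const timestamp = new Date().toLocaleString();
  const prefix = `[${timestamp}] [${level}]`;
  switch (level) {
    case ERROR:
      console.error(prefix, message, ...data);
      break;
    case WARN:
      console.warn(prefix, message, ...data);
      break;
    default:
      console.log(prefix, message, ...data);
  }
}

// Typical call sites in the bot (hypothetical messages):
log(INFO, "bot is starting up");
log(DEBUG, "incoming message:", { author: "meow", content: "hi" });
log(ERROR, "failed to reach the LLM", new Error("timeout"));
```

Every line comes out with a `[timestamp] [LEVEL]` prefix, and errors and warnings land on stderr where process managers expect them.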

    This is already looking great. Why not add a small environment variable validator too? Run it during startup, and if any variables are missing, the application exits with a clear log listing exactly which ones.

    Create a new file named env-validator.ts in the utils directory and add the following lines of code:

    // 👇 discord-bot-langgraph/utils/env-validator.ts

    import { log, ERROR } from "./logger.js";

    export const OPENAI_API_KEY = "OPENAI_API_KEY";

    export const DISCORD_BOT_TOKEN = "DISCORD_BOT_TOKEN";
    export const DISCORD_BOT_GUILD_ID = "DISCORD_BOT_GUILD_ID";
    export const DISCORD_BOT_CLIENT_ID = "DISCORD_BOT_CLIENT_ID";

    export const COMPOSIO_API_KEY = "COMPOSIO_API_KEY";

    export const validateEnvVars = (requiredEnvVars: string[]): void => {
      const missingVars: string[] = [];

      for (const envVar of requiredEnvVars) {
        if (!process.env[envVar]) {
          missingVars.push(envVar);
        }
      }

      if (missingVars.length > 0) {
        log(
          ERROR,
          "missing required environment variables. please create a .env file and add the following:",
        );
        missingVars.forEach((envVar) => console.error(`- ${envVar}`));
        process.exit(1);
      }
    };
    
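At startup you’d call the validator before constructing the Discord client or any API client, and exit if anything is missing. Here’s a sketch of that pattern; since the real validator calls process.exit, this version returns the missing names instead so the behavior is easy to check (the function name is mine, not the article’s):

```typescript
// A testable variation of the env validator: it returns the missing variable
// names instead of exiting, so you can assert on it. (Sketch only; the
// article's validateEnvVars exits directly.)
const findMissingEnvVars = (required: string[]): string[] =>
  required.filter((name) => !process.env[name]);

// At startup, fail fast before the bot connects to anything:
const missing = findMissingEnvVars([
  "OPENAI_API_KEY",
  "COMPOSIO_API_KEY",
  "DISCORD_BOT_TOKEN",
]);
if (missing.length > 0) {
  missing.forEach((name) => console.error(`- ${name}`));
  // process.exit(1); // uncomment in the real entry point
}
```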

    Now, let’s also define the types of data you’ll be working with.

    Create a new file named types.ts inside the types directory and add the following lines of code:

    // 👇 discord-bot-langgraph/types/types.ts

    export const QUESTION = "QUESTION";
    export const HELP = "HELP";
    export const SUPPORT = "SUPPORT";
    export const OTHER = "OTHER";
    export const TOOL_CALL_REQUEST = "TOOL_CALL_REQUEST";

    export type FinalAction =
      | { type: "REPLY"; content: string }
      | { type: "REPLY_IN_THREAD"; content: string }
      | {
          type: "CREATE_EMBED";
          title: string;
          description: string;
          roleToPing?: string;
        };

    export type MessageChoice =
      | typeof SUPPORT
      | typeof OTHER
      | typeof TOOL_CALL_REQUEST;

    export type SupportTicketType = typeof QUESTION | typeof HELP;

    export type Message = {
      author: string;
      content: string;
    };

    export type SupportTicketQuestion = {
      description: string;
      answer: string;
    };

    export type SupportTicket = {
      type?: SupportTicketType;
      question?: SupportTicketQuestion;
    };

    export type ToolCallRequestAction = {
      // actionLog is not intended to be shown to the end-user.
      // This is solely for logging purposes.
      actionLog: string;
      status: "success" | "failed" | "acknowledged";
    };
    

    The types are pretty self-explanatory, but here’s a quick overview.

    Message holds the user’s input and author. Each message can be marked as support, a tool call request, or just other, like spam or small talk.

    Support messages are further labeled as either help or a question using SupportTicketType.

    The graph returns a FinalAction, which can be a direct reply, a reply in a thread, or an embed. If it’s CREATE_EMBED and roleToPing is set, the ticket is a support/help request, so the bot can ping the moderator role.

    For tool-based responses, ToolCallRequestAction stores the status and an internal log used for debugging.
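To make the FinalAction union concrete, here’s a sketch of how a consumer can dispatch on it; TypeScript narrows the union in each case. The describeAction helper is hypothetical (the real Discord.js wiring comes later), and the types are inlined so the snippet stands alone:

```typescript
// Types inlined from types/types.ts so this snippet is self-contained.
// describeAction is a hypothetical helper, not part of the bot.
type FinalAction =
  | { type: "REPLY"; content: string }
  | { type: "REPLY_IN_THREAD"; content: string }
  | { type: "CREATE_EMBED"; title: string; description: string; roleToPing?: string };

function describeAction(action: FinalAction): string {
  switch (action.type) {
    case "REPLY":
      // TypeScript narrows `action` here, so `content` is available.
      return `reply: ${action.content}`;
    case "REPLY_IN_THREAD":
      return `thread reply: ${action.content}`;
    case "CREATE_EMBED":
      // roleToPing is only set for support/help tickets that need a moderator.
      return action.roleToPing
        ? `embed "${action.title}" (ping ${action.roleToPing})`
        : `embed "${action.title}"`;
  }
}
```

Because the switch is exhaustive over the union, the compiler will flag any FinalAction variant you forget to handle.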

    Now, you need one last helper function for your nodes: something to extract the response text from the LLM. Create a new file named helpers.ts inside the utils directory and add the following code:

    // 👇 discord-bot-langgraph/utils/helpers.ts

    import type { AIMessage } from "@langchain/core/messages";

    export function extractStringFromAIMessage(
      message: AIMessage,
      fallback: string = "No valid response generated by the LLM.",
    ): string {
      if (typeof message.content === "string") {
        return message.content;
      }

      if (Array.isArray(message.content)) {
        // Array content parts are either strings or objects like
        // { type: "text", text: "..." }, so pull the text out of both.
        const textContent = message.content
          .map((item) =>
            typeof item === "string" ? item : item.type === "text" ? item.text : "",
          )
          .join(" ");
        return textContent.trim() || fallback;
      }

      return fallback;
    }
    
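To see what the helper guards against, here’s a standalone sketch of the same extraction logic using a structural stand-in for AIMessage, so it runs without @langchain/core installed. In LangChain, content can be a plain string or an array of parts like `{ type: "text", text: "..." }`:

```typescript
// Structural stand-in for AIMessage so the snippet runs without
// @langchain/core. The logic mirrors extractStringFromAIMessage above.
type ContentPart = string | { type: string; text?: string };
type AIMessageLike = { content: string | ContentPart[] };

function extractString(
  message: AIMessageLike,
  fallback = "No valid response generated by the LLM.",
): string {
  if (typeof message.content === "string") return message.content;
  if (Array.isArray(message.content)) {
    const text = message.content
      .map((part) =>
        typeof part === "string" ? part : part.type === "text" ? part.text ?? "" : "",
      )
      .join(" ");
    return text.trim() || fallback;
  }
  return fallback;
}

// Plain string content passes straight through:
extractString({ content: "hello" }); // → "hello"
// Multi-part content is flattened to its text parts; non-text parts are dropped:
extractString({ content: [{ type: "text", text: "part one" }, { type: "image_url" }] }); // → "part one"
```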

    With these helper functions in place, you’re all set to start coding the core logic.

    Implement LangGraph Workflow

    Now that you have the types defined, it’s time to structure your graph and connect its nodes with edges.

    Create a new file named graph.ts inside the src directory and add the following lines of code:

    // 👇 discord-bot-langgraph/src/graph.ts

    import { Annotation, END, START, StateGraph } from "@langchain/langgraph";
    import {
      type FinalAction,
      type ToolCallRequestAction,
      type Message,
      type MessageChoice,
      type SupportTicket,
    } from "../types/types.js";
    import {
      processToolCall,
      processMessage,
      processOther,
      processSupport,
      processSupportHelp,
      processSupportQuestion,
    } from "./nodes.js";
    import { processMessageEdges, processSupportEdges } from "./edges.js";

    const state = Annotation.Root({
      message: Annotation<Message>(),
      previousMessages: Annotation<Message[]>(),
      messageChoice: Annotation<MessageChoice>(),
      supportTicket: Annotation<SupportTicket>(),
      toolCallRequest: Annotation<ToolCallRequestAction>(),
      finalAction: Annotation<FinalAction>(),
    });

    export type State = typeof state.State;
    export type Update = typeof state.Update;

    export function initializeGraph() {
      const workflow = new StateGraph(state);

      workflow
        .addNode("process-message", processMessage)
        .addNode("process-support", processSupport)
        .addNode("process-other", processOther)

        .addNode("process-support-question", processSupportQuestion)
        .addNode("process-support-help", processSupportHelp)
        .addNode("process-tool-call", processToolCall)

        // Edges setup starts here....
        .addEdge(START, "process-message")

        .addConditionalEdges("process-message", processMessageEdges)
        .addConditionalEdges("process-support", processSupportEdges)

        .addEdge("process-other", END)
        .addEdge("process-support-question", END)
        .addEdge("process-support-help", END)
        .addEdge("process-tool-call", END);

      const graph = workflow.compile();

      // To get the graph in png
      // getGraph() is deprecated though
      // Bun.write("graph/graph.png", await graph.getGraph().drawMermaidPng());

      return graph;
    }
    

    The initializeGraph function, as the name suggests, returns the graph you can use to execute the workflow.

    The process-message node is the starting point of the graph. It takes in the user’s message, processes it, and routes it to the appropriate next node: process-support, process-tool-call, or process-other.

    The process-support node further classifies the support message and decides whether it should go to process-support-help or process-support-question.

    The process-tool-call node handles messages when the user tries to trigger some kind of tool or action.

    The process-other node handles everything that doesn’t fall into the support or tool call categories. These are general or fallback responses.

    To help you visualize how things will shape up, here’s how the graph looks with all the different nodes (you’ll implement them next):

    LangGraph nodes for the Discord bot workflow

    To wire everything together, you need to define edges between nodes, including conditional edges that dynamically decide the next step based on the state.

    Create a new file named edges.ts inside the src directory and add the following lines of code:

    // 👇 discord-bot-langgraph/src/edges.ts

    import { END } from "@langchain/langgraph";
    import { type State } from "./graph.js";
    import { QUESTION, OTHER, SUPPORT, TOOL_CALL_REQUEST } from "../types/types.js";
    import { log, WARN } from "../utils/logger.js";

    export const processMessageEdges = (
      state: State,
    ): "process-support" | "process-other" | "process-tool-call" | "__end__" => {
      if (!state.messageChoice) {
        log(WARN, "state.messageChoice is undefined. Returning...");
        return END;
      }

      switch (state.messageChoice) {
        case SUPPORT:
          return "process-support";
        case TOOL_CALL_REQUEST:
          return "process-tool-call";
        case OTHER:
          return "process-other";
        default:
          log(WARN, "unknown message choice. Returning...");
          return END;
      }
    };

    export const processSupportEdges = (
      state: State,
    ): "process-support-question" | "process-support-help" | "__end__" => {
      if (!state.supportTicket?.type) {
        log(WARN, "state.supportTicket.type is undefined. Returning...");
        return END;
      }

      return state.supportTicket.type === QUESTION
        ? "process-support-question"
        : "process-support-help";
    };
    
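Since the edge functions are pure, you can sanity-check their routing in isolation, with no graph or LLM involved. Here’s a sketch where State is stubbed down to the one field the edge reads; routeMessage mirrors processMessageEdges (the stub names are mine):

```typescript
// Routing logic mirroring processMessageEdges, with State stubbed down to
// the single field the edge reads. Constants mirror types/types.ts.
const SUPPORT = "SUPPORT";
const TOOL_CALL_REQUEST = "TOOL_CALL_REQUEST";
const OTHER = "OTHER";
const END = "__end__";

type StateStub = { messageChoice?: string };

function routeMessage(state: StateStub): string {
  switch (state.messageChoice) {
    case SUPPORT:
      return "process-support";
    case TOOL_CALL_REQUEST:
      return "process-tool-call";
    case OTHER:
      return "process-other";
    default:
      // Covers both an undefined choice and an unrecognized label.
      return END;
  }
}

routeMessage({ messageChoice: "SUPPORT" }); // → "process-support"
routeMessage({}); // → "__end__"
```

Keeping edges as plain functions like this is what makes the graph’s control flow easy to reason about and test.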

    These are the edges that connect different nodes in your application. They direct the flow in your graph.

    Things are really shaping up, so let’s finish the core logic by implementing all the nodes for your application.

    Create a new file named nodes.ts inside the src directory and add the following lines of code:

    // 👇 discord-bot-langgraph/src/nodes.ts

    import { type State, type Update } from "./graph.js";
    import { ChatOpenAI } from "@langchain/openai";
    import { z } from "zod";
    import {
      HELP,
      TOOL_CALL_REQUEST,
      OTHER,
      QUESTION,
      SUPPORT,
    } from "../types/types.js";
    import { extractStringFromAIMessage } from "../utils/helpers.js";
    import { OpenAIToolSet } from "composio-core";
    import type { ChatCompletionMessageToolCall } from "openai/resources/chat/completions.mjs";
    import { v4 as uuidv4 } from "uuid";
    import { DEBUG, ERROR, INFO, log, WARN } from "../utils/logger.js";
    import {
      SystemMessage,
      HumanMessage,
      ToolMessage,
      BaseMessage,
    } from "@langchain/core/messages";

    // feel free to use any model. Here I'm going with gpt-4o-mini
    const model = "gpt-4o-mini";

    const toolset = new OpenAIToolSet();
    const llm = new ChatOpenAI({
      model,
      apiKey: process.env.OPENAI_API_KEY,
      temperature: 0,
    });

    export const processMessage = async (state: State): Promise<Update> => {
      log(DEBUG, "message in process message:", state.message);

      const llm = new ChatOpenAI({
        model,
        apiKey: process.env.OPENAI_API_KEY,
        temperature: 0,
      });

      const structuredLlm = llm.withStructuredOutput(
        z.object({
          type: z.enum([SUPPORT, OTHER, TOOL_CALL_REQUEST]).describe(`
    Categorize the user's message:
    - ${SUPPORT}: Technical support, help with problems, or questions about AI.
    - ${TOOL_CALL_REQUEST}: User asks the bot to perform tool action (e.g., "send an email", "summarize chat", "summarize google sheets").
    - ${OTHER}: General conversation, spam, or off-topic messages.
    `),
        }),
      );

      const res = await structuredLlm.invoke([
        [
          "system",
          `You are an expert message analyzer AI. You need to categorize the message into
    one of these categories:

    - ${SUPPORT}: If the message asks for technical support, help with a problem, or questions about AIs and LLMs.
    - ${TOOL_CALL_REQUEST}: If the message is a direct command or request for the bot to perform an action using external tools/services. Examples: "Summarize a document or Google Sheet", "Summarize the last hour of chat", "Send an email to devteam about this bug", "Create a Trello card for this feature request". Prioritize this if the user is asking the bot to *do* something beyond just answering.
    - ${OTHER}: For general chit-chat, spam, off-topic messages, or anything not fitting ${SUPPORT} or ${TOOL_CALL_REQUEST}.
    `,
        ],
        ["human", state.message.content],
      ]);

      return {
        messageChoice: res.type,
      };
    };

    export const processSupport = async (state: State): Promise<Update> => {
      log(DEBUG, "message in support:", state.message);

      const llm = new ChatOpenAI({
        model,
        apiKey: process.env.OPENAI_API_KEY,
        temperature: 0,
      });

      const structuredLlm = llm.withStructuredOutput(
        z.object({
          <span class="hljs-keyword">type</span>: z.enum([QUESTION, HELP]).describe(<span class="hljs-string">`
    Type of support needed:
    - <span class="hljs-subst">${QUESTION}</span>: User asks a specific question seeking information or an answer.
    - <span class="hljs-subst">${HELP}</span>: User needs broader assistance, guidance, or reports an issue requiring intervention/troubleshooting.
    `</span>),
        }),
      );
    
      <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> structuredLlm.invoke([
        [
          <span class="hljs-string">"system"</span>,
          <span class="hljs-string">`
    You are a support ticket analyzer. Given a support message, categorize it as <span class="hljs-subst">${QUESTION}</span> or <span class="hljs-subst">${HELP}</span>.
    - <span class="hljs-subst">${QUESTION}</span>: For specific questions.
    - <span class="hljs-subst">${HELP}</span>: For requests for assistance, troubleshooting, or problem reports.
    `</span>,
        ],
        [<span class="hljs-string">"human"</span>, state.message.content],
      ]);
    
      <span class="hljs-keyword">return</span> {
        supportTicket: {
          ...state.supportTicket,
          <span class="hljs-keyword">type</span>: res.type,
        },
      };
    };
    
    <span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> processSupportHelp = <span class="hljs-keyword">async</span> (state: State): <span class="hljs-built_in">Promise</span><Update> => {
      log(DEBUG, <span class="hljs-string">"message in support help:"</span>, state.message);
    
      <span class="hljs-keyword">return</span> {
        supportTicket: {
          ...state.supportTicket,
        },
        finalAction: {
          <span class="hljs-keyword">type</span>: <span class="hljs-string">"CREATE_EMBED"</span>,
          title: <span class="hljs-string">"🚨 Help Needed!"</span>,
          description: <span class="hljs-string">`A new request for help has been raised by **@<span class="hljs-subst">${state.message.author}</span>**.\n\n**Query:**\n> <span class="hljs-subst">${state.message.content}</span>`</span>,
          roleToPing: process.env.DISCORD_SUPPORT_MOD_ID,
        },
      };
    };
    
    <span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> processSupportQuestion = <span class="hljs-keyword">async</span> (state: State): <span class="hljs-built_in">Promise</span><Update> => {
      log(DEBUG, <span class="hljs-string">"message in support question category:"</span>, state.message);
    
      <span class="hljs-keyword">const</span> llm = <span class="hljs-keyword">new</span> ChatOpenAI({
        model,
        apiKey: process.env.OPENAI_API_KEY,
        temperature: <span class="hljs-number">0</span>,
      });
    
      <span class="hljs-keyword">const</span> systemPrompt = <span class="hljs-string">`
    You are a helpful AI assistant specializing in AI and LLMs. Answer
    the user's question concisely and accurately based on general knowledge in
    these areas. If the question is outside this scope (e.g., personal advice,
    non-technical topics), politely state you cannot answer. User's question:
    `</span>;
    
      <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> llm.invoke([
        [<span class="hljs-string">"system"</span>, systemPrompt],
        [<span class="hljs-string">"human"</span>, state.message.content],
      ]);
    
      <span class="hljs-keyword">const</span> llmResponse = extractStringFromAIMessage(res);
      <span class="hljs-keyword">return</span> {
        supportTicket: {
          ...state.supportTicket,
          question: {
            description: state.message.content,
            answer: llmResponse,
          },
        },
        finalAction: {
          <span class="hljs-keyword">type</span>: <span class="hljs-string">"REPLY"</span>,
          content: llmResponse,
        },
      };
    };
    
    <span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> processOther = <span class="hljs-keyword">async</span> (state: State): <span class="hljs-built_in">Promise</span><Update> => {
      log(DEBUG, <span class="hljs-string">"message in other category:"</span>, state.message);
    
      <span class="hljs-keyword">const</span> response =
        <span class="hljs-string">"This seems to be a general message. I'm here to help with technical support or perform specific actions if you ask. How can I assist you with those?"</span>;
    
      <span class="hljs-keyword">return</span> {
        finalAction: {
          <span class="hljs-keyword">type</span>: <span class="hljs-string">"REPLY_IN_THREAD"</span>,
          content: response,
        },
      };
    };
    

    There’s not much to explain for these nodes. The classifier nodes, processMessage and processSupport, each use structured output to force the model to return one label from a predefined set, like QUESTION or HELP for support messages. The system prompt defines what each label means, and the user’s message is passed in for classification. The remaining nodes either answer the user directly or assemble the final action the bot should take.
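    The graph wiring itself isn't shown in this section, but the idea is that a conditional edge reads the label a classifier node stored in state and picks the next node to run. Here is a minimal sketch of such a router; the label constants and node names are stand-ins for the real ones, not taken from the repo:

```typescript
// Hypothetical label constants mirroring the ones the classifier nodes use.
const SUPPORT = "SUPPORT";
const TOOL_CALL_REQUEST = "TOOL_CALL_REQUEST";
const OTHER = "OTHER";

// The kind of function a conditional edge would call: it reads the label that
// processMessage stored in state and returns the name of the next node.
function routeMessage(state: { messageChoice: string }): string {
  switch (state.messageChoice) {
    case SUPPORT:
      return "processSupport";
    case TOOL_CALL_REQUEST:
      return "processToolCall";
    default:
      return "processOther"; // OTHER and any unexpected label fall through here
  }
}
```

    Keeping the routing decision in state (rather than having a node call the next node directly) is what lets the graph stay declarative: each node only returns an update, and the edges decide where to go.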

    You’re almost there. But there’s one piece missing. Can you spot it?

    It’s the process-tool-call node, which handles the flow when the user asks the bot to use a tool. This is a big piece of the workflow.

    It’s a bit longer, so I’ll explain it separately.

    Modify the above nodes.ts file to add the missing node:

    <span class="hljs-comment">// 👇 discord-bot-langgraph/src/nodes.ts</span>
    
    <span class="hljs-comment">// Rest of the code...</span>
    <span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> processToolCall = <span class="hljs-keyword">async</span> (state: State): <span class="hljs-built_in">Promise</span><Update> => {
      log(DEBUG, <span class="hljs-string">"message in tool call request category:"</span>, state.message);
    
      <span class="hljs-keyword">const</span> structuredOutputType = z.object({
        service: z
          .string()
          .describe(<span class="hljs-string">"The target service (e.g., 'email', 'discord')."</span>),
        task: z
          .string()
          .describe(
            <span class="hljs-string">"A concise description of the task (e.g., 'send email to X', 'summarize recent chat', 'create task Y')."</span>,
          ),
        details: z
          .string()
          .optional()
          .describe(
            <span class="hljs-string">"Any specific details or parameters extracted from the message relevant to the task."</span>,
          ),
      });
    
      <span class="hljs-keyword">const</span> structuredLlm = llm.withStructuredOutput(structuredOutputType);
    
      <span class="hljs-keyword">let</span> parsedActionDetails: z.infer<<span class="hljs-keyword">typeof</span> structuredOutputType> = {
        service: <span class="hljs-string">"unknown"</span>,
        task: <span class="hljs-string">"perform a requested action"</span>,
      };
    
      <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> structuredLlm.invoke([
          [
            <span class="hljs-string">"system"</span>,
            <span class="hljs-string">`Parse the user's request to identify an action. Extract the target service, a description of the task, and any relevant details or parameters.
          Examples:
          - "Remind me to check emails at 5 PM": service: calendar/reminder, task: set reminder, details: check emails at 5 PM
          - "Send a summary of this conversation to #general channel": service: discord, task: send summary to channel, details: channel #general
          - "Create a bug report for 'login fails on mobile'": service: project_manager, task: create bug report, details: title 'login fails on mobile'`</span>,
          ],
          [<span class="hljs-string">"human"</span>, state.message.content],
        ]);
    
        parsedActionDetails = res;
        log(INFO, <span class="hljs-string">"initial parsing action details:"</span>, parsedActionDetails);
      } <span class="hljs-keyword">catch</span> (error) {
        log(ERROR, <span class="hljs-string">"initial parsing error:"</span>, error);
        <span class="hljs-keyword">return</span> {
          toolCallRequest: {
            actionLog: <span class="hljs-string">`Failed to parse user request: <span class="hljs-subst">${state.message.content}</span>`</span>,
            status: <span class="hljs-string">"failed"</span>,
          },
          finalAction: {
            <span class="hljs-keyword">type</span>: <span class="hljs-string">"REPLY_IN_THREAD"</span>,
            content:
              <span class="hljs-string">"I'm sorry, I had trouble understanding that action. Could you please rephrase it?"</span>,
          },
        };
      }
    
      <span class="hljs-keyword">try</span> {
        log(INFO, <span class="hljs-string">"fetching composio tools"</span>);
        <span class="hljs-keyword">const</span> tools = <span class="hljs-keyword">await</span> toolset.getTools({
          apps: [<span class="hljs-string">"GOOGLESHEETS"</span>],
        });
    
        log(INFO, <span class="hljs-string">`fetched <span class="hljs-subst">${tools.length}</span> tools. Errors if > 128 for OpenAI:`</span>);
    
        <span class="hljs-keyword">if</span> (tools.length === <span class="hljs-number">0</span>) {
          log(WARN, <span class="hljs-string">"no tools fetched from Composio. skipping..."</span>);
          <span class="hljs-keyword">return</span> {
            toolCallRequest: {
              actionLog: <span class="hljs-string">`Service: <span class="hljs-subst">${parsedActionDetails.service}</span>, Task: <span class="hljs-subst">${parsedActionDetails.task}</span>. No composio tools found`</span>,
              status: <span class="hljs-string">"failed"</span>,
            },
            finalAction: {
              <span class="hljs-keyword">type</span>: <span class="hljs-string">"REPLY_IN_THREAD"</span>,
              content: <span class="hljs-string">"Couldn't find any tools to perform your action."</span>,
            },
          };
        }
    
        log(DEBUG, <span class="hljs-string">"starting iterative tool execution loop"</span>);
    
        <span class="hljs-keyword">const</span> conversationHistory: BaseMessage[] = [
          <span class="hljs-keyword">new</span> SystemMessage(
            <span class="hljs-string">"You are a helpful assistant that performs tool calls. Your task is to understand the user's request and use the available tools to fulfill the request completely. You can use multiple tools in sequence to accomplish complex tasks. Always provide a brief, conversational summary of what you accomplished after using tools."</span>,
          ),
          <span class="hljs-keyword">new</span> HumanMessage(state.message.content),
        ];
    
        <span class="hljs-keyword">let</span> totalToolsUsed = <span class="hljs-number">0</span>;
        <span class="hljs-keyword">let</span> finalResponse: <span class="hljs-built_in">string</span> | <span class="hljs-literal">null</span> = <span class="hljs-literal">null</span>;
    
        <span class="hljs-keyword">const</span> maxIterations = <span class="hljs-number">5</span>;
        <span class="hljs-keyword">let</span> iteration = <span class="hljs-number">0</span>;
    
        <span class="hljs-keyword">while</span> (iteration < maxIterations) {
          iteration++;
          log(
            DEBUG,
            <span class="hljs-string">`Iteration <span class="hljs-subst">${iteration}</span>: calling LLM with <span class="hljs-subst">${tools.length}</span> tools`</span>,
          );
    
          <span class="hljs-keyword">const</span> llmResponse = <span class="hljs-keyword">await</span> llm.invoke(conversationHistory, {
            tools: tools,
          });
    
          log(DEBUG, <span class="hljs-string">`Iteration <span class="hljs-subst">${iteration}</span> LLM response:`</span>, llmResponse);
    
          <span class="hljs-keyword">const</span> toolCalls = llmResponse.tool_calls;
    
          <span class="hljs-keyword">if</span> ((!toolCalls || toolCalls.length === <span class="hljs-number">0</span>) && llmResponse.content) {
            finalResponse =
              <span class="hljs-keyword">typeof</span> llmResponse.content === <span class="hljs-string">"string"</span>
                ? llmResponse.content
                : <span class="hljs-built_in">JSON</span>.stringify(llmResponse.content);
            log(
              INFO,
              <span class="hljs-string">`Final response received after <span class="hljs-subst">${iteration}</span> iterations:`</span>,
              finalResponse,
            );
            <span class="hljs-keyword">break</span>;
          }
    
          <span class="hljs-keyword">if</span> (toolCalls && toolCalls.length > <span class="hljs-number">0</span>) {
            log(
              INFO,
              <span class="hljs-string">`Iteration <span class="hljs-subst">${iteration}</span>: executing <span class="hljs-subst">${toolCalls.length}</span> tool(s)`</span>,
            );
            totalToolsUsed += toolCalls.length;
    
            conversationHistory.push(llmResponse);
    
            <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> toolCall <span class="hljs-keyword">of</span> toolCalls) {
              log(
                INFO,
                <span class="hljs-string">`Executing tool: <span class="hljs-subst">${toolCall.name}</span> with args:`</span>,
                toolCall.args,
              );
    
              <span class="hljs-keyword">const</span> composioCompatibleToolCall: ChatCompletionMessageToolCall = {
                id: toolCall.id || uuidv4(),
                <span class="hljs-keyword">type</span>: <span class="hljs-string">"function"</span>,
                <span class="hljs-function"><span class="hljs-keyword">function</span>: </span>{
                  name: toolCall.name,
                  <span class="hljs-built_in">arguments</span>: <span class="hljs-built_in">JSON</span>.stringify(toolCall.args),
                },
              };
    
              <span class="hljs-keyword">let</span> toolOutputContent: <span class="hljs-built_in">string</span>;
              <span class="hljs-keyword">try</span> {
                <span class="hljs-keyword">const</span> executionResult = <span class="hljs-keyword">await</span> toolset.executeToolCall(
                  composioCompatibleToolCall,
                );
                log(
                  INFO,
                  <span class="hljs-string">`Tool <span class="hljs-subst">${toolCall.name}</span> execution result:`</span>,
                  executionResult,
                );
                toolOutputContent = <span class="hljs-built_in">JSON</span>.stringify(executionResult);
              } <span class="hljs-keyword">catch</span> (toolError) {
                log(ERROR, <span class="hljs-string">`Tool <span class="hljs-subst">${toolCall.name}</span> execution error:`</span>, toolError);
                <span class="hljs-keyword">const</span> errorMessage =
                  toolError <span class="hljs-keyword">instanceof</span> <span class="hljs-built_in">Error</span>
                    ? toolError.message
                    : <span class="hljs-built_in">String</span>(toolError);
    
                toolOutputContent = <span class="hljs-string">`Error: <span class="hljs-subst">${errorMessage}</span>`</span>;
              }
    
              conversationHistory.push(
                <span class="hljs-keyword">new</span> ToolMessage({
                  content: toolOutputContent,
                  tool_call_id: composioCompatibleToolCall.id, <span class="hljs-comment">// reuse the same id the model saw</span>
                }),
              );
            }
    
            <span class="hljs-keyword">continue</span>;
          }
    
          log(
            WARN,
            <span class="hljs-string">`Iteration <span class="hljs-subst">${iteration}</span>: LLM provided no tool calls or content`</span>,
          );
          <span class="hljs-keyword">break</span>;
        }
    
        <span class="hljs-keyword">let</span> userFriendlyResponse: <span class="hljs-built_in">string</span>;
    
        <span class="hljs-keyword">if</span> (totalToolsUsed > <span class="hljs-number">0</span>) {
          log(DEBUG, <span class="hljs-string">"Generating user-friendly summary using LLM"</span>);
    
          <span class="hljs-keyword">try</span> {
            <span class="hljs-keyword">const</span> summaryResponse = <span class="hljs-keyword">await</span> llm.invoke([
              <span class="hljs-keyword">new</span> SystemMessage(
                <span class="hljs-string">"You are tasked with creating a brief, friendly summary for a Discord user about what actions were just completed. Keep it conversational, under 2-3 sentences, and focus on what was accomplished rather than technical details. Start with phrases like 'Done!', 'Successfully completed', 'All set!', etc."</span>,
              ),
              <span class="hljs-keyword">new</span> HumanMessage(
                <span class="hljs-string">`The user requested: "<span class="hljs-subst">${state.message.content}</span>"
    
    I used <span class="hljs-subst">${totalToolsUsed}</span> tools across <span class="hljs-subst">${iteration}</span> iterations to complete their request. <span class="hljs-subst">${finalResponse ? <span class="hljs-string">`My final response was: <span class="hljs-subst">${finalResponse}</span>`</span> : <span class="hljs-string">"The task was completed successfully."</span>}</span>
    
    Generate a brief, friendly summary of what was accomplished.`</span>,
              ),
            ]);
    
            userFriendlyResponse =
              <span class="hljs-keyword">typeof</span> summaryResponse.content === <span class="hljs-string">"string"</span>
                ? summaryResponse.content
                : <span class="hljs-string">`Done! I've completed your request using <span class="hljs-subst">${totalToolsUsed}</span> action<span class="hljs-subst">${totalToolsUsed > <span class="hljs-number">1</span> ? <span class="hljs-string">"s"</span> : <span class="hljs-string">""</span>}</span>.`</span>;
    
            log(INFO, <span class="hljs-string">"Generated user-friendly summary:"</span>, userFriendlyResponse);
          } <span class="hljs-keyword">catch</span> (summaryError) {
            log(ERROR, <span class="hljs-string">"Failed to generate summary:"</span>, summaryError);
            userFriendlyResponse = <span class="hljs-string">`All set! I've completed your request using <span class="hljs-subst">${totalToolsUsed}</span> action<span class="hljs-subst">${totalToolsUsed > <span class="hljs-number">1</span> ? <span class="hljs-string">"s"</span> : <span class="hljs-string">""</span>}</span>.`</span>;
          }
        } <span class="hljs-keyword">else</span> {
          userFriendlyResponse =
            finalResponse ||
            <span class="hljs-string">`I understood your request about '<span class="hljs-subst">${parsedActionDetails.task}</span>' but couldn't find the right tools to complete it.`</span>;
        }
    
        <span class="hljs-keyword">const</span> actionLog = <span class="hljs-string">`Service: <span class="hljs-subst">${parsedActionDetails.service}</span>, Task: <span class="hljs-subst">${parsedActionDetails.task}</span>. Used <span class="hljs-subst">${totalToolsUsed}</span> tools across <span class="hljs-subst">${iteration}</span> iterations.`</span>;
    
        <span class="hljs-keyword">return</span> {
          toolCallRequest: {
            actionLog,
            status: totalToolsUsed > <span class="hljs-number">0</span> ? <span class="hljs-string">"success"</span> : <span class="hljs-string">"acknowledged"</span>,
          },
          finalAction: {
            <span class="hljs-keyword">type</span>: <span class="hljs-string">"REPLY_IN_THREAD"</span>,
            content: userFriendlyResponse,
          },
        };
      } <span class="hljs-keyword">catch</span> (error) {
        log(ERROR, <span class="hljs-string">"processing tool call with Composio:"</span>, error);
        <span class="hljs-keyword">const</span> errorMessage = error <span class="hljs-keyword">instanceof</span> <span class="hljs-built_in">Error</span> ? error.message : <span class="hljs-built_in">String</span>(error);
    
        <span class="hljs-keyword">return</span> {
          toolCallRequest: {
            actionLog: <span class="hljs-string">`Error during tool call (Service: <span class="hljs-subst">${parsedActionDetails.service}</span>, Task: <span class="hljs-subst">${parsedActionDetails.task}</span>). Error: <span class="hljs-subst">${errorMessage}</span>`</span>,
            status: <span class="hljs-string">"failed"</span>,
          },
          finalAction: {
            <span class="hljs-keyword">type</span>: <span class="hljs-string">"REPLY_IN_THREAD"</span>,
            content: <span class="hljs-string">"Sorry, I encountered an error while processing your request."</span>,
          },
        };
      }
    };
    

    The part before the first try-catch block is familiar: you’re figuring out which action the user wants to take. Now comes the juicy part: actually handling tool calls.

    At this point, you need to fetch the tools from Composio. Here, I’m just passing in Google Sheets as the option for demo purposes, but you could use literally anything once you authenticate yourself as shown above.

    After fetching the tools, you enter a loop where the LLM can use them. On each pass, the model reviews the conversation history and decides which tools to call; you execute those calls, feed the results back, and repeat until it produces a final answer. The loop is capped at 5 iterations as a safeguard so the LLM doesn’t get stuck in an endless back-and-forth.

    If tools were used, you ask the LLM to write a friendly summary for the user instead of dumping the raw JSON response. If no tools worked or none matched, just let the user know you couldn’t perform the action.
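    To make that control flow concrete, here is a stripped-down sketch of the loop with the LLM and tool execution mocked out as plain functions. fakeLlm and runFakeTool are stand-ins for illustration, not real LangChain or Composio APIs:

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };
type LlmTurn = { content: string | null; toolCalls: ToolCall[] };

// Mock LLM: requests one tool call first, then answers once it sees a result.
function fakeLlm(history: string[]): LlmTurn {
  const sawToolResult = history.some((m) => m.startsWith("tool:"));
  return sawToolResult
    ? { content: "Done! Sheet summarized.", toolCalls: [] }
    : { content: null, toolCalls: [{ name: "sheets_read", args: {} }] };
}

// Mock tool executor: returns a string result to feed back to the LLM.
function runFakeTool(call: ToolCall): string {
  return `tool:${call.name}:ok`;
}

// Same shape as the loop in processToolCall: call the LLM, execute any
// requested tools, append the results to history, and repeat until a final
// answer arrives or the iteration cap is hit.
function runToolLoop(userMessage: string, maxIterations = 5): string {
  const history: string[] = [`user:${userMessage}`];
  for (let i = 0; i < maxIterations; i++) {
    const turn = fakeLlm(history);
    if (turn.toolCalls.length === 0 && turn.content !== null) {
      return turn.content; // final answer, stop looping
    }
    for (const call of turn.toolCalls) {
      history.push(runFakeTool(call)); // feed tool output back
    }
  }
  return "Stopped after reaching the iteration limit."; // safeguard
}
```

    The real node does the same thing, except the "tool result" is a ToolMessage pushed onto the LangChain conversation history and the executor is Composio's executeToolCall.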

    Now with that, you’re done with the difficult part (I mean, it was pretty easy though, right?). From here on, you just need to set up and work with the Discord API using Discord.js.

    Set Up Discord.js Client

    In this application, you’re using slash commands. To use slash commands in Discord, you need to register them first. You can do this manually, but why not automate it as well? 😉

    Create a new file named slash-deploy.ts inside the utils directory and add the following lines of code:

    <span class="hljs-comment">// 👇 discord-bot-langgraph/utils/slash-deploy.ts</span>
    
    <span class="hljs-keyword">import</span> { REST, Routes } <span class="hljs-keyword">from</span> <span class="hljs-string">"discord.js"</span>;
    <span class="hljs-keyword">import</span> dotenv <span class="hljs-keyword">from</span> <span class="hljs-string">"dotenv"</span>;
    <span class="hljs-keyword">import</span> { log, INFO, ERROR } <span class="hljs-keyword">from</span> <span class="hljs-string">"./logger.js"</span>;
    <span class="hljs-keyword">import</span> {
      DISCORD_BOT_TOKEN,
      DISCORD_BOT_GUILD_ID,
      OPENAI_API_KEY,
      DISCORD_BOT_CLIENT_ID,
      validateEnvVars,
    } <span class="hljs-keyword">from</span> <span class="hljs-string">"./env-validator.js"</span>;
    
    dotenv.config();
    
    <span class="hljs-keyword">const</span> requiredEnvVars = [
      DISCORD_BOT_TOKEN,
      DISCORD_BOT_GUILD_ID,
      DISCORD_BOT_CLIENT_ID,
      OPENAI_API_KEY,
    ];
    validateEnvVars(requiredEnvVars);
    
    <span class="hljs-keyword">const</span> commands = [
      {
        name: <span class="hljs-string">"ask"</span>,
        description: <span class="hljs-string">"Ask the AI assistant a question or give it a command."</span>,
        options: [
          {
            name: <span class="hljs-string">"prompt"</span>,
            <span class="hljs-keyword">type</span>: <span class="hljs-number">3</span>,
            description: <span class="hljs-string">"Your question or command for the bot"</span>,
            required: <span class="hljs-literal">true</span>,
          },
        ],
      },
    ];
    
    <span class="hljs-keyword">const</span> rest = <span class="hljs-keyword">new</span> REST({ version: <span class="hljs-string">"10"</span> }).setToken(
      process.env.DISCORD_BOT_TOKEN!,
    );
    
    (<span class="hljs-keyword">async</span> () => {
      <span class="hljs-keyword">try</span> {
        log(INFO, <span class="hljs-string">"deploying slash(/) commands"</span>);
        <span class="hljs-keyword">await</span> rest.put(
          Routes.applicationGuildCommands(
            process.env.DISCORD_BOT_CLIENT_ID!,
            process.env.DISCORD_BOT_GUILD_ID!,
          ),
          {
            body: commands,
          },
        );
    
        log(INFO, <span class="hljs-string">"slash(/) commands deployed"</span>);
      } <span class="hljs-keyword">catch</span> (error) {
        log(ERROR, <span class="hljs-string">"deploying slash(/) commands:"</span>, error);
      }
    })();
    

    See your validateEnvVars function in action? Here, you’re listing the environment variables that must be set before the program runs. If any are missing, the script fails immediately with an error instead of crashing halfway through.

    Command failed output for deploying slash command to Discord
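    The env-validator.ts file itself isn't shown in this section, so here is a guess at what such a helper might look like. The repo's actual implementation may differ, for example by exiting the process instead of throwing:

```typescript
// A minimal validateEnvVars sketch: given a list of variable names, throw if
// any are unset or empty. The env parameter defaults to process.env but can
// be injected for testing.
function validateEnvVars(
  names: string[],
  env: Record<string, string | undefined> = process.env,
): void {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables: ${missing.join(", ")}`,
    );
  }
}
```

    Failing fast like this at startup is much friendlier than discovering a missing token deep inside a request handler.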

    You deploy the slash commands through the REST API wrapper that discord.js provides: call rest.put with your command definitions and the target guild, and Discord registers them.

    Now, simply run the commands:deploy bun script and you should have /ask registered as a slash command in your Discord server.


    At this point, you should see the /ask slash command available in your server. All that’s left is to create the index.ts file, which will be the entry point to your Discord bot.

    Create a new file named index.ts inside the src directory and add the following lines of code:

    <span class="hljs-comment">// 👇 discord-bot-langgraph/src/index.ts</span>
    
    <span class="hljs-keyword">import</span> dotenv <span class="hljs-keyword">from</span> <span class="hljs-string">"dotenv"</span>;
    <span class="hljs-keyword">import</span> {
      Client,
      Events,
      GatewayIntentBits,
      EmbedBuilder,
      <span class="hljs-keyword">type</span> Interaction,
    } <span class="hljs-keyword">from</span> <span class="hljs-string">"discord.js"</span>;
    <span class="hljs-keyword">import</span> { initializeGraph } <span class="hljs-keyword">from</span> <span class="hljs-string">"./graph.js"</span>;
    <span class="hljs-keyword">import</span> { <span class="hljs-keyword">type</span> Message <span class="hljs-keyword">as</span> ChatMessage } <span class="hljs-keyword">from</span> <span class="hljs-string">"../types/types.js"</span>;
    <span class="hljs-keyword">import</span> { ERROR, INFO, log } <span class="hljs-keyword">from</span> <span class="hljs-string">"../utils/logger.js"</span>;
    <span class="hljs-keyword">import</span> {
      DISCORD_BOT_TOKEN,
      DISCORD_BOT_GUILD_ID,
      OPENAI_API_KEY,
      validateEnvVars,
      DISCORD_BOT_CLIENT_ID,
      COMPOSIO_API_KEY,
    } <span class="hljs-keyword">from</span> <span class="hljs-string">"../utils/env-validator.js"</span>;
    
    dotenv.config();
    
    <span class="hljs-keyword">const</span> requiredEnvVars = [
      DISCORD_BOT_CLIENT_ID,
      DISCORD_BOT_TOKEN,
      DISCORD_BOT_GUILD_ID,
    
      OPENAI_API_KEY,
    
      COMPOSIO_API_KEY,
    ];
    validateEnvVars(requiredEnvVars);
    
    <span class="hljs-keyword">const</span> graph = initializeGraph();
    
    <span class="hljs-keyword">const</span> client = <span class="hljs-keyword">new</span> Client({
      intents: [
        GatewayIntentBits.Guilds,
        GatewayIntentBits.GuildMessages,
        GatewayIntentBits.MessageContent,
      ],
    });
    
    <span class="hljs-comment">// use a map to store history per channel to make it work properly with all the</span>
    <span class="hljs-comment">// channels and not for one specific channel.</span>
    <span class="hljs-keyword">const</span> channelHistories = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Map</span><<span class="hljs-built_in">string</span>, ChatMessage[]>();
    
    client.on(Events.ClientReady, <span class="hljs-keyword">async</span> (readyClient) => {
      log(INFO, <span class="hljs-string">`logged in as <span class="hljs-subst">${readyClient.user.tag}</span>. ready to process commands!`</span>);
    });
    
    client.on(Events.InteractionCreate, <span class="hljs-keyword">async</span> (interaction: Interaction) => {
      <span class="hljs-keyword">if</span> (!interaction.isChatInputCommand()) <span class="hljs-keyword">return</span>;
      <span class="hljs-keyword">if</span> (interaction.commandName !== <span class="hljs-string">"ask"</span>) <span class="hljs-keyword">return</span>;
    
      <span class="hljs-keyword">const</span> userPrompt = interaction.options.getString(<span class="hljs-string">"prompt"</span>, <span class="hljs-literal">true</span>);
      <span class="hljs-keyword">const</span> user = interaction.user;
      <span class="hljs-keyword">const</span> channelId = interaction.channelId;
    
      <span class="hljs-keyword">if</span> (!channelHistories.has(channelId)) channelHistories.set(channelId, []);
    
      <span class="hljs-keyword">const</span> messageHistory = channelHistories.get(channelId)!;
    
      <span class="hljs-keyword">const</span> currentUserMessage: ChatMessage = {
        author: user.username,
        content: userPrompt,
      };
    
      <span class="hljs-keyword">const</span> graphInput = {
        message: currentUserMessage,
        previousMessages: [...messageHistory],
      };
    
      messageHistory.push(currentUserMessage);
      <span class="hljs-keyword">if</span> (messageHistory.length > <span class="hljs-number">20</span>) messageHistory.shift();
    
      <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">await</span> interaction.reply({
          content: <span class="hljs-string">"Hmm... processing your request! 🐀"</span>,
        });
    
        <span class="hljs-keyword">const</span> finalState = <span class="hljs-keyword">await</span> graph.invoke(graphInput);
    
        <span class="hljs-keyword">if</span> (!finalState.finalAction) {
          log(ERROR, <span class="hljs-string">"no final action found"</span>);
          <span class="hljs-keyword">await</span> interaction.editReply({
            content: <span class="hljs-string">"I'm sorry, I couldn't process your request."</span>,
          });
          <span class="hljs-keyword">return</span>;
        }
    
        <span class="hljs-keyword">const</span> userPing = <span class="hljs-string">`<@<span class="hljs-subst">${user.id}</span>>`</span>;
        <span class="hljs-keyword">const</span> action = finalState.finalAction;
    
        <span class="hljs-keyword">const</span> quotedPrompt = <span class="hljs-string">`🗣️ "<span class="hljs-subst">${userPrompt}</span>"`</span>;
    
        <span class="hljs-keyword">switch</span> (action.type) {
          <span class="hljs-keyword">case</span> <span class="hljs-string">"REPLY"</span>:
            <span class="hljs-keyword">await</span> interaction.editReply({
              content: <span class="hljs-string">`<span class="hljs-subst">${userPing}</span>\n\n<span class="hljs-subst">${quotedPrompt}</span>\n\n<span class="hljs-subst">${action.content}</span>`</span>,
            });
            <span class="hljs-keyword">break</span>;
    
          <span class="hljs-keyword">case</span> <span class="hljs-string">"REPLY_IN_THREAD"</span>:
            <span class="hljs-keyword">if</span> (!interaction.channel || !(<span class="hljs-string">"threads"</span> <span class="hljs-keyword">in</span> interaction.channel)) {
              <span class="hljs-keyword">await</span> interaction.editReply({
                content: <span class="hljs-string">"Cannot create a thread in this channel"</span>,
              });
              <span class="hljs-keyword">return</span>;
            }
    
            <span class="hljs-keyword">try</span> {
              <span class="hljs-keyword">const</span> thread = <span class="hljs-keyword">await</span> interaction.channel.threads.create({
                name: <span class="hljs-string">`Action: <span class="hljs-subst">${userPrompt.substring(<span class="hljs-number">0</span>, <span class="hljs-number">50</span>)}</span>...`</span>,
                autoArchiveDuration: <span class="hljs-number">60</span>,
              });
    
              <span class="hljs-keyword">await</span> thread.send(
                <span class="hljs-string">`<span class="hljs-subst">${userPing}</span>\n\n<span class="hljs-subst">${quotedPrompt}</span>\n\n<span class="hljs-subst">${action.content}</span>`</span>,
              );
              <span class="hljs-keyword">await</span> interaction.editReply({
                content: <span class="hljs-string">`I've created a thread for you: <span class="hljs-subst">${thread.url}</span>`</span>,
              });
            } <span class="hljs-keyword">catch</span> (threadError) {
              log(ERROR, <span class="hljs-string">"failed to create or reply in thread:"</span>, threadError);
              <span class="hljs-keyword">await</span> interaction.editReply({
                content: <span class="hljs-string">`<span class="hljs-subst">${userPing}</span>\n\n<span class="hljs-subst">${quotedPrompt}</span>\n\nI tried to create a thread but failed. Here is your response:\n\n<span class="hljs-subst">${action.content}</span>`</span>,
              });
            }
            <span class="hljs-keyword">break</span>;
    
          <span class="hljs-keyword">case</span> <span class="hljs-string">"CREATE_EMBED"</span>: {
            <span class="hljs-keyword">const</span> embed = <span class="hljs-keyword">new</span> EmbedBuilder()
              .setColor(<span class="hljs-number">0xffa500</span>)
              .setTitle(action.title)
              .setDescription(action.description)
              .setTimestamp()
              .setFooter({ text: <span class="hljs-string">"Support System"</span> });
    
            <span class="hljs-keyword">const</span> rolePing = action.roleToPing ? <span class="hljs-string">`<@&<span class="hljs-subst">${action.roleToPing}</span>>`</span> : <span class="hljs-string">""</span>;
    
            <span class="hljs-keyword">await</span> interaction.editReply({
              content: <span class="hljs-string">`<span class="hljs-subst">${userPing}</span> <span class="hljs-subst">${rolePing}</span>`</span>,
              embeds: [embed],
            });
            <span class="hljs-keyword">break</span>;
          }
        }
      } <span class="hljs-keyword">catch</span> (error) {
        log(ERROR, <span class="hljs-string">"error generating AI response or processing graph:"</span>, error);
        <span class="hljs-keyword">const</span> errorMessage =
          <span class="hljs-string">"sorry, I encountered an error while processing your request."</span>;
        <span class="hljs-keyword">if</span> (interaction.replied || interaction.deferred) {
          <span class="hljs-keyword">await</span> interaction.followUp({ content: errorMessage, ephemeral: <span class="hljs-literal">true</span> });
        } <span class="hljs-keyword">else</span> {
          <span class="hljs-keyword">await</span> interaction.reply({ content: errorMessage, ephemeral: <span class="hljs-literal">true</span> });
        }
      }
    });
    
    <span class="hljs-keyword">const</span> token = process.env.DISCORD_BOT_TOKEN!;
    client.login(token);
    

    At the core of our bot is the Client object from discord.js. This represents your bot and handles everything from connecting to Discord’s API to listening for events like user messages or interactions.

    What’s with those intents? Discord uses intents as a way for bots to declare what kinds of data they want access to. In our case:

    • Guilds lets the bot connect to servers

    • GuildMessages allows it to see messages

    • MessageContent gives access to the actual content of messages (a privileged intent you need to enable in the Discord Developer Portal)

    These are quite standard, and there are many more for different use cases. You can find the full list in the Discord developer documentation.

    You also keep a Map to store per-channel message history so the bot can respond with context across multiple channels:

    <span class="hljs-keyword">const</span> channelHistories = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Map</span><<span class="hljs-built_in">string</span>, ChatMessage[]>();
    
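    This pattern can be sketched as a small self-contained snippet. Note that `appendToHistory` and `MAX_HISTORY` are hypothetical names introduced here for illustration; the bot inlines this logic directly in its /ask handler, including the 20-message cap.

```typescript
// Sketch of the per-channel history pattern, assuming the same ChatMessage
// shape the bot uses. `appendToHistory` and `MAX_HISTORY` are hypothetical
// names for illustration; the bot inlines this logic in its /ask handler.
type ChatMessage = { author: string; content: string };

const channelHistories = new Map<string, ChatMessage[]>();
const MAX_HISTORY = 20;

function appendToHistory(channelId: string, message: ChatMessage): ChatMessage[] {
  // Lazily create this channel's history on first use.
  if (!channelHistories.has(channelId)) channelHistories.set(channelId, []);
  const history = channelHistories.get(channelId)!;
  history.push(message);
  // Drop the oldest entry once the cap is exceeded.
  if (history.length > MAX_HISTORY) history.shift();
  return history;
}
```

    Because each channelId keys its own array, two channels running /ask at the same time never see each other's context.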

    Discord.js exposes several events you can listen to. Slash commands come in as Events.InteractionCreate events, which is what the bot listens for.

    On every /ask command, you take the user’s prompt along with any previous messages. If channelHistories has no entry for that channelId, meaning the channel is being used for the first time, you initialize it with an empty array. You then feed the prompt and history into the graph as input:

    <span class="hljs-keyword">const</span> finalState = <span class="hljs-keyword">await</span> graph.invoke({
      message: currentUserMessage,
      previousMessages: [...messageHistory],
    });
    

    Depending on the finalAction.type the graph returns, you either:

    • reply directly,

    • create a thread and respond there,

    • or send an embed (for support-type replies).

    If a thread can’t be created, you fall back to replying in the main channel. Message history is capped at 20 messages per channel to keep things lightweight.
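    This branching models naturally as a discriminated union in TypeScript. The shape below is inferred from the switch statement in the handler (the real type lives alongside the graph code), so treat the exact field names as assumptions:

```typescript
// A sketch of the finalAction shape as a discriminated union. Field names
// are inferred from the switch in the interaction handler; the actual type
// is defined alongside the graph, so treat this as an assumption.
type BotAction =
  | { type: "REPLY"; content: string }
  | { type: "REPLY_IN_THREAD"; content: string }
  | { type: "CREATE_EMBED"; title: string; description: string; roleToPing?: string };

function describeAction(action: BotAction): string {
  switch (action.type) {
    case "REPLY":
      return `reply: ${action.content}`;
    case "REPLY_IN_THREAD":
      return `thread: ${action.content}`;
    case "CREATE_EMBED":
      return `embed: ${action.title}`;
    default: {
      // Exhaustiveness guard: fails to compile if a variant goes unhandled.
      const unreachable: never = action;
      return unreachable;
    }
  }
}
```

    Narrowing on action.type means the compiler knows content exists on REPLY but title only on CREATE_EMBED, which is exactly what the bot's switch relies on.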

    Note that we’re not really using previousMessages much at the moment in the application, but I’ve prepared everything you need to handle querying previous conversations. You could easily create a new LangGraph node that queries or reasons over history if the bot needs to reference past conversations. (Take this as your challenge!)
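    If you take up the challenge, a minimal starting point could be a plain keyword filter over the stored history. Here, `searchHistory` is a hypothetical helper, and simple substring matching stands in for whatever retrieval approach (embeddings, BM25, and so on) you actually choose:

```typescript
// A minimal, hypothetical retrieval helper that a new LangGraph node could
// call over the channel history. Plain substring matching stands in for a
// real retrieval technique.
type ChatMessage = { author: string; content: string };

function searchHistory(history: ChatMessage[], query: string): ChatMessage[] {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  // Keep any message that mentions at least one query term.
  return history.filter((msg) =>
    terms.some((term) => msg.content.toLowerCase().includes(term)),
  );
}
```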

    This project should give you a basic idea of how you can use LangGraph + Composio to build a somewhat useful bot that can already handle decent stuff. There’s a lot more you could improve. I’ll leave that up to you. ✌️

    Here’s a quick demo of what we’ve built so far:


    Wrapping Up

    By now you should have a good idea of how LangGraph works and also how to power the bot with integrations using Composio.

    This is just a fraction of what you can do. Try adding more features and more integration support to the bot to fit your workflow. This can come in really handy.

    If you got lost somewhere while coding along, you can find the source code here.

    So, that is it for this article. Thank you so much for reading! See you next time. 🫡

    Love to build cool stuff like this? I regularly build such stuff every few weeks. Feel free to reach out to me here:

    • GitHub: github.com/shricodev

    • Portfolio: techwithshrijal.com

    • LinkedIn: linkedin.com/in/iamshrijal

    Source: freeCodeCamp Programming Tutorials: Python, JavaScript, Git & More 
