
    Implementing an LLM Agent with Tool Access Using MCP-Use

    May 13, 2025

    MCP-Use is an open-source library that lets you connect any LLM to any MCP server, giving your agents tool access like web browsing, file operations, and more — all without relying on closed-source clients. In this tutorial, we’ll use langchain-groq and MCP-Use’s built-in conversation memory to build a simple chatbot that can interact with tools via MCP. 

    Step 1: Setting Up the Environment

    Installing uv package manager

    We will first set up our environment by installing the uv package manager. For Mac or Linux:

    curl -LsSf https://astral.sh/uv/install.sh | sh 

    For Windows (PowerShell):

    powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

    Creating a new directory and activating a virtual environment

    We will then create a new project directory and initialize it with uv:

    uv init mcp-use-demo
    cd mcp-use-demo

    We can now create and activate a virtual environment. For Mac or Linux:

    uv venv
    source .venv/bin/activate

    For Windows:

    uv venv
    .venv\Scripts\activate

    Installing Python dependencies

    We will now install the required dependencies:

    uv add mcp-use langchain-groq python-dotenv

    Step 2: Setting Up the Environment Variables

    Groq API Key

    To use Groq’s LLMs:

    1. Visit Groq Console and generate an API key.
    2. Create a .env file in your project directory and add the following line:
    GROQ_API_KEY=<YOUR_API_KEY>

     Replace <YOUR_API_KEY> with the key you just generated.

    Brave Search API Key

    This tutorial uses the Brave Search MCP Server.

    1. Get your Brave Search API key from the Brave Search API portal.
    2. Create a file named mcp.json in the project root with the following content:
    {
      "mcpServers": {
        "brave-search": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-brave-search"
          ],
          "env": {
            "BRAVE_API_KEY": "<YOUR_BRAVE_SEARCH_API>"
          }
        }
      }
    }

    Replace <YOUR_BRAVE_SEARCH_API> with your actual Brave API key.

    Node.js

    Some MCP servers (including Brave Search) require npx, which comes with Node.js.

    • Download the latest version of Node.js from nodejs.org
    • Run the installer.
    • Leave all settings as default and complete the installation

    Using other servers

    If you’d like to use a different MCP server, simply replace the contents of mcp.json with the configuration for that server.
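For example, swapping in the reference filesystem server (@modelcontextprotocol/server-filesystem) might look like the following sketch; the directory path is a placeholder you would replace with a folder the agent is allowed to access:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/dir"
      ]
    }
  }
}
```

The rest of the tutorial code stays the same — MCPClient reads whatever servers mcp.json declares.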

    Step 3: Implementing the chatbot and integrating the MCP server

    Create an app.py file in the directory and add the following content:

    Importing the libraries

    from dotenv import load_dotenv
    from langchain_groq import ChatGroq
    from mcp_use import MCPAgent, MCPClient
    import os
    import sys
    import warnings
    
    warnings.filterwarnings("ignore", category=ResourceWarning)

    This section loads environment variables and imports required modules for LangChain, MCP-Use, and Groq. It also suppresses ResourceWarning for cleaner output.
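For intuition, load_dotenv essentially copies KEY=VALUE pairs from a .env file into the process environment. A simplified stdlib-only sketch of that behavior (the real python-dotenv library also handles quoting, inline comments, and variable interpolation):

```python
import os
from pathlib import Path

def load_env_file(path=".env"):
    """Simplified illustration of python-dotenv's load_dotenv:
    read KEY=VALUE lines and put them into os.environ."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        # Skip blanks, comments, and lines without an assignment
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: don't overwrite variables already in the environment
        os.environ.setdefault(key.strip(), value.strip())
```

After this runs against your .env, os.getenv("GROQ_API_KEY") returns the key you stored there.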

    Setting up the chatbot

    async def run_chatbot():
        """Run a chat using MCPAgent's built-in conversation memory."""
        load_dotenv()
        if not os.getenv("GROQ_API_KEY"):
            raise ValueError("GROQ_API_KEY is not set; add it to your .env file")
    
        configFile = "mcp.json"
        print("Starting chatbot...")
    
        # Creating MCP client and LLM instance
        client = MCPClient.from_config_file(configFile)
        llm = ChatGroq(model="llama-3.1-8b-instant")
    
        # Creating an agent with memory enabled
        agent = MCPAgent(
            llm=llm,
            client=client,
            max_steps=15,
            memory_enabled=True,
            verbose=False
        )

    This section loads the Groq API key from the .env file and initializes the MCP client using the configuration provided in mcp.json. It then sets up the LangChain Groq LLM and creates a memory-enabled agent to handle conversations.

    Implementing the chatbot

    # Add this in the run_chatbot function
        print("\n----- Interactive MCP Chat -----")
        print("Type 'exit' or 'quit' to end the conversation")
        print("Type 'clear' to clear conversation history")
    
        try:
            while True:
                user_input = input("\nYou: ")
    
                if user_input.lower() in ["exit", "quit"]:
                    print("Ending conversation....")
                    break
               
                if user_input.lower() == "clear":
                    agent.clear_conversation_history()
                    print("Conversation history cleared....")
                    continue
               
                print("\nAssistant: ", end="", flush=True)
    
                try:
                    response = await agent.run(user_input)
                    print(response)
               
                except Exception as e:
                    print(f"\nError: {e}")
    
        finally:
            if client and client.sessions:
                await client.close_all_sessions()

    This section enables interactive chatting, allowing the user to input queries and receive responses from the assistant. It also supports clearing the chat history when requested. The assistant’s responses are displayed in real-time, and the code ensures that all MCP sessions are closed cleanly when the conversation ends or is interrupted.
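The loop's command handling (exit/quit, clear, then dispatch to the agent) can be exercised without a live MCP server by stubbing the agent. A small sketch using a hypothetical EchoAgent stand-in for MCPAgent:

```python
import asyncio

class EchoAgent:
    """Hypothetical stand-in for MCPAgent: echoes input, records history."""
    def __init__(self):
        self.history = []

    async def run(self, text):
        self.history.append(text)
        return f"echo: {text}"

    def clear_conversation_history(self):
        self.history.clear()

async def drive(agent, inputs):
    """Replay the chat loop's command handling over a scripted input list."""
    outputs = []
    for user_input in inputs:
        if user_input.lower() in ("exit", "quit"):
            break                                 # end the conversation
        if user_input.lower() == "clear":
            agent.clear_conversation_history()    # wipe memory, keep chatting
            continue
        outputs.append(await agent.run(user_input))
    return outputs
```

For example, driving it with ["hi", "clear", "bye", "exit"] yields replies for "hi" and "bye" only, and after "clear" the agent's history contains just "bye".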

    Running the app

    if __name__ == "__main__":
        import asyncio
        try:
            asyncio.run(run_chatbot())
        except KeyboardInterrupt:
            print("Session interrupted. Goodbye!")
       
        finally:
            sys.stderr = open(os.devnull, "w")

    This section runs the asynchronous chatbot loop, managing continuous interaction with the user. It also handles keyboard interruptions gracefully, ensuring the program exits without errors when the user terminates the session.

    You can find the entire code here

    Step 4: Running the app

    To run the app, run the following command:

    uv run app.py

    This will start the app, and you can chat with the assistant, which uses the configured MCP server for the duration of the session.

    The post Implementing an LLM Agent with Tool Access Using MCP-Use appeared first on MarkTechPost.

