
    Implementing an LLM Agent with Tool Access Using MCP-Use

    May 13, 2025

MCP-Use is an open-source library that lets you connect any LLM to any MCP server, giving your agents access to tools such as web browsing and file operations without relying on closed-source clients. In this tutorial, we'll use langchain-groq and MCP-Use's built-in conversation memory to build a simple chatbot that can interact with tools via MCP.
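By the end of the tutorial, the project directory will look roughly like this (uv also generates files such as pyproject.toml when the project is initialized):

    mcp-use-demo/
      .env       # GROQ_API_KEY (Step 2)
      mcp.json   # MCP server configuration (Step 2)
      app.py     # chatbot implementation (Step 3)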

    Step 1: Setting Up the Environment

Installing the uv package manager

We will first set up our environment, starting with the uv package manager. For macOS or Linux:

    curl -LsSf https://astral.sh/uv/install.sh | sh 

    For Windows (PowerShell):

    powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

    Creating a new directory and activating a virtual environment

We will then create a new project directory and initialize it with uv:

    uv init mcp-use-demo
    cd mcp-use-demo

We can now create and activate a virtual environment. For macOS or Linux:

    uv venv
    source .venv/bin/activate

    For Windows:

    uv venv
    .venv\Scripts\activate

    Installing Python dependencies

We will now install the required dependencies:

    uv add mcp-use langchain-groq python-dotenv

    Step 2: Setting Up the Environment Variables

    Groq API Key

    To use Groq’s LLMs:

    1. Visit Groq Console and generate an API key.
    2. Create a .env file in your project directory and add the following line:
    GROQ_API_KEY=<YOUR_API_KEY>

     Replace <YOUR_API_KEY> with the key you just generated.
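As an optional sanity check, you can confirm that python-dotenv picks up the key with a short script (check_env.py is just an illustrative name):

    # check_env.py - optional check that the .env file is readable
    import os
    from dotenv import load_dotenv

    load_dotenv()
    print("GROQ_API_KEY found:", bool(os.getenv("GROQ_API_KEY")))

Running it with uv run check_env.py should print True.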

    Brave Search API Key

    This tutorial uses the Brave Search MCP Server.

1. Get your Brave Search API key from the Brave Search API page.
    2. Create a file named mcp.json in the project root with the following content:
    {
      "mcpServers": {
        "brave-search": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-brave-search"
          ],
          "env": {
            "BRAVE_API_KEY": "<YOUR_BRAVE_SEARCH_API>"
          }
        }
      }
    }

    Replace <YOUR_BRAVE_SEARCH_API> with your actual Brave API key.

Node.js

    Some MCP servers (including Brave Search) require npx, which comes with Node.js.

    • Download the latest version of Node.js from nodejs.org
    • Run the installer.
• Leave all settings as default and complete the installation.
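To confirm that Node.js and npx are available on your PATH, you can run:

    node --version
    npx --version

Both commands should print a version number.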

    Using other servers

    If you’d like to use a different MCP server, simply replace the contents of mcp.json with the configuration for that server.
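For example, a configuration for the official filesystem MCP server might look roughly like this (the directory path is a placeholder; check the server's documentation for the exact arguments it expects):

    {
      "mcpServers": {
        "filesystem": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-filesystem",
            "/path/to/allowed/dir"
          ]
        }
      }
    }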

    Step 3: Implementing the chatbot and integrating the MCP server

    Create an app.py file in the directory and add the following content:

    Importing the libraries

    from dotenv import load_dotenv
    from langchain_groq import ChatGroq
    from mcp_use import MCPAgent, MCPClient
    import os
    import sys
    import warnings
    
    warnings.filterwarnings("ignore", category=ResourceWarning)

This section imports the required modules for LangChain, MCP-Use, Groq, and python-dotenv, and suppresses ResourceWarning messages for cleaner output; the environment variables themselves are loaded later inside run_chatbot() via load_dotenv().

    Setting up the chatbot

    async def run_chatbot():
        """ Running a chat using MCPAgent's built in conversation memory """
        load_dotenv()
        os.environ["GROQ_API_KEY"] = os.getenv("GROQ_API_KEY")
    
        configFile = "mcp.json"
        print("Starting chatbot...")
    
        # Creating MCP client and LLM instance
        client = MCPClient.from_config_file(configFile)
        llm = ChatGroq(model="llama-3.1-8b-instant")
    
        # Creating an agent with memory enabled
        agent = MCPAgent(
            llm=llm,
            client=client,
            max_steps=15,
            memory_enabled=True,
            verbose=False
        )

    This section loads the Groq API key from the .env file and initializes the MCP client using the configuration provided in mcp.json. It then sets up the LangChain Groq LLM and creates a memory-enabled agent to handle conversations.

    Implementing the chatbot

    # Add this in the run_chatbot function
        print("n-----Interactive MCP Chat----")
        print("Type 'exit' or 'quit' to end the conversation")
        print("Type 'clear' to clear conversation history")
    
        try:
            while True:
            user_input = input("\nYou: ")
    
                if user_input.lower() in ["exit", "quit"]:
                    print("Ending conversation....")
                    break
               
                if user_input.lower() == "clear":
                    agent.clear_conversation_history()
                    print("Conversation history cleared....")
                    continue
               
                print("nAssistant: ", end="", flush=True)
    
                try:
                    response = await agent.run(user_input)
                    print(response)
               
                except Exception as e:
                    print(f"nError: {e}")
    
        finally:
            if client and client.sessions:
                await client.close_all_sessions()

    This section enables interactive chatting, allowing the user to input queries and receive responses from the assistant. It also supports clearing the chat history when requested. The assistant’s responses are displayed in real-time, and the code ensures that all MCP sessions are closed cleanly when the conversation ends or is interrupted.

    Running the app

    if __name__ == "__main__":
        import asyncio
        try:
            asyncio.run(run_chatbot())
        except KeyboardInterrupt:
            print("Session interrupted. Goodbye!")
       
        finally:
            sys.stderr = open(os.devnull, "w")

    This section runs the asynchronous chatbot loop, managing continuous interaction with the user. It also handles keyboard interruptions gracefully, ensuring the program exits without errors when the user terminates the session.

You can find the entire code below.
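For convenience, here is app.py assembled from the snippets above:

    from dotenv import load_dotenv
    from langchain_groq import ChatGroq
    from mcp_use import MCPAgent, MCPClient
    import os
    import sys
    import warnings

    warnings.filterwarnings("ignore", category=ResourceWarning)


    async def run_chatbot():
        """ Running a chat using MCPAgent's built-in conversation memory """
        load_dotenv()
        os.environ["GROQ_API_KEY"] = os.getenv("GROQ_API_KEY")

        configFile = "mcp.json"
        print("Starting chatbot...")

        # Creating MCP client and LLM instance
        client = MCPClient.from_config_file(configFile)
        llm = ChatGroq(model="llama-3.1-8b-instant")

        # Creating an agent with memory enabled
        agent = MCPAgent(
            llm=llm,
            client=client,
            max_steps=15,
            memory_enabled=True,
            verbose=False
        )

        print("\n-----Interactive MCP Chat----")
        print("Type 'exit' or 'quit' to end the conversation")
        print("Type 'clear' to clear conversation history")

        try:
            while True:
                user_input = input("\nYou: ")

                if user_input.lower() in ["exit", "quit"]:
                    print("Ending conversation....")
                    break

                if user_input.lower() == "clear":
                    agent.clear_conversation_history()
                    print("Conversation history cleared....")
                    continue

                print("\nAssistant: ", end="", flush=True)

                try:
                    response = await agent.run(user_input)
                    print(response)
                except Exception as e:
                    print(f"\nError: {e}")

        finally:
            # Close all MCP sessions cleanly when the conversation ends
            if client and client.sessions:
                await client.close_all_sessions()


    if __name__ == "__main__":
        import asyncio
        try:
            asyncio.run(run_chatbot())
        except KeyboardInterrupt:
            print("Session interrupted. Goodbye!")
        finally:
            sys.stderr = open(os.devnull, "w")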

    Step 4: Running the app

To run the app, use the following command:

    uv run app.py

This will start the app, and you can interact with the chatbot and use the configured MCP server for the session.

    The post Implementing an LLM Agent with Tool Access Using MCP-Use appeared first on MarkTechPost.

