
    Build an Intelligent Multi-Tool AI Agent Interface Using Streamlit for Seamless Real-Time Interaction

    June 20, 2025

    In this tutorial, we’ll build a powerful and interactive Streamlit application that brings together the capabilities of LangChain, the Google Gemini API, and a suite of advanced tools to create a smart AI assistant. Using Streamlit’s intuitive interface, we’ll create a chat-based system that can search the web, fetch Wikipedia content, perform calculations, remember key details, and handle conversation history, all in real time. Whether we’re developers, researchers, or simply curious about AI, this setup lets us interact with a multi-agent system directly from the browser with minimal code and maximum flexibility.

    !pip install -q streamlit langchain langchain-google-genai langchain-community
    !pip install -q pyngrok python-dotenv wikipedia duckduckgo-search
    !npm install -g localtunnel
    
    
    import streamlit as st
    import os
    from langchain_google_genai import ChatGoogleGenerativeAI
    from langchain.agents import create_react_agent, AgentExecutor
    from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
    from langchain.memory import ConversationBufferWindowMemory
    from langchain.prompts import PromptTemplate
    from langchain.callbacks.streamlit import StreamlitCallbackHandler
    from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
    import asyncio
    import threading
    import time
    from datetime import datetime
    import json

    We begin by installing the Python and Node.js packages required for our AI assistant app: Streamlit for the front end, LangChain for the agent logic, and tools such as Wikipedia, DuckDuckGo, and ngrok/localtunnel for external search and hosting. Once everything is installed, we import all the modules needed to start building our interactive multi-tool AI agent.

    GOOGLE_API_KEY = "Use Your API Key Here" 
    NGROK_AUTH_TOKEN = "Use Your Auth Token Here" 
    os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY

    Next, we configure our environment by setting the Google Gemini API key and the ngrok authentication token. We assign these credentials to placeholder variables (replace them with your actual keys) and export GOOGLE_API_KEY as an environment variable so the LangChain agent can securely access the Gemini model during execution.
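
    If we prefer not to hardcode credentials in the notebook, a small optional variation (a sketch, not part of the original code; the environment-variable names are assumptions) is to pull them from environment variables or an interactive prompt:

    import os
    from getpass import getpass

    # Prefer values already exported in the environment; otherwise prompt interactively.
    GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY") or getpass("Enter your Gemini API key: ")
    NGROK_AUTH_TOKEN = os.environ.get("NGROK_AUTH_TOKEN") or getpass("Enter your ngrok authtoken: ")

    os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY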

    class InnovativeAgentTools:
       """Advanced tool collection for the multi-agent system"""
      
       @staticmethod
       def get_calculator_tool():
           def calculate(expression: str) -> str:
               """Calculate mathematical expressions safely"""
               try:
                   allowed_chars = set('0123456789+-*/.() ')
                   if all(c in allowed_chars for c in expression):
                       result = eval(expression)
                       return f"Result: {result}"
                   else:
                       return "Error: Invalid mathematical expression"
               except Exception as e:
                   return f"Calculation error: {str(e)}"
          
           return Tool(
               name="Calculator",
               func=calculate,
               description="Calculate mathematical expressions. Input should be a valid math expression."
           )
      
       @staticmethod
       def get_memory_tool(memory_store):
           def save_memory(key_value: str) -> str:
               """Save information to memory"""
               try:
                   key, value = key_value.split(":", 1)
                   memory_store[key.strip()] = value.strip()
                   return f"Saved '{key.strip()}' to memory"
               except:
                   return "Error: Use format 'key: value'"
          
           def recall_memory(key: str) -> str:
               """Recall information from memory"""
               return memory_store.get(key.strip(), f"No memory found for '{key}'")
          
           return [
               Tool(name="SaveMemory", func=save_memory,
                    description="Save information to memory. Format: 'key: value'"),
               Tool(name="RecallMemory", func=recall_memory,
                    description="Recall saved information. Input: key to recall")
           ]
      
       @staticmethod
       def get_datetime_tool():
           def get_current_datetime(format_type: str = "full") -> str:
               """Get current date and time"""
               now = datetime.now()
               if format_type == "date":
                   return now.strftime("%Y-%m-%d")
               elif format_type == "time":
                   return now.strftime("%H:%M:%S")
               else:
                   return now.strftime("%Y-%m-%d %H:%M:%S")
          
           return Tool(
               name="DateTime",
               func=get_current_datetime,
               description="Get current date/time. Options: 'date', 'time', or 'full'"
           )

    Here, we define the InnovativeAgentTools class to equip our AI agent with specialized capabilities. We implement tools such as a Calculator for safe expression evaluation, Memory Tools to save and recall information across turns, and a date and time tool to fetch the current date and time. These tools enable our Streamlit AI agent to reason, remember, and respond contextually, much like a true assistant. Check out the full Notebook here
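
    Before wiring these tools into the agent, we can sanity-check them individually. The sketch below (not part of the original notebook) calls each Tool directly with .run(); the outputs shown in the comments are illustrative:

    # Exercise each tool on its own before handing it to the agent.
    calculator = InnovativeAgentTools.get_calculator_tool()
    print(calculator.run("15 * 8 + 32"))              # -> "Result: 152"

    memory_store = {}
    save_tool, recall_tool = InnovativeAgentTools.get_memory_tool(memory_store)
    print(save_tool.run("favorite color: blue"))      # -> "Saved 'favorite color' to memory"
    print(recall_tool.run("favorite color"))          # -> "blue"

    print(InnovativeAgentTools.get_datetime_tool().run("date"))  # e.g. "2025-06-20"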

    class MultiAgentSystem:
       """Innovative multi-agent system with specialized capabilities"""
      
       def __init__(self, api_key: str):
           self.llm = ChatGoogleGenerativeAI(
               model="gemini-pro",
               google_api_key=api_key,
               temperature=0.7,
               convert_system_message_to_human=True
           )
           self.memory_store = {}
           self.conversation_memory = ConversationBufferWindowMemory(
               memory_key="chat_history",
               k=10,
               return_messages=True
           )
           self.tools = self._initialize_tools()
           self.agent = self._create_agent()
      
       def _initialize_tools(self):
           """Initialize all available tools"""
           tools = []
          
           tools.extend([
               DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
               WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
           ])
          
           tools.append(InnovativeAgentTools.get_calculator_tool())
           tools.append(InnovativeAgentTools.get_datetime_tool())
           tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
          
           return tools
      
       def _create_agent(self):
           """Create the ReAct agent with advanced prompt"""
           prompt = PromptTemplate.from_template("""
    <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f916.png" alt="🤖" class="wp-smiley" /> You are an advanced AI assistant with access to multiple tools and persistent memory.
    
    
    AVAILABLE TOOLS:
    {tools}
    
    
    TOOL USAGE FORMAT:
    - Think step by step about what you need to do
     - Use Action: the tool to use, exactly one of [{tool_names}]
    - Use Action Input: your input
    - Wait for Observation
    - Continue until you have a final answer
    
    
    MEMORY CAPABILITIES:
    - You can save important information using SaveMemory
    - You can recall previous information using RecallMemory
    - Always try to remember user preferences and context
    
    
    CONVERSATION HISTORY:
    {chat_history}
    
    
    CURRENT QUESTION: {input}
    
    
    REASONING PROCESS:
    {agent_scratchpad}
    
    
    Begin your response with your thought process, then take action if needed.
    """)
          
           agent = create_react_agent(self.llm, self.tools, prompt)
           return AgentExecutor(
               agent=agent,
               tools=self.tools,
               memory=self.conversation_memory,
               verbose=True,
               handle_parsing_errors=True,
               max_iterations=5
           )
      
       def chat(self, message: str, callback_handler=None):
           """Process user message and return response"""
           try:
               if callback_handler:
                   response = self.agent.invoke(
                       {"input": message},
                       {"callbacks": [callback_handler]}
                   )
               else:
                   response = self.agent.invoke({"input": message})
               return response["output"]
           except Exception as e:
               return f"Error processing request: {str(e)}"

    In this section, we build the core of our application, the MultiAgentSystem class. Here, we integrate the Gemini Pro model using LangChain and initialize all essential tools, including web search, memory, and calculator functions. We configure a ReAct-style agent using a custom prompt that guides tool usage and memory handling. Finally, we define a chat method that allows the agent to process user input, invoke tools when necessary, and generate intelligent, context-aware responses. Check out the full Notebook here
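
    Because this version of the class has no Streamlit dependency of its own, we can also exercise it directly in a notebook cell. Here is a minimal sketch (assuming GOOGLE_API_KEY holds a valid Gemini key) of what that looks like:

    # A minimal sketch of driving the agent outside the Streamlit UI.
    agent_system = MultiAgentSystem(api_key=GOOGLE_API_KEY)

    print(agent_system.chat("Remember that my favorite color is blue"))
    print(agent_system.chat("Calculate 15 * 8 + 32"))

    # Inspect what the SaveMemory tool has stored so far.
    print(agent_system.memory_store)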

    def create_streamlit_app():
       """Create the innovative Streamlit application"""
      
       st.set_page_config(
           page_title="<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f680.png" alt="🚀" class="wp-smiley" /> Advanced LangChain Agent with Gemini",
           page_icon="<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f916.png" alt="🤖" class="wp-smiley" />",
           layout="wide",
           initial_sidebar_state="expanded"
       )
      
       st.markdown("""
       <style>
       .main-header {
           background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
           padding: 1rem;
           border-radius: 10px;
           color: white;
           text-align: center;
           margin-bottom: 2rem;
       }
       .agent-response {
           background-color: #f0f2f6;
           padding: 1rem;
           border-radius: 10px;
           border-left: 4px solid #667eea;
           margin: 1rem 0;
       }
       .memory-card {
           background-color: #e8f4fd;
           padding: 1rem;
           border-radius: 8px;
           margin: 0.5rem 0;
       }
       </style>
       """, unsafe_allow_html=True)
      
       st.markdown("""
       <div class="main-header">
           <h1>🚀 Advanced Multi-Agent System</h1>
           <p>Powered by LangChain + Gemini API + Streamlit</p>
       </div>
       """, unsafe_allow_html=True)
      
       with st.sidebar:
           st.header("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f527.png" alt="🔧" class="wp-smiley" /> Configuration")
          
           api_key = st.text_input(
               "<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f511.png" alt="🔑" class="wp-smiley" /> Google AI API Key",
               type="password",
               value=GOOGLE_API_KEY if GOOGLE_API_KEY != "your-gemini-api-key-here" else "",
               help="Get your API key from https://ai.google.dev/"
           )
          
           if not api_key:
               st.error("Please enter your Google AI API key to continue")
               st.stop()
          
           st.success("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/2705.png" alt="✅" class="wp-smiley" /> API Key configured")
          
           st.header("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f916.png" alt="🤖" class="wp-smiley" /> Agent Capabilities")
           st.markdown("""
           - <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f50d.png" alt="🔍" class="wp-smiley" /> **Web Search** (DuckDuckGo)
           - <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4da.png" alt="📚" class="wp-smiley" /> **Wikipedia Lookup**
           - <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9ee.png" alt="🧮" class="wp-smiley" /> **Mathematical Calculator**
           - <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9e0.png" alt="🧠" class="wp-smiley" /> **Persistent Memory**
           - <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4c5.png" alt="📅" class="wp-smiley" /> **Date & Time**
           - <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4ac.png" alt="💬" class="wp-smiley" /> **Conversation History**
           """)
          
           if 'agent_system' in st.session_state:
               st.header("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9e0.png" alt="🧠" class="wp-smiley" /> Memory Store")
               memory = st.session_state.agent_system.memory_store
               if memory:
                   for key, value in memory.items():
                       st.markdown(f"""
                       <div class="memory-card">
                           <strong>{key}:</strong> {value}
                       </div>
                       """, unsafe_allow_html=True)
               else:
                   st.info("No memories stored yet")
      
       if 'agent_system' not in st.session_state:
           with st.spinner("🔄 Initializing Advanced Agent System..."):
               st.session_state.agent_system = MultiAgentSystem(api_key)
           st.success("✅ Agent System Ready!")

       st.header("💬 Interactive Chat")
      
       if 'messages' not in st.session_state:
           st.session_state.messages = [{
               "role": "assistant",
               "content": """<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f916.png" alt="🤖" class="wp-smiley" /> Hello! I'm your advanced AI assistant powered by Gemini. I can:
    
    
    • Search the web and Wikipedia for information
    • Perform mathematical calculations
    • Remember important information across our conversation
    • Provide current date and time
    • Maintain conversation context
    
    
    Try asking me something like:
    - "Calculate 15 * 8 + 32"
    - "Search for recent news about AI"
    - "Remember that my favorite color is blue"
    - "What's the current time?"
    """
           }]
      
       for message in st.session_state.messages:
           with st.chat_message(message["role"]):
               st.markdown(message["content"])
      
       if prompt := st.chat_input("Ask me anything..."):
           st.session_state.messages.append({"role": "user", "content": prompt})
           with st.chat_message("user"):
               st.markdown(prompt)
          
           with st.chat_message("assistant"):
               callback_handler = StreamlitCallbackHandler(st.container())
              
               with st.spinner("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f914.png" alt="🤔" class="wp-smiley" /> Thinking..."):
                   response = st.session_state.agent_system.chat(prompt, callback_handler)
              
               st.markdown(f"""
               <div class="agent-response">
                   {response}
               </div>
               """, unsafe_allow_html=True)
              
               st.session_state.messages.append({"role": "assistant", "content": response})
      
       st.header("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4a1.png" alt="💡" class="wp-smiley" /> Example Queries")
       col1, col2, col3 = st.columns(3)
      
       with col1:
           if st.button("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f50d.png" alt="🔍" class="wp-smiley" /> Search Example"):
               example = "Search for the latest developments in quantum computing"
               st.session_state.example_query = example
      
       with col2:
           if st.button("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9ee.png" alt="🧮" class="wp-smiley" /> Math Example"):
               example = "Calculate the compound interest on $1000 at 5% for 3 years"
               st.session_state.example_query = example
      
       with col3:
           if st.button("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9e0.png" alt="🧠" class="wp-smiley" /> Memory Example"):
               example = "Remember that I work as a data scientist at TechCorp"
               st.session_state.example_query = example
      
       if 'example_query' in st.session_state:
           st.info(f"Example query: {st.session_state.example_query}")

    In this section, we bring everything together by building an interactive web interface with Streamlit. We configure the app layout, define custom CSS styles, and set up a sidebar for entering the API key and displaying the agent’s capabilities and memory store. We initialize the multi-agent system, maintain a message history, and enable a chat interface that lets users interact in real time. To make it even easier to explore, we also provide example buttons for search, math, and memory-related queries, all in a beautifully styled, responsive UI. Check out the full Notebook here
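
    The example buttons above only stash the chosen query in st.session_state; if we want a click to actually run through the agent, one possible extension (a sketch, not part of the original app) is to consume the stored query and feed it into the same chat flow:

    # Optional extension: send a clicked example query straight to the agent.
    if 'example_query' in st.session_state:
        example = st.session_state.pop('example_query')   # consume it once
        st.session_state.messages.append({"role": "user", "content": example})
        with st.chat_message("user"):
            st.markdown(example)
        with st.chat_message("assistant"):
            with st.spinner("🤔 Thinking..."):
                answer = st.session_state.agent_system.chat(example)
            st.markdown(answer)
            st.session_state.messages.append({"role": "assistant", "content": answer})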

    def setup_ngrok_auth(auth_token):
       """Setup ngrok authentication"""
       try:
           from pyngrok import ngrok, conf
          
           conf.get_default().auth_token = auth_token
          
           try:
               tunnels = ngrok.get_tunnels()
               print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/2705.png" alt="✅" class="wp-smiley" /> Ngrok authentication successful!")
               return True
           except Exception as e:
               print(f"<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/274c.png" alt="❌" class="wp-smiley" /> Ngrok authentication failed: {e}")
               return False
              
       except ImportError:
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/274c.png" alt="❌" class="wp-smiley" /> pyngrok not installed. Installing...")
           import subprocess
           subprocess.run(['pip', 'install', 'pyngrok'], check=True)
           return setup_ngrok_auth(auth_token)
    
    
    def get_ngrok_token_instructions():
       """Provide instructions for getting ngrok token"""
       return """
    <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f527.png" alt="🔧" class="wp-smiley" /> NGROK AUTHENTICATION SETUP:
    
    
    1. Sign up for an ngrok account:
      - Visit: https://dashboard.ngrok.com/signup
      - Create a free account
    
    
    2. Get your authentication token:
      - Go to: https://dashboard.ngrok.com/get-started/your-authtoken
      - Copy your authtoken
    
    
    3. Replace 'your-ngrok-auth-token-here' in the code with your actual token
    
    
    4. Alternative methods if ngrok fails:
      - Use Google Colab's built-in public URL feature
      - Use localtunnel: !npx localtunnel --port 8501
      - Use serveo.net: !ssh -R 80:localhost:8501 serveo.net
    """
    

    Here, we set up a helper function to authenticate ngrok, which allows us to expose our local Streamlit app to the internet. We use the pyngrok library to configure the authentication token and verify the connection. If the token is missing or invalid, we provide detailed instructions on how to obtain one and suggest alternative tunneling methods, such as LocalTunnel or Serveo, making it easy for us to host and share our app from environments like Google Colab.
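
    Stripped of error handling, what this helper ultimately does with pyngrok boils down to a few calls. The sketch below assumes a valid authtoken and a Streamlit server already listening on port 8501:

    from pyngrok import ngrok, conf

    conf.get_default().auth_token = NGROK_AUTH_TOKEN   # register the authtoken
    tunnel = ngrok.connect(8501)                       # open an HTTP tunnel to the Streamlit port
    print("Public URL:", tunnel.public_url)

    # ...later, when we are done sharing the app:
    ngrok.disconnect(tunnel.public_url)
    ngrok.kill()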

    def main():
       """Main function to run the application"""
       try:
           create_streamlit_app()
       except Exception as e:
           st.error(f"Application error: {str(e)}")
           st.info("Please check your API key and try refreshing the page")

    This main() function acts as the entry point for our Streamlit application. We simply call create_streamlit_app() to launch the full interface. If anything goes wrong, such as a missing API key or a failed tool initialization, we catch the error gracefully and display a helpful message, ensuring the user knows how to recover and continue using the app smoothly.

    def run_in_colab():
       """Run the application in Google Colab with proper ngrok setup"""
      
       print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f680.png" alt="🚀" class="wp-smiley" /> Starting Advanced LangChain Agent Setup...")
      
       if NGROK_AUTH_TOKEN == "your-ngrok-auth-token-here":
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/26a0.png" alt="⚠" class="wp-smiley" />  NGROK_AUTH_TOKEN not configured!")
           print(get_ngrok_token_instructions())
          
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f504.png" alt="🔄" class="wp-smiley" /> Attempting alternative tunnel methods...")
           try_alternative_tunnels()
           return
      
       print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4e6.png" alt="📦" class="wp-smiley" /> Installing required packages...")
       import subprocess
      
       packages = [
           'streamlit',
           'langchain',
           'langchain-google-genai',
           'langchain-community',
           'wikipedia',
           'duckduckgo-search',
           'pyngrok'
       ]
      
       for package in packages:
           try:
               subprocess.run(['pip', 'install', package], check=True, capture_output=True)
               print(f"<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/2705.png" alt="✅" class="wp-smiley" /> {package} installed")
           except subprocess.CalledProcessError:
               print(f"<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/26a0.png" alt="⚠" class="wp-smiley" />  Failed to install {package}")
      
       app_content = '''
    import streamlit as st
    import os
    from langchain_google_genai import ChatGoogleGenerativeAI
    from langchain.agents import create_react_agent, AgentExecutor
    from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
    from langchain.memory import ConversationBufferWindowMemory
    from langchain.prompts import PromptTemplate
    from langchain.callbacks.streamlit import StreamlitCallbackHandler
    from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
    from datetime import datetime
    
    
    # Configuration - Replace with your actual keys
    GOOGLE_API_KEY = "''' + GOOGLE_API_KEY + '''"
    os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
    
    
    class InnovativeAgentTools:
       @staticmethod
       def get_calculator_tool():
           def calculate(expression: str) -> str:
               try:
                   allowed_chars = set('0123456789+-*/.() ')
                   if all(c in allowed_chars for c in expression):
                       result = eval(expression)
                       return f"Result: {result}"
                   else:
                       return "Error: Invalid mathematical expression"
               except Exception as e:
                   return f"Calculation error: {str(e)}"
          
           return Tool(name="Calculator", func=calculate,
                      description="Calculate mathematical expressions. Input should be a valid math expression.")
      
       @staticmethod
       def get_memory_tool(memory_store):
           def save_memory(key_value: str) -> str:
               try:
                   key, value = key_value.split(":", 1)
                   memory_store[key.strip()] = value.strip()
                   return f"Saved '{key.strip()}' to memory"
               except:
                   return "Error: Use format 'key: value'"
          
           def recall_memory(key: str) -> str:
               return memory_store.get(key.strip(), f"No memory found for '{key}'")
          
           return [
               Tool(name="SaveMemory", func=save_memory, description="Save information to memory. Format: 'key: value'"),
               Tool(name="RecallMemory", func=recall_memory, description="Recall saved information. Input: key to recall")
           ]
      
       @staticmethod
       def get_datetime_tool():
           def get_current_datetime(format_type: str = "full") -> str:
               now = datetime.now()
               if format_type == "date":
                   return now.strftime("%Y-%m-%d")
               elif format_type == "time":
                   return now.strftime("%H:%M:%S")
               else:
                   return now.strftime("%Y-%m-%d %H:%M:%S")
          
           return Tool(name="DateTime", func=get_current_datetime,
                      description="Get current date/time. Options: 'date', 'time', or 'full'")
    
    
    class MultiAgentSystem:
       def __init__(self, api_key: str):
           self.llm = ChatGoogleGenerativeAI(
               model="gemini-pro",
               google_api_key=api_key,
               temperature=0.7,
               convert_system_message_to_human=True
           )
           self.memory_store = {}
           self.conversation_memory = ConversationBufferWindowMemory(
               memory_key="chat_history", k=10, return_messages=True
           )
           self.tools = self._initialize_tools()
           self.agent = self._create_agent()
      
       def _initialize_tools(self):
           tools = []
           try:
               tools.extend([
                   DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
                   WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
               ])
           except Exception as e:
               st.warning(f"Search tools may have limited functionality: {e}")
          
           tools.append(InnovativeAgentTools.get_calculator_tool())
           tools.append(InnovativeAgentTools.get_datetime_tool())
           tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
           return tools
      
       def _create_agent(self):
           prompt = PromptTemplate.from_template("""
    <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f916.png" alt="🤖" class="wp-smiley" /> You are an advanced AI assistant with access to multiple tools and persistent memory.
    
    
    AVAILABLE TOOLS:
    {tools}
    
    
    TOOL USAGE FORMAT:
    - Think step by step about what you need to do
     - Use Action: the tool to use, exactly one of [{tool_names}]
    - Use Action Input: your input
    - Wait for Observation
    - Continue until you have a final answer
    
    
    CONVERSATION HISTORY:
    {chat_history}
    
    
    CURRENT QUESTION: {input}
    
    
    REASONING PROCESS:
    {agent_scratchpad}
    
    
    Begin your response with your thought process, then take action if needed.
    """)
          
           agent = create_react_agent(self.llm, self.tools, prompt)
           return AgentExecutor(agent=agent, tools=self.tools, memory=self.conversation_memory,
                              verbose=True, handle_parsing_errors=True, max_iterations=5)
      
       def chat(self, message: str, callback_handler=None):
           try:
               if callback_handler:
                   response = self.agent.invoke({"input": message}, {"callbacks": [callback_handler]})
               else:
                   response = self.agent.invoke({"input": message})
               return response["output"]
           except Exception as e:
               return f"Error processing request: {str(e)}"
    
    
    # Streamlit App
    st.set_page_config(page_title="<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f680.png" alt="🚀" class="wp-smiley" /> Advanced LangChain Agent", page_icon="<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f916.png" alt="🤖" class="wp-smiley" />", layout="wide")
    
    
    st.markdown("""
    <style>
    .main-header {
       background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
       padding: 1rem; border-radius: 10px; color: white; text-align: center; margin-bottom: 2rem;
    }
    .agent-response {
       background-color: #f0f2f6; padding: 1rem; border-radius: 10px;
       border-left: 4px solid #667eea; margin: 1rem 0;
    }
    .memory-card {
       background-color: #e8f4fd; padding: 1rem; border-radius: 8px; margin: 0.5rem 0;
    }
    </style>
    """, unsafe_allow_html=True)
    
    
    st.markdown('<div class="main-header"><h1><img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f680.png" alt="🚀" class="wp-smiley" /> Advanced Multi-Agent System</h1><p>Powered by LangChain + Gemini API</p></div>', unsafe_allow_html=True)
    
    
    with st.sidebar:
       st.header("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f527.png" alt="🔧" class="wp-smiley" /> Configuration")
       api_key = st.text_input("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f511.png" alt="🔑" class="wp-smiley" /> Google AI API Key", type="password", value=GOOGLE_API_KEY)
      
       if not api_key:
           st.error("Please enter your Google AI API key")
           st.stop()
      
       st.success("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/2705.png" alt="✅" class="wp-smiley" /> API Key configured")
      
       st.header("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f916.png" alt="🤖" class="wp-smiley" /> Agent Capabilities")
       st.markdown("- <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f50d.png" alt="🔍" class="wp-smiley" /> Web Search\n- <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4da.png" alt="📚" class="wp-smiley" /> Wikipedia\n- <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9ee.png" alt="🧮" class="wp-smiley" /> Calculator\n- <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9e0.png" alt="🧠" class="wp-smiley" /> Memory\n- <img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4c5.png" alt="📅" class="wp-smiley" /> Date/Time")
      
       if 'agent_system' in st.session_state and st.session_state.agent_system.memory_store:
           st.header("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9e0.png" alt="🧠" class="wp-smiley" /> Memory Store")
           for key, value in st.session_state.agent_system.memory_store.items():
               st.markdown(f'<div class="memory-card"><strong>{key}:</strong> {value}</div>', unsafe_allow_html=True)
    
    
    if 'agent_system' not in st.session_state:
       with st.spinner("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f504.png" alt="🔄" class="wp-smiley" /> Initializing Agent..."):
           st.session_state.agent_system = MultiAgentSystem(api_key)
       st.success("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/2705.png" alt="✅" class="wp-smiley" /> Agent Ready!")
    
    
    if 'messages' not in st.session_state:
       st.session_state.messages = [{
           "role": "assistant",
           "content": "<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f916.png" alt="🤖" class="wp-smiley" /> Hello! I'm your advanced AI assistant. I can search, calculate, remember information, and more! Try asking me to: calculate something, search for information, or remember a fact about you."
       }]
    
    
    for message in st.session_state.messages:
       with st.chat_message(message["role"]):
           st.markdown(message["content"])
    
    
    if prompt := st.chat_input("Ask me anything..."):
       st.session_state.messages.append({"role": "user", "content": prompt})
       with st.chat_message("user"):
           st.markdown(prompt)
      
       with st.chat_message("assistant"):
           callback_handler = StreamlitCallbackHandler(st.container())
           with st.spinner("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f914.png" alt="🤔" class="wp-smiley" /> Thinking..."):
               response = st.session_state.agent_system.chat(prompt, callback_handler)
           st.markdown(f'<div class="agent-response">{response}</div>', unsafe_allow_html=True)
           st.session_state.messages.append({"role": "assistant", "content": response})
    
    
    # Example buttons
    st.header("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4a1.png" alt="💡" class="wp-smiley" /> Try These Examples")
    col1, col2, col3 = st.columns(3)
    with col1:
       if st.button("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9ee.png" alt="🧮" class="wp-smiley" /> Calculate 15 * 8 + 32"):
           st.rerun()
    with col2:
       if st.button("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f50d.png" alt="🔍" class="wp-smiley" /> Search AI news"):
           st.rerun()
    with col3:
       if st.button("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f9e0.png" alt="🧠" class="wp-smiley" /> Remember my name is Alex"):
           st.rerun()
    '''
      
       with open('streamlit_app.py', 'w') as f:
           f.write(app_content)
      
       print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/2705.png" alt="✅" class="wp-smiley" /> Streamlit app file created successfully!")
      
       if setup_ngrok_auth(NGROK_AUTH_TOKEN):
           start_streamlit_with_ngrok()
       else:
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/274c.png" alt="❌" class="wp-smiley" /> Ngrok authentication failed. Trying alternative methods...")
           try_alternative_tunnels()

    In the run_in_colab() function, we make it easy to deploy the Streamlit app directly from a Google Colab environment. We begin by installing all required packages, then dynamically generate and write the complete Streamlit app code to a streamlit_app.py file. We verify the presence of a valid ngrok token to enable public access to the app from Colab; if it’s missing or invalid, the function prints instructions for fallback tunneling options. This setup allows us to interact with our AI agent from anywhere, all within a few Colab cells. Check out the full Notebook here
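
    If ngrok is unavailable, the localtunnel fallback mentioned in the instructions can also be driven from Python rather than a separate notebook cell. This is a hedged sketch that assumes the npx/localtunnel CLI from the earlier `npm install -g localtunnel` step is on the PATH:

    import subprocess
    import time

    # Start Streamlit headlessly in the background, as run_in_colab() does.
    subprocess.Popen(["streamlit", "run", "streamlit_app.py",
                      "--server.port=8501", "--server.headless=true"])
    time.sleep(5)  # give the server a moment to bind the port

    # Open the tunnel; localtunnel prints a "your url is: https://..." line to follow.
    subprocess.Popen(["npx", "localtunnel", "--port", "8501"])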

    def start_streamlit_with_ngrok():
       """Start Streamlit with ngrok tunnel"""
       import subprocess
       import threading
       from pyngrok import ngrok
      
       def start_streamlit():
           subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])
      
       print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f680.png" alt="🚀" class="wp-smiley" /> Starting Streamlit server...")
       thread = threading.Thread(target=start_streamlit)
       thread.daemon = True
       thread.start()
      
       time.sleep(5)
      
       try:
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f310.png" alt="🌐" class="wp-smiley" /> Creating ngrok tunnel...")
           public_url = ngrok.connect(8501)
           print(f"<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f517.png" alt="🔗" class="wp-smiley" /> SUCCESS! Access your app at: {public_url}")
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/2728.png" alt="✨" class="wp-smiley" /> Your Advanced LangChain Agent is now running publicly!")
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4f1.png" alt="📱" class="wp-smiley" /> You can share this URL with others!")
          
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/23f3.png" alt="⏳" class="wp-smiley" /> Keeping tunnel alive... Press Ctrl+C to stop")
           try:
               ngrok_process = ngrok.get_ngrok_process()
               ngrok_process.proc.wait()
           except KeyboardInterrupt:
               print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f44b.png" alt="👋" class="wp-smiley" /> Shutting down...")
               ngrok.kill()
              
       except Exception as e:
           print(f"<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/274c.png" alt="❌" class="wp-smiley" /> Ngrok tunnel failed: {e}")
           try_alternative_tunnels()
    
    
    def try_alternative_tunnels():
       """Try alternative tunneling methods"""
       print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f504.png" alt="🔄" class="wp-smiley" /> Trying alternative tunnel methods...")
      
       import subprocess
       import threading
      
       def start_streamlit():
           subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])
      
       thread = threading.Thread(target=start_streamlit)
       thread.daemon = True
       thread.start()
      
       time.sleep(3)
      
       print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f310.png" alt="🌐" class="wp-smiley" /> Streamlit is running on http://localhost:8501")
       print("n<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f4cb.png" alt="📋" class="wp-smiley" /> ALTERNATIVE TUNNEL OPTIONS:")
       print("1. localtunnel: Run this in a new cell:")
       print("   !npx localtunnel --port 8501")
       print("n2. serveo.net: Run this in a new cell:")
       print("   !ssh -R 80:localhost:8501 serveo.net")
       print("n3. Colab public URL (if available):")
       print("   Use the 'Public URL' button in Colab's interface")
      
       try:
           while True:
               time.sleep(60)
       except KeyboardInterrupt:
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f44b.png" alt="👋" class="wp-smiley" /> Shutting down...")
    
    
    if __name__ == "__main__":
       try:
           get_ipython()
           print("<img src="https://s.w.org/images/core/emoji/15.1.0/72x72/1f680.png" alt="🚀" class="wp-smiley" /> Google Colab detected - starting setup...")
           run_in_colab()
       except NameError:
           main()

    In this final part, we set up the execution logic to run the app either in a local environment or inside Google Colab. The start_streamlit_with_ngrok() function launches the Streamlit server in the background and uses ngrok to expose it publicly, making it easy to access and share. If ngrok fails, the try_alternative_tunnels() function steps in with alternative tunneling options, such as LocalTunnel and Serveo. With the __main__ block, we automatically detect whether we’re running in Colab and launch the appropriate setup, making the entire deployment process smooth, flexible, and shareable from anywhere.
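
    Note that get_ipython() also succeeds in plain Jupyter, so if we want the Colab branch to trigger only in Colab, one stricter alternative (an assumption-level refinement, not part of the original notebook) is to look for the google.colab module explicitly:

    import importlib.util

    def in_colab() -> bool:
        """Return True only when the google.colab runtime is importable."""
        return importlib.util.find_spec("google.colab") is not None

    if __name__ == "__main__":
        run_in_colab() if in_colab() else main()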

    In conclusion, we’ll have a fully functional AI agent running inside a sleek Streamlit interface, capable of answering queries, remembering user inputs, and even sharing its services publicly using ngrok. We’ve seen how easily Streamlit enables us to integrate advanced AI functionalities into an engaging and user-friendly app. From here, we can expand the agent’s tools, plug it into larger workflows, or deploy it as part of our intelligent applications. With Streamlit as the front-end and LangChain agents powering the logic, we’ve built a solid foundation for next-gen interactive AI experiences.


    Check out the full Notebook here. All credit for this research goes to the researchers of this project.
