
    Build an Intelligent Multi-Tool AI Agent Interface Using Streamlit for Seamless Real-Time Interaction

    June 20, 2025

    In this tutorial, we’ll build a powerful and interactive Streamlit application that brings together the capabilities of LangChain, the Google Gemini API, and a suite of advanced tools to create a smart AI assistant. Using Streamlit’s intuitive interface, we’ll create a chat-based system that can search the web, fetch Wikipedia content, perform calculations, remember key details, and handle conversation history, all in real time. Whether we’re developers, researchers, or just exploring AI, this setup allows us to interact with a multi-agent system directly from the browser with minimal code and maximum flexibility.

    !pip install -q streamlit langchain langchain-google-genai langchain-community
    !pip install -q pyngrok python-dotenv wikipedia duckduckgo-search
    !npm install -g localtunnel
    
    
    import streamlit as st
    import os
    from langchain_google_genai import ChatGoogleGenerativeAI
    from langchain.agents import create_react_agent, AgentExecutor
    from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
    from langchain.memory import ConversationBufferWindowMemory
    from langchain.prompts import PromptTemplate
    from langchain.callbacks.streamlit import StreamlitCallbackHandler
    from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
    import asyncio
    import threading
    import time
    from datetime import datetime
    import json

    We begin by installing all the necessary Python and Node.js packages required for our AI assistant app. This includes Streamlit for the frontend, LangChain for agent logic, and tools like Wikipedia, DuckDuckGo, and ngrok/localtunnel for external search and hosting. Once set up, we import all modules to start building our interactive multi-tool AI agent.

    GOOGLE_API_KEY = "your-gemini-api-key-here"  # replace with your Gemini API key
    NGROK_AUTH_TOKEN = "your-ngrok-auth-token-here"  # replace with your ngrok authtoken
    os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY

    Next, we configure our environment by setting the Google Gemini API key and the ngrok authentication token. We assign these credentials to variables and set the GOOGLE_API_KEY so the LangChain agent can securely access the Gemini model during execution.
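Pasting secrets straight into a notebook cell is easy to leak when the notebook is shared. A hedged variant of the same configuration step, which prefers a key already set in the environment and only falls back to the placeholder string that the later checks in this code compare against:

```python
import os

# Prefer a key already present in the environment; fall back to the
# placeholder that the sidebar and Colab checks look for.
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY", "your-gemini-api-key-here")
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY  # LangChain reads this variable
print(GOOGLE_API_KEY != "")
```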

    class InnovativeAgentTools:
       """Advanced tool collection for the multi-agent system"""
      
       @staticmethod
       def get_calculator_tool():
           def calculate(expression: str) -> str:
               """Calculate mathematical expressions safely"""
               try:
                   allowed_chars = set('0123456789+-*/.() ')
                   if all(c in allowed_chars for c in expression):
                       result = eval(expression)
                       return f"Result: {result}"
                   else:
                       return "Error: Invalid mathematical expression"
               except Exception as e:
                   return f"Calculation error: {str(e)}"
          
           return Tool(
               name="Calculator",
               func=calculate,
               description="Calculate mathematical expressions. Input should be a valid math expression."
           )
      
       @staticmethod
       def get_memory_tool(memory_store):
           def save_memory(key_value: str) -> str:
               """Save information to memory"""
               try:
                   key, value = key_value.split(":", 1)
                   memory_store[key.strip()] = value.strip()
                   return f"Saved '{key.strip()}' to memory"
               except ValueError:
                   return "Error: Use format 'key: value'"
          
           def recall_memory(key: str) -> str:
               """Recall information from memory"""
               return memory_store.get(key.strip(), f"No memory found for '{key}'")
          
           return [
               Tool(name="SaveMemory", func=save_memory,
                    description="Save information to memory. Format: 'key: value'"),
               Tool(name="RecallMemory", func=recall_memory,
                    description="Recall saved information. Input: key to recall")
           ]
      
       @staticmethod
       def get_datetime_tool():
           def get_current_datetime(format_type: str = "full") -> str:
               """Get current date and time"""
               now = datetime.now()
               if format_type == "date":
                   return now.strftime("%Y-%m-%d")
               elif format_type == "time":
                   return now.strftime("%H:%M:%S")
               else:
                   return now.strftime("%Y-%m-%d %H:%M:%S")
          
           return Tool(
               name="DateTime",
               func=get_current_datetime,
               description="Get current date/time. Options: 'date', 'time', or 'full'"
           )

    Here, we define the InnovativeAgentTools class to equip our AI agent with specialized capabilities. We implement a Calculator tool for safe expression evaluation, memory tools to save and recall information across turns, and a DateTime tool to report the current date and time. These tools enable our Streamlit AI agent to reason, remember, and respond contextually, much like a true assistant.
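Stripped of the LangChain `Tool` wrappers, the tool bodies above are plain Python and can be exercised on their own. A minimal standalone sketch of the calculator's character whitelist and the memory tool's `'key: value'` format (`memory_store` here is just a local dict):

```python
def calculate(expression: str) -> str:
    # Whitelist characters before eval, as the Calculator tool does.
    # (This blocks names like __import__, but not pathological inputs such
    # as 9**999999, so it is a sketch rather than a hardened sandbox.)
    allowed_chars = set("0123456789+-*/.() ")
    if not all(c in allowed_chars for c in expression):
        return "Error: Invalid mathematical expression"
    try:
        return f"Result: {eval(expression)}"
    except Exception as e:
        return f"Calculation error: {e}"

memory_store = {}

def save_memory(key_value: str) -> str:
    # Parse "key: value", mirroring the SaveMemory tool's expected format.
    try:
        key, value = key_value.split(":", 1)
    except ValueError:
        return "Error: Use format 'key: value'"
    memory_store[key.strip()] = value.strip()
    return f"Saved '{key.strip()}' to memory"

print(calculate("15 * 8 + 32"))            # Result: 152
print(save_memory("favorite color: blue"))
print(memory_store.get("favorite color"))  # blue
```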

    class MultiAgentSystem:
       """Innovative multi-agent system with specialized capabilities"""
      
       def __init__(self, api_key: str):
           self.llm = ChatGoogleGenerativeAI(
               model="gemini-pro",
               google_api_key=api_key,
               temperature=0.7,
               convert_system_message_to_human=True
           )
           self.memory_store = {}
           self.conversation_memory = ConversationBufferWindowMemory(
               memory_key="chat_history",
               k=10,
               return_messages=True
           )
           self.tools = self._initialize_tools()
           self.agent = self._create_agent()
      
       def _initialize_tools(self):
           """Initialize all available tools"""
           tools = []
          
           tools.extend([
               DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
               WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
           ])
          
           tools.append(InnovativeAgentTools.get_calculator_tool())
           tools.append(InnovativeAgentTools.get_datetime_tool())
           tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
          
           return tools
      
       def _create_agent(self):
           """Create the ReAct agent with advanced prompt"""
           prompt = PromptTemplate.from_template("""
    🤖 You are an advanced AI assistant with access to multiple tools and persistent memory.
    
    
    AVAILABLE TOOLS:
    {tools}
    
    
    TOOL USAGE FORMAT:
    - Think step by step about what you need to do
    - Use Action: tool_name
    - Use Action Input: your input
    - Wait for Observation
    - Continue until you have a final answer
    
    
    MEMORY CAPABILITIES:
    - You can save important information using SaveMemory
    - You can recall previous information using RecallMemory
    - Always try to remember user preferences and context
    
    
    CONVERSATION HISTORY:
    {chat_history}
    
    
    CURRENT QUESTION: {input}
    
    
    REASONING PROCESS:
    {agent_scratchpad}
    
    
    Begin your response with your thought process, then take action if needed.
    """)
          
           agent = create_react_agent(self.llm, self.tools, prompt)
           return AgentExecutor(
               agent=agent,
               tools=self.tools,
               memory=self.conversation_memory,
               verbose=True,
               handle_parsing_errors=True,
               max_iterations=5
           )
      
       def chat(self, message: str, callback_handler=None):
           """Process user message and return response"""
           try:
               if callback_handler:
                   response = self.agent.invoke(
                       {"input": message},
                       {"callbacks": [callback_handler]}
                   )
               else:
                   response = self.agent.invoke({"input": message})
               return response["output"]
           except Exception as e:
               return f"Error processing request: {str(e)}"

    In this section, we build the core of our application, the MultiAgentSystem class. Here, we integrate the Gemini Pro model using LangChain and initialize all essential tools, including web search, memory, and calculator functions. We configure a ReAct-style agent using a custom prompt that guides tool usage and memory handling. Finally, we define a chat method that allows the agent to process user input, invoke tools when necessary, and generate intelligent, context-aware responses.
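The custom prompt is an ordinary template string: `PromptTemplate.from_template` uses the same `{placeholder}` syntax as Python's `str.format`, and on each ReAct step the executor re-renders it with the tool list, the windowed chat history, and a growing `agent_scratchpad` of Thought/Action/Observation lines. A LangChain-free sketch of that substitution step (the filled-in values are illustrative):

```python
REACT_TEMPLATE = """AVAILABLE TOOLS:
{tools}

CONVERSATION HISTORY:
{chat_history}

CURRENT QUESTION: {input}

REASONING PROCESS:
{agent_scratchpad}"""

# One rendering step: the scratchpad accumulates prior tool calls.
rendered = REACT_TEMPLATE.format(
    tools="Calculator: Calculate mathematical expressions.",
    chat_history="(empty)",
    input="What is 15 * 8 + 32?",
    agent_scratchpad="Thought: I should use the Calculator.\n"
                     "Action: Calculator\nAction Input: 15 * 8 + 32\n"
                     "Observation: Result: 152",
)
print(rendered)
```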

    def create_streamlit_app():
       """Create the innovative Streamlit application"""
      
       st.set_page_config(
           page_title="🚀 Advanced LangChain Agent with Gemini",
           page_icon="🤖",
           layout="wide",
           initial_sidebar_state="expanded"
       )
      
       st.markdown("""
       <style>
       .main-header {
           background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
           padding: 1rem;
           border-radius: 10px;
           color: white;
           text-align: center;
           margin-bottom: 2rem;
       }
       .agent-response {
           background-color: #f0f2f6;
           padding: 1rem;
           border-radius: 10px;
           border-left: 4px solid #667eea;
           margin: 1rem 0;
       }
       .memory-card {
           background-color: #e8f4fd;
           padding: 1rem;
           border-radius: 8px;
           margin: 0.5rem 0;
       }
       </style>
       """, unsafe_allow_html=True)
      
       st.markdown("""
       <div class="main-header">
           <h1>🚀 Advanced Multi-Agent System</h1>
           <p>Powered by LangChain + Gemini API + Streamlit</p>
       </div>
       """, unsafe_allow_html=True)
      
       with st.sidebar:
           st.header("🔧 Configuration")
          
           api_key = st.text_input(
               "🔑 Google AI API Key",
               type="password",
               value=GOOGLE_API_KEY if GOOGLE_API_KEY != "your-gemini-api-key-here" else "",
               help="Get your API key from https://ai.google.dev/"
           )
          
           if not api_key:
               st.error("Please enter your Google AI API key to continue")
               st.stop()
          
           st.success("✅ API Key configured")
          
           st.header("🤖 Agent Capabilities")
           st.markdown("""
           - 🔍 **Web Search** (DuckDuckGo)
           - 📚 **Wikipedia Lookup**
           - 🧮 **Mathematical Calculator**
           - 🧠 **Persistent Memory**
           - 📅 **Date & Time**
           - 💬 **Conversation History**
           """)
          
           if 'agent_system' in st.session_state:
               st.header("🧠 Memory Store")
               memory = st.session_state.agent_system.memory_store
               if memory:
                   for key, value in memory.items():
                       st.markdown(f"""
                       <div class="memory-card">
                           <strong>{key}:</strong> {value}
                       </div>
                       """, unsafe_allow_html=True)
               else:
                   st.info("No memories stored yet")
      
       if 'agent_system' not in st.session_state:
           with st.spinner("🔄 Initializing Advanced Agent System..."):
               st.session_state.agent_system = MultiAgentSystem(api_key)
           st.success("✅ Agent System Ready!")
      
       st.header("💬 Interactive Chat")
      
       if 'messages' not in st.session_state:
           st.session_state.messages = [{
               "role": "assistant",
               "content": """🤖 Hello! I'm your advanced AI assistant powered by Gemini. I can:
    
    
    • Search the web and Wikipedia for information
    • Perform mathematical calculations
    • Remember important information across our conversation
    • Provide current date and time
    • Maintain conversation context
    
    
    Try asking me something like:
    - "Calculate 15 * 8 + 32"
    - "Search for recent news about AI"
    - "Remember that my favorite color is blue"
    - "What's the current time?"
    """
           }]
      
       for message in st.session_state.messages:
           with st.chat_message(message["role"]):
               st.markdown(message["content"])
      
       if prompt := st.chat_input("Ask me anything..."):
           st.session_state.messages.append({"role": "user", "content": prompt})
           with st.chat_message("user"):
               st.markdown(prompt)
          
           with st.chat_message("assistant"):
               callback_handler = StreamlitCallbackHandler(st.container())
              
               with st.spinner("🤔 Thinking..."):
                   response = st.session_state.agent_system.chat(prompt, callback_handler)
              
               st.markdown(f"""
               <div class="agent-response">
                   {response}
               </div>
               """, unsafe_allow_html=True)
              
               st.session_state.messages.append({"role": "assistant", "content": response})
      
       st.header("💡 Example Queries")
       col1, col2, col3 = st.columns(3)
      
       with col1:
           if st.button("🔍 Search Example"):
               example = "Search for the latest developments in quantum computing"
               st.session_state.example_query = example
      
       with col2:
           if st.button("🧮 Math Example"):
               example = "Calculate the compound interest on $1000 at 5% for 3 years"
               st.session_state.example_query = example
      
       with col3:
           if st.button("🧠 Memory Example"):
               example = "Remember that I work as a data scientist at TechCorp"
               st.session_state.example_query = example
      
       if 'example_query' in st.session_state:
           st.info(f"Example query: {st.session_state.example_query}")

    In this section, we bring everything together by building an interactive web interface using Streamlit. We configure the app layout, define custom CSS styles, and set up a sidebar for inputting API keys and configuring agent capabilities. We initialize the multi-agent system, maintain a message history, and enable a chat interface that allows users to interact in real time. To make it even easier to explore, we also provide example buttons for search, math, and memory-related queries, all in a beautifully styled, responsive UI.
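The chat loop above hinges on `st.session_state`, which survives Streamlit's top-to-bottom script reruns. The pattern itself is framework-independent; here is a hedged sketch with a plain dict standing in for `st.session_state` (`handle_prompt` is an illustrative name, not a Streamlit API):

```python
session_state = {}  # stands in for st.session_state across reruns

def handle_prompt(prompt: str, reply: str) -> list:
    # Seed the greeting on the first "run", exactly once.
    if "messages" not in session_state:
        session_state["messages"] = [{"role": "assistant", "content": "Hello!"}]
    # Append the user turn, then the assistant turn, as the app does.
    session_state["messages"].append({"role": "user", "content": prompt})
    session_state["messages"].append({"role": "assistant", "content": reply})
    return session_state["messages"]

msgs = handle_prompt("Calculate 2 + 2", "Result: 4")
print([m["role"] for m in msgs])  # ['assistant', 'user', 'assistant']
```

Because the greeting is seeded only when the key is missing, later "reruns" keep extending the same history rather than resetting it.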

    def setup_ngrok_auth(auth_token):
       """Setup ngrok authentication"""
       try:
           from pyngrok import ngrok, conf
          
           conf.get_default().auth_token = auth_token
          
           try:
               tunnels = ngrok.get_tunnels()
               print("✅ Ngrok authentication successful!")
               return True
           except Exception as e:
               print(f"❌ Ngrok authentication failed: {e}")
               return False
              
       except ImportError:
           print("❌ pyngrok not installed. Installing...")
           import subprocess
           subprocess.run(['pip', 'install', 'pyngrok'], check=True)
           return setup_ngrok_auth(auth_token)
    
    
    def get_ngrok_token_instructions():
       """Provide instructions for getting ngrok token"""
       return """
    🔧 NGROK AUTHENTICATION SETUP:
    
    
    1. Sign up for an ngrok account:
      - Visit: https://dashboard.ngrok.com/signup
      - Create a free account
    
    
    2. Get your authentication token:
      - Go to: https://dashboard.ngrok.com/get-started/your-authtoken
      - Copy your authtoken
    
    
    3. Replace 'your-ngrok-auth-token-here' in the code with your actual token
    
    
    4. Alternative methods if ngrok fails:
      - Use Google Colab's built-in public URL feature
      - Use localtunnel: !npx localtunnel --port 8501
      - Use serveo.net: !ssh -R 80:localhost:8501 serveo.net
    """
    

    Here, we set up a helper function to authenticate ngrok, which allows us to expose our local Streamlit app to the internet. We use the pyngrok library to configure the authentication token and verify the connection. If the token is missing or invalid, we provide detailed instructions on how to obtain one and suggest alternative tunneling methods, such as LocalTunnel or Serveo, making it easy for us to host and share our app from environments like Google Colab.
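The fallback logic boils down to trying tunnel providers in order until one succeeds. A self-contained sketch of that pattern with stub providers (the stubs only simulate outcomes; they do not open real tunnels, and the URL is made up):

```python
def first_working_tunnel(candidates):
    # Try each (name, factory) pair in order; return the first success.
    for name, factory in candidates:
        try:
            return name, factory()
        except Exception as e:
            print(f"{name} failed: {e}")
    return None, None

def ngrok_stub():
    raise RuntimeError("authentication failed")  # simulates a bad auth token

def localtunnel_stub():
    return "https://example.loca.lt"  # hypothetical public URL

name, url = first_working_tunnel([("ngrok", ngrok_stub),
                                  ("localtunnel", localtunnel_stub)])
print(name, url)  # localtunnel https://example.loca.lt
```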

    def main():
       """Main function to run the application"""
       try:
           create_streamlit_app()
       except Exception as e:
           st.error(f"Application error: {str(e)}")
           st.info("Please check your API key and try refreshing the page")

    This main() function acts as the entry point for our Streamlit application. We simply call create_streamlit_app() to launch the full interface. If anything goes wrong, such as a missing API key or a failed tool initialization, we catch the error gracefully and display a helpful message, ensuring the user knows how to recover and continue using the app smoothly.

    def run_in_colab():
       """Run the application in Google Colab with proper ngrok setup"""
      
       print("🚀 Starting Advanced LangChain Agent Setup...")
      
       if NGROK_AUTH_TOKEN == "your-ngrok-auth-token-here":
           print("⚠  NGROK_AUTH_TOKEN not configured!")
           print(get_ngrok_token_instructions())
          
           print("🔄 Attempting alternative tunnel methods...")
           try_alternative_tunnels()
           return
      
       print("📦 Installing required packages...")
       import subprocess
      
       packages = [
           'streamlit',
           'langchain',
           'langchain-google-genai',
           'langchain-community',
           'wikipedia',
           'duckduckgo-search',
           'pyngrok'
       ]
      
       for package in packages:
           try:
               subprocess.run(['pip', 'install', package], check=True, capture_output=True)
               print(f"✅ {package} installed")
           except subprocess.CalledProcessError:
               print(f"⚠  Failed to install {package}")
      
       app_content = '''
    import streamlit as st
    import os
    from langchain_google_genai import ChatGoogleGenerativeAI
    from langchain.agents import create_react_agent, AgentExecutor
    from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
    from langchain.memory import ConversationBufferWindowMemory
    from langchain.prompts import PromptTemplate
    from langchain.callbacks.streamlit import StreamlitCallbackHandler
    from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
    from datetime import datetime
    
    
    # Configuration - Replace with your actual keys
    GOOGLE_API_KEY = "''' + GOOGLE_API_KEY + '''"
    os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
    
    
    class InnovativeAgentTools:
       @staticmethod
       def get_calculator_tool():
           def calculate(expression: str) -> str:
               try:
                   allowed_chars = set('0123456789+-*/.() ')
                   if all(c in allowed_chars for c in expression):
                       result = eval(expression)
                       return f"Result: {result}"
                   else:
                       return "Error: Invalid mathematical expression"
               except Exception as e:
                   return f"Calculation error: {str(e)}"
          
           return Tool(name="Calculator", func=calculate,
                      description="Calculate mathematical expressions. Input should be a valid math expression.")
      
       @staticmethod
       def get_memory_tool(memory_store):
           def save_memory(key_value: str) -> str:
               try:
                   key, value = key_value.split(":", 1)
                   memory_store[key.strip()] = value.strip()
                   return f"Saved '{key.strip()}' to memory"
            except ValueError:
                return "Error: Use format 'key: value'"
          
           def recall_memory(key: str) -> str:
               return memory_store.get(key.strip(), f"No memory found for '{key}'")
          
           return [
               Tool(name="SaveMemory", func=save_memory, description="Save information to memory. Format: 'key: value'"),
               Tool(name="RecallMemory", func=recall_memory, description="Recall saved information. Input: key to recall")
           ]
      
       @staticmethod
       def get_datetime_tool():
           def get_current_datetime(format_type: str = "full") -> str:
               now = datetime.now()
               if format_type == "date":
                   return now.strftime("%Y-%m-%d")
               elif format_type == "time":
                   return now.strftime("%H:%M:%S")
               else:
                   return now.strftime("%Y-%m-%d %H:%M:%S")
          
           return Tool(name="DateTime", func=get_current_datetime,
                      description="Get current date/time. Options: 'date', 'time', or 'full'")
    
    
    class MultiAgentSystem:
       def __init__(self, api_key: str):
           self.llm = ChatGoogleGenerativeAI(
               model="gemini-pro",
               google_api_key=api_key,
               temperature=0.7,
               convert_system_message_to_human=True
           )
           self.memory_store = {}
           self.conversation_memory = ConversationBufferWindowMemory(
               memory_key="chat_history", k=10, return_messages=True
           )
           self.tools = self._initialize_tools()
           self.agent = self._create_agent()
      
       def _initialize_tools(self):
           tools = []
           try:
               tools.extend([
                   DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
                   WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
               ])
           except Exception as e:
               st.warning(f"Search tools may have limited functionality: {e}")
          
           tools.append(InnovativeAgentTools.get_calculator_tool())
           tools.append(InnovativeAgentTools.get_datetime_tool())
           tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
           return tools
      
       def _create_agent(self):
           prompt = PromptTemplate.from_template("""
    🤖 You are an advanced AI assistant with access to multiple tools and persistent memory.
    
    
    AVAILABLE TOOLS:
    {tools}
    
    
    TOOL USAGE FORMAT:
    - Think step by step about what you need to do
    - Use Action: tool_name
    - Use Action Input: your input
    - Wait for Observation
    - Continue until you have a final answer
    
    
    CONVERSATION HISTORY:
    {chat_history}
    
    
    CURRENT QUESTION: {input}
    
    
    REASONING PROCESS:
    {agent_scratchpad}
    
    
    Begin your response with your thought process, then take action if needed.
    """)
          
           agent = create_react_agent(self.llm, self.tools, prompt)
           return AgentExecutor(agent=agent, tools=self.tools, memory=self.conversation_memory,
                              verbose=True, handle_parsing_errors=True, max_iterations=5)
      
       def chat(self, message: str, callback_handler=None):
           try:
               if callback_handler:
                   response = self.agent.invoke({"input": message}, {"callbacks": [callback_handler]})
               else:
                   response = self.agent.invoke({"input": message})
               return response["output"]
           except Exception as e:
               return f"Error processing request: {str(e)}"
    
    
    # Streamlit App
    st.set_page_config(page_title="🚀 Advanced LangChain Agent", page_icon="🤖", layout="wide")
    
    
    st.markdown("""
    <style>
    .main-header {
       background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
       padding: 1rem; border-radius: 10px; color: white; text-align: center; margin-bottom: 2rem;
    }
    .agent-response {
       background-color: #f0f2f6; padding: 1rem; border-radius: 10px;
       border-left: 4px solid #667eea; margin: 1rem 0;
    }
    .memory-card {
       background-color: #e8f4fd; padding: 1rem; border-radius: 8px; margin: 0.5rem 0;
    }
    </style>
    """, unsafe_allow_html=True)
    
    
    st.markdown('<div class="main-header"><h1>🚀 Advanced Multi-Agent System</h1><p>Powered by LangChain + Gemini API</p></div>', unsafe_allow_html=True)
    
    
    with st.sidebar:
       st.header("🔧 Configuration")
       api_key = st.text_input("🔑 Google AI API Key", type="password", value=GOOGLE_API_KEY)
      
       if not api_key:
           st.error("Please enter your Google AI API key")
           st.stop()
      
       st.success("✅ API Key configured")
      
       st.header("🤖 Agent Capabilities")
       st.markdown("- 🔍 Web Search\n- 📚 Wikipedia\n- 🧮 Calculator\n- 🧠 Memory\n- 📅 Date/Time")
      
       if 'agent_system' in st.session_state and st.session_state.agent_system.memory_store:
           st.header("🧠 Memory Store")
           for key, value in st.session_state.agent_system.memory_store.items():
               st.markdown(f'<div class="memory-card"><strong>{key}:</strong> {value}</div>', unsafe_allow_html=True)
    
    
    if 'agent_system' not in st.session_state:
       with st.spinner("🔄 Initializing Agent..."):
           st.session_state.agent_system = MultiAgentSystem(api_key)
       st.success("✅ Agent Ready!")
    
    
    if 'messages' not in st.session_state:
       st.session_state.messages = [{
           "role": "assistant",
           "content": "🤖 Hello! I'm your advanced AI assistant. I can search, calculate, remember information, and more! Try asking me to: calculate something, search for information, or remember a fact about you."
       }]
    
    
    for message in st.session_state.messages:
       with st.chat_message(message["role"]):
           st.markdown(message["content"])
    
    
    if prompt := st.chat_input("Ask me anything..."):
       st.session_state.messages.append({"role": "user", "content": prompt})
       with st.chat_message("user"):
           st.markdown(prompt)
      
       with st.chat_message("assistant"):
           callback_handler = StreamlitCallbackHandler(st.container())
           with st.spinner("🤔 Thinking..."):
               response = st.session_state.agent_system.chat(prompt, callback_handler)
           st.markdown(f'<div class="agent-response">{response}</div>', unsafe_allow_html=True)
           st.session_state.messages.append({"role": "assistant", "content": response})
    
    
    # Example buttons
    st.header("💡 Try These Examples")
    col1, col2, col3 = st.columns(3)
    with col1:
       if st.button("🧮 Calculate 15 * 8 + 32"):
           st.rerun()
    with col2:
       if st.button("🔍 Search AI news"):
           st.rerun()
    with col3:
       if st.button("🧠 Remember my name is Alex"):
           st.rerun()
    '''
      
       with open('streamlit_app.py', 'w') as f:
           f.write(app_content)
      
       print("✅ Streamlit app file created successfully!")
      
       if setup_ngrok_auth(NGROK_AUTH_TOKEN):
           start_streamlit_with_ngrok()
       else:
           print("❌ Ngrok authentication failed. Trying alternative methods...")
           try_alternative_tunnels()

    In the run_in_colab() function, we make it easy to deploy the Streamlit app directly from a Google Colab environment. We begin by installing all required packages, then dynamically generate and write the complete Streamlit app code to a streamlit_app.py file. We verify the presence of a valid ngrok token to enable public access to the app from Colab, and if it’s missing or invalid, we print detailed setup instructions and fall back to alternative tunneling options. This setup allows us to interact with our AI agent from anywhere, all within a few cells in Colab.
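One subtlety of the file-generation step: the code splices GOOGLE_API_KEY into the generated source with raw string concatenation, which would produce invalid Python if the key ever contained a quote character. A hedged sketch of the same step using `{key!r}` so the value is quoted safely (`TEMPLATE` and `write_app` are illustrative names, not from the notebook):

```python
import os
import tempfile

# A tiny stand-in for the generated streamlit_app.py header.
TEMPLATE = ('import os\n'
            'GOOGLE_API_KEY = {key!r}\n'
            'os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY\n')

def write_app(key: str) -> str:
    # !r applies repr(), so embedded quotes cannot break the generated source.
    path = os.path.join(tempfile.mkdtemp(), "streamlit_app.py")
    with open(path, "w") as f:
        f.write(TEMPLATE.format(key=key))
    return path

path = write_app('key-with-"quote"')
source = open(path).read()
compile(source, path, "exec")  # still valid Python despite the quotes
print(path.endswith("streamlit_app.py"))  # True
```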

    def start_streamlit_with_ngrok():
       """Start Streamlit with ngrok tunnel"""
       import subprocess
       import threading
       from pyngrok import ngrok
      
       def start_streamlit():
           subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])
      
       print("🚀 Starting Streamlit server...")
       thread = threading.Thread(target=start_streamlit)
       thread.daemon = True
       thread.start()
      
       time.sleep(5)
      
       try:
           print("🌐 Creating ngrok tunnel...")
           public_url = ngrok.connect(8501)
           print(f"🔗 SUCCESS! Access your app at: {public_url}")
           print("✨ Your Advanced LangChain Agent is now running publicly!")
           print("📱 You can share this URL with others!")
          
           print("⏳ Keeping tunnel alive... Press Ctrl+C to stop")
           try:
               ngrok_process = ngrok.get_ngrok_process()
               ngrok_process.proc.wait()
           except KeyboardInterrupt:
               print("👋 Shutting down...")
               ngrok.kill()
              
       except Exception as e:
           print(f"❌ Ngrok tunnel failed: {e}")
           try_alternative_tunnels()
    
    
    def try_alternative_tunnels():
       """Try alternative tunneling methods"""
       print("🔄 Trying alternative tunnel methods...")
      
       import subprocess
       import threading
       import time  # time.sleep() below requires this import
      
       def start_streamlit():
           subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])
      
       thread = threading.Thread(target=start_streamlit)
       thread.daemon = True
       thread.start()
      
       time.sleep(3)
      
       print("🌐 Streamlit is running on http://localhost:8501")
       print("n📋 ALTERNATIVE TUNNEL OPTIONS:")
       print("1. localtunnel: Run this in a new cell:")
       print("   !npx localtunnel --port 8501")
       print("n2. serveo.net: Run this in a new cell:")
       print("   !ssh -R 80:localhost:8501 serveo.net")
       print("n3. Colab public URL (if available):")
       print("   Use the 'Public URL' button in Colab's interface")
      
       try:
           while True:
               time.sleep(60)
       except KeyboardInterrupt:
           print("👋 Shutting down...")
    
    
    if __name__ == "__main__":
       try:
           get_ipython()
           print("🚀 Google Colab detected - starting setup...")
           run_in_colab()
       except NameError:
           main()

    In this final part, we set up the execution logic to run the app either locally or inside Google Colab. The start_streamlit_with_ngrok() function launches the Streamlit server in a background thread and uses ngrok to expose it publicly, making it easy to access and share. If ngrok fails, try_alternative_tunnels() keeps the local server running and prints alternative tunneling options, such as localtunnel and serveo.net. The __main__ block detects whether we're in Colab (via get_ipython()) and launches the appropriate setup, making the entire deployment process smooth, flexible, and shareable from anywhere.
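    One design note: the fixed time.sleep(5) before opening the tunnel can race a slow Streamlit startup. A more robust pattern polls the port until the server answers; this is a hedged alternative, not code from the notebook, and the helper name, URL, and timeout are our assumptions:

    ```python
    import time
    import urllib.request
    import urllib.error

    def wait_for_streamlit(url: str = "http://localhost:8501", timeout: float = 30.0) -> bool:
        """Poll the Streamlit port until it responds, instead of a fixed sleep."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with urllib.request.urlopen(url, timeout=2):
                    return True  # server responded; safe to open the tunnel now
            except (urllib.error.URLError, OSError):
                time.sleep(0.5)  # not up yet; retry shortly
        return False  # gave up after `timeout` seconds
    ```

    Calling wait_for_streamlit() right before ngrok.connect(8501) would make the tunnel setup deterministic rather than timing-dependent.
    
    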

    In conclusion, we have a fully functional AI agent running inside a sleek Streamlit interface, capable of answering queries, remembering user inputs, and even sharing its services publicly via ngrok. We've seen how easily Streamlit lets us wrap advanced AI functionality in an engaging, user-friendly app. From here, we can expand the agent's tools, plug it into larger workflows, or deploy it as part of our intelligent applications. With Streamlit as the front end and LangChain agents powering the logic, we've built a solid foundation for next-generation interactive AI experiences.


    Check out the full Notebook here. All credit for this research goes to the researchers of this project.

    The post Build an Intelligent Multi-Tool AI Agent Interface Using Streamlit for Seamless Real-Time Interaction appeared first on MarkTechPost.

    Source: Read More
