
    Develop a Multi-Tool AI Agent with Secure Python Execution using Riza and Gemini

    June 11, 2025

    In this tutorial, we’ll harness Riza’s secure Python execution as the cornerstone of a powerful, tool-augmented AI agent in Google Colab. Beginning with seamless API key management through Colab secrets, environment variables, or hidden prompts, we’ll configure your Riza credentials to enable sandboxed, audit-ready code execution. We’ll integrate Riza’s ExecPython tool into a LangChain agent alongside Google’s Gemini generative model, define an AdvancedCallbackHandler that captures both tool invocations and Riza execution logs, and build custom utilities for complex math and in-depth text analysis.

    %pip install --upgrade --quiet langchain-community langchain-google-genai rizaio python-dotenv
    
    
    import os
    from typing import Dict, Any, List
    from datetime import datetime
    import json
    import getpass
    from google.colab import userdata

    We will install and upgrade the core libraries, LangChain Community extensions, Google Gemini integration, Riza’s secure execution package, and dotenv support, quietly in Colab. We then import standard utilities (e.g., os, datetime, json), typing annotations, secure input via getpass, and Colab’s user data API to manage environment variables and user secrets seamlessly.

    def setup_api_keys():
        """Set up API keys using multiple secure methods."""

        try:
            os.environ['GOOGLE_API_KEY'] = userdata.get('GOOGLE_API_KEY')
            os.environ['RIZA_API_KEY'] = userdata.get('RIZA_API_KEY')
            print("✅ API keys loaded from Colab secrets")
            return True
        except Exception:
            pass

        if os.getenv('GOOGLE_API_KEY') and os.getenv('RIZA_API_KEY'):
            print("✅ API keys found in environment")
            return True

        try:
            if not os.getenv('GOOGLE_API_KEY'):
                google_key = getpass.getpass("🔑 Enter your Google Gemini API key: ")
                os.environ['GOOGLE_API_KEY'] = google_key

            if not os.getenv('RIZA_API_KEY'):
                riza_key = getpass.getpass("🔑 Enter your Riza API key: ")
                os.environ['RIZA_API_KEY'] = riza_key

            print("✅ API keys set securely via input")
            return True
        except Exception:
            print("❌ Failed to set API keys")
            return False


    if not setup_api_keys():
        print("⚠️  Please set up your API keys using one of these methods:")
        print("   1. Colab Secrets: Go to 🔑 in left panel, add GOOGLE_API_KEY and RIZA_API_KEY")
        print("   2. Environment: Set GOOGLE_API_KEY and RIZA_API_KEY before running")
        print("   3. Manual input: Run the cell and enter keys when prompted")
        exit()

    The above cell defines a setup_api_keys() function that securely retrieves your Google Gemini and Riza API keys by first attempting to load them from Colab secrets, then falling back to existing environment variables, and finally prompting you to enter them via hidden input if needed. If none of these methods succeed, it prints instructions on how to provide your keys and exits the notebook.

    from langchain_community.tools.riza.command import ExecPython
    from langchain_google_genai import ChatGoogleGenerativeAI
    from langchain.agents import AgentExecutor, create_tool_calling_agent
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.messages import HumanMessage, AIMessage
    from langchain.memory import ConversationBufferWindowMemory
    from langchain.tools import Tool
    from langchain.callbacks.base import BaseCallbackHandler

    We import Riza’s ExecPython tool alongside LangChain’s core components for building a tool‐calling agent, namely the Gemini LLM wrapper (ChatGoogleGenerativeAI), the agent executor and creation functions (AgentExecutor, create_tool_calling_agent), the prompt and message templates, conversation memory buffer, generic Tool wrapper, and the base callback handler for logging and monitoring agent actions. These building blocks let you assemble, configure, and track a memory-enabled, multi-tool AI agent in Colab.

    class AdvancedCallbackHandler(BaseCallbackHandler):
        """Enhanced callback handler for detailed logging and metrics."""
       
        def __init__(self):
            self.execution_log = []
            self.start_time = None
            self.token_count = 0
       
        def on_agent_action(self, action, **kwargs):
            timestamp = datetime.now().strftime("%H:%M:%S")
            self.execution_log.append({
                "timestamp": timestamp,
                "action": action.tool,
                "input": str(action.tool_input)[:100] + "..." if len(str(action.tool_input)) > 100 else str(action.tool_input)
            })
            print(f"🔧 [{timestamp}] Using tool: {action.tool}")
       
        def on_agent_finish(self, finish, **kwargs):
            timestamp = datetime.now().strftime("%H:%M:%S")
            print(f"✅ [{timestamp}] Agent completed successfully")
       
        def get_execution_summary(self):
            return {
                "total_actions": len(self.execution_log),
                "execution_log": self.execution_log
            }
    
    
    class MathTool:
        """Advanced mathematical operations tool."""
       
        @staticmethod
        def complex_calculation(expression: str) -> str:
            """Evaluate complex mathematical expressions safely."""
            try:
                import math
                import numpy as np
               
                safe_dict = {
                    "__builtins__": {},
                    "abs": abs, "round": round, "min": min, "max": max,
                    "sum": sum, "len": len, "pow": pow,
                    "math": math, "np": np,
                    "sin": math.sin, "cos": math.cos, "tan": math.tan,
                    "log": math.log, "sqrt": math.sqrt, "pi": math.pi, "e": math.e
                }
               
                result = eval(expression, safe_dict)
                return f"Result: {result}"
            except Exception as e:
                return f"Math Error: {str(e)}"
    
    
    class TextAnalyzer:
        """Advanced text analysis tool."""
       
        @staticmethod
        def analyze_text(text: str) -> str:
            """Perform comprehensive text analysis."""
            try:
                char_freq = {}
                for char in text.lower():
                    if char.isalpha():
                        char_freq[char] = char_freq.get(char, 0) + 1
               
                words = text.split()
                word_count = len(words)
                avg_word_length = sum(len(word) for word in words) / max(word_count, 1)
               
                specific_chars = {}
                for char in set(text.lower()):
                    if char.isalpha():
                        specific_chars[char] = text.lower().count(char)
               
                analysis = {
                    "total_characters": len(text),
                    "total_words": word_count,
                    "average_word_length": round(avg_word_length, 2),
                    "character_frequencies": dict(sorted(char_freq.items(), key=lambda x: x[1], reverse=True)[:10]),
                    "specific_character_counts": specific_chars
                }
               
                return json.dumps(analysis, indent=2)
            except Exception as e:
                return f"Analysis Error: {str(e)}"

    The above cell brings together three essential pieces: an AdvancedCallbackHandler that captures every tool invocation with a timestamped log and can summarize the total actions taken; a MathTool class that safely evaluates complex mathematical expressions in a restricted environment to prevent unwanted operations; and a TextAnalyzer class that computes detailed text statistics, such as character frequencies, word counts, and average word length, and returns the results as formatted JSON.
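To see what the agent gets back from the text_analyzer tool, the same statistics can be computed standalone. This sketch mirrors the core logic of TextAnalyzer.analyze_text on one of the sample questions used later in the tutorial (`quick_text_stats` is an illustrative stand-in, not part of the tutorial's code):

```python
import json

def quick_text_stats(text):
    """Standalone sketch of the statistics TextAnalyzer.analyze_text computes:
    character frequencies, word count, and average word length."""
    freq = {}
    for ch in text.lower():
        if ch.isalpha():
            freq[ch] = freq.get(ch, 0) + 1
    words = text.split()
    return {
        "total_characters": len(text),
        "total_words": len(words),
        "average_word_length": round(sum(len(w) for w in words) / max(len(words), 1), 2),
        "character_frequencies": dict(
            sorted(freq.items(), key=lambda x: x[1], reverse=True)[:10]
        ),
    }

print(json.dumps(quick_text_stats("How many r's are in strawberry?"), indent=2))
# 'r' appears 5 times across 6 words and 31 characters
```

This is exactly the kind of ground truth the agent's text_analyzer answers should match on the "How many r's are in strawberry?" test question.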

    def validate_api_keys():
        """Validate API keys before creating agents."""
        try:
            test_llm = ChatGoogleGenerativeAI(
                model="gemini-1.5-flash",  
                temperature=0
            )
            test_llm.invoke("test")
            print("✅ Gemini API key validated")
           
            test_tool = ExecPython()
            print("✅ Riza API key validated")
           
            return True
        except Exception as e:
            print(f"❌ API key validation failed: {str(e)}")
            print("Please check your API keys and try again")
            return False
    
    
    if not validate_api_keys():
        exit()
    
    
    python_tool = ExecPython()
    math_tool = Tool(
        name="advanced_math",
        description="Perform complex mathematical calculations and evaluations",
        func=MathTool.complex_calculation
    )
    text_analyzer_tool = Tool(
        name="text_analyzer",
        description="Analyze text for character frequencies, word statistics, and specific character counts",
        func=TextAnalyzer.analyze_text
    )
    
    
    tools = [python_tool, math_tool, text_analyzer_tool]
    
    
    try:
        llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.1,
            max_tokens=2048,
            top_p=0.8,
            top_k=40
        )
        print("✅ Gemini model initialized successfully")
    except Exception as e:
        print(f"⚠️  Gemini initialization with custom settings failed, retrying with defaults: {e}")
        llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.1,
            max_tokens=2048
        )
    

    In this cell, we first define and run validate_api_keys() to ensure that both the Gemini and Riza credentials work, attempting a dummy LLM call and instantiating the Riza ExecPython tool; we exit the notebook if validation fails. We then instantiate python_tool for secure code execution, wrap our MathTool and TextAnalyzer methods into LangChain Tool objects, and collect them into the tools list. Finally, we initialize the Gemini Flash model with custom sampling settings (temperature, max_tokens, top_p, top_k); if that configuration fails, we gracefully fall back to the same model with simpler default settings.
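The try/except around the model constructor is an instance of a generic "primary, then fallback" initialization pattern. Stripped of the LangChain specifics (`init_with_fallback` is an illustrative helper, not from the tutorial), the pattern looks like this:

```python
def init_with_fallback(primary, fallback):
    """Try the primary factory; if it raises, report the error
    and construct the fallback instead."""
    try:
        return primary()
    except Exception as e:
        print(f"primary init failed ({e}); using fallback")
        return fallback()
```

In the tutorial's cell, the primary factory is the ChatGoogleGenerativeAI call with top_p/top_k tuning, and the fallback is the same model with only temperature and max_tokens set; the same shape works for any expensive constructor that can fail at import or network time.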

    prompt_template = ChatPromptTemplate.from_messages([
        ("system", """You are an advanced AI assistant with access to powerful tools.
    
    
    Key capabilities:
    - Python code execution for complex computations
    - Advanced mathematical operations
    - Text analysis and character counting
    - Problem decomposition and step-by-step reasoning
    
    
    Instructions:
    1. Always break down complex problems into smaller steps
    2. Use the most appropriate tool for each task
    3. Verify your results when possible
    4. Provide clear explanations of your reasoning
    5. For text analysis questions (like counting characters), use the text_analyzer tool first, then verify with Python if needed
    
    
    Be precise, thorough, and helpful."""),
    ("placeholder", "{chat_history}"),
    ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])
    
    
    memory = ConversationBufferWindowMemory(
        k=5,  
        return_messages=True,
        memory_key="chat_history"
    )
    
    
    callback_handler = AdvancedCallbackHandler()
    
    
    agent = create_tool_calling_agent(llm, tools, prompt_template)
    agent_executor = AgentExecutor(
        agent=agent,
        tools=tools,
        verbose=True,
        memory=memory,
        callbacks=[callback_handler],
        max_iterations=10,
        early_stopping_method="generate"
    )
    

    This cell constructs the agent’s “brain” and workflow: it defines a structured ChatPromptTemplate that instructs the system on its toolset and reasoning style, sets up a sliding-window conversation memory to retain the last five exchanges, and instantiates the AdvancedCallbackHandler for real-time logging. It then creates a tool‐calling agent by binding the Gemini LLM, custom tools, and prompt template, and wraps it in an AgentExecutor that manages execution (up to ten steps), leverages memory for context, streams verbose output, and halts cleanly once the agent generates a final response.
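The k=5 window means older turns silently drop out of the agent's context rather than accumulating forever. A pure-Python sketch of that eviction behaviour (`WindowMemory` is an illustrative stand-in for ConversationBufferWindowMemory, not the LangChain class):

```python
from collections import deque

class WindowMemory:
    """Keeps only the last k (human, ai) exchanges, mirroring the
    sliding-window behaviour of ConversationBufferWindowMemory(k=5)."""
    def __init__(self, k=5):
        self.exchanges = deque(maxlen=k)  # deque evicts oldest automatically

    def save(self, human, ai):
        self.exchanges.append((human, ai))

    def history(self):
        return list(self.exchanges)

mem = WindowMemory(k=5)
for i in range(8):
    mem.save(f"question {i}", f"answer {i}")
print(len(mem.history()))   # 5 — the three oldest exchanges were evicted
```

This is why k is a trade-off: a larger window preserves more context for follow-up questions but grows the prompt (and token cost) on every turn.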

    def ask_question(question: str) -> Dict[str, Any]:
        """Ask a question to the advanced agent and return detailed results."""
        print(f"\n🤖 Processing: {question}")
        print("=" * 50)
       
        try:
            result = agent_executor.invoke({"input": question})
           
            output = result.get("output", "No output generated")
           
            print("\n📊 Execution Summary:")
            summary = callback_handler.get_execution_summary()
            print(f"Tools used: {summary['total_actions']}")
           
            return {
                "question": question,
                "answer": output,
                "execution_summary": summary,
                "success": True
            }
       
        except Exception as e:
            print(f"❌ Error: {str(e)}")
            return {
                "question": question,
                "error": str(e),
                "success": False
            }
    
    
    test_questions = [
        "How many r's are in strawberry?",
        "Calculate the compound interest on $1000 at 5% for 3 years",
        "Analyze the word frequency in the sentence: 'The quick brown fox jumps over the lazy dog'",
        "What's the fibonacci sequence up to the 10th number?"
    ]
    
    
    print("🚀 Advanced Gemini Agent with Riza - Ready!")
    print("🔐 API keys configured securely")
    print("Testing with sample questions...\n")
    
    
    results = []
    for question in test_questions:
        result = ask_question(question)
        results.append(result)
        print("\n" + "="*80 + "\n")
    
    
    print("📈 FINAL SUMMARY:")
    successful = sum(1 for r in results if r["success"])
    print(f"Successfully processed: {successful}/{len(results)} questions")

    Finally, we define a helper function, ask_question(), that sends a user query to the agent executor, prints the question header, captures the agent’s response (or error), and then outputs a brief execution summary (showing how many tool calls were made). It then supplies a list of sample questions, covering counting characters, computing compound interest, analyzing word frequency, and generating a Fibonacci sequence, and iterates through them, invoking the agent on each and collecting the results. After running all tests, it prints a concise “FINAL SUMMARY” indicating how many queries were processed successfully, confirming that your Advanced Gemini + Riza agent is up and running in Colab.
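For reference, the compound-interest question in the test list has a closed-form answer that the agent's math tools should reproduce. Assuming annual compounding (the question doesn't specify a compounding frequency), the expected figure can be checked directly:

```python
def compound_interest(principal, rate, years, n=1):
    """Interest earned (not the final balance), compounded n times per year:
    A = P * (1 + r/n)^(n*t), interest = A - P."""
    amount = principal * (1 + rate / n) ** (n * years)
    return round(amount - principal, 2)

# $1000 at 5% for 3 years, compounded annually: interest ≈ 157.63
print(compound_interest(1000, 0.05, 3))
```

If the agent routes this question to the Riza ExecPython tool, its sandboxed computation should land on the same ~$157.63 of interest (a final balance of ~$1157.63).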

    In conclusion, by centering the architecture on Riza’s secure execution environment, we’ve created an AI agent that generates insightful responses via Gemini while also running arbitrary Python code in a fully sandboxed, monitored context. The integration of Riza’s ExecPython tool ensures that every computation, from advanced numerical routines to dynamic text analyses, is executed with rigorous security and transparency. With LangChain orchestrating tool calls and a memory buffer maintaining context, we now have a modular framework ready for real-world tasks such as automated data processing, research prototyping, or educational demos.


    Check out the Notebook. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 99k+ ML SubReddit and Subscribe to our Newsletter.

    The post Develop a Multi-Tool AI Agent with Secure Python Execution using Riza and Gemini appeared first on MarkTechPost.
