
    A Coding Implementation to Advanced LangGraph Multi-Agent Research Pipeline for Automated Insights Generation

    August 7, 2025

    We build an advanced LangGraph multi-agent system that leverages Google’s free-tier Gemini model for end-to-end research workflows. In this tutorial, we start by installing the necessary libraries (LangGraph, LangChain-Google-GenAI, and LangChain-Core), then walk through defining a structured state, simulating research and analysis tools, and wiring up three specialized agents: Research, Analysis, and Report. Along the way, we show how to simulate web searches, perform data analysis, and orchestrate messages between agents to produce a polished executive report. Check out the Full Codes here.

    !pip install -q langgraph langchain-google-genai langchain-core
    
    
    import os
    from typing import TypedDict, Annotated, List, Dict, Any
    from langgraph.graph import StateGraph, END
    from langchain_google_genai import ChatGoogleGenerativeAI
    from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
    import operator
    import json
    
    
    
    
    os.environ["GOOGLE_API_KEY"] = "Use Your Own API Key"
    
    
    class AgentState(TypedDict):
       messages: Annotated[List[BaseMessage], operator.add]
       current_agent: str
       research_data: dict
       analysis_complete: bool
       final_report: str
    
    
    llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0.7)

    We install the langgraph, langchain-google-genai, and langchain-core packages and import the core modules we need to orchestrate our multi-agent workflow. We set our Google API key, define the AgentState TypedDict to structure messages and workflow state, and initialize the Gemini 1.5 Flash model with a temperature of 0.7 for balanced responses. Check out the Full Codes here.
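
    To make the state design concrete: annotating messages with operator.add tells LangGraph to merge each node’s returned message list into the existing one by concatenation rather than overwriting it. A minimal standalone sketch (our illustration, not part of the tutorial code) of what that reducer does:

    import operator

    # LangGraph applies the reducer when merging a node's partial state update
    existing = ["research note"]              # messages already in the state
    update = ["analysis note"]                # messages returned by a node
    merged = operator.add(existing, update)   # list concatenation: existing + update
    print(merged)                             # ['research note', 'analysis note']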

    def simulate_web_search(query: str) -> str:
       """Simulated web search - replace with real API in production"""
       return f"Search results for '{query}': Found relevant information about {query} including recent developments, expert opinions, and statistical data."
    
    
    def simulate_data_analysis(data: str) -> str:
       """Simulated data analysis tool"""
       return f"Analysis complete: Key insights from the data include emerging trends, statistical patterns, and actionable recommendations."
    
    
    def research_agent(state: AgentState) -> AgentState:
       """Agent that researches a given topic"""
       messages = state["messages"]
       last_message = messages[-1].content
      
       search_results = simulate_web_search(last_message)
      
       prompt = f"""You are a research agent. Based on the query: "{last_message}"
      
       Here are the search results: {search_results}
      
       Conduct thorough research and gather relevant information. Provide structured findings with:
       1. Key facts and data points
       2. Current trends and developments 
       3. Expert opinions and insights
       4. Relevant statistics
      
       Be comprehensive and analytical in your research summary."""
      
       response = llm.invoke([HumanMessage(content=prompt)])
      
       research_data = {
           "topic": last_message,
           "findings": response.content,
           "search_results": search_results,
           "sources": ["academic_papers", "industry_reports", "expert_analyses"],
           "confidence": 0.88,
           "timestamp": "2024-research-session"
       }
      
       return {
           "messages": state["messages"] + [AIMessage(content=f"Research completed on '{last_message}': {response.content}")],
           "current_agent": "analysis",
           "research_data": research_data,
           "analysis_complete": False,
           "final_report": ""
       }

    We define simulate_web_search and simulate_data_analysis as placeholder tools that mock retrieving and analyzing information, then implement research_agent to invoke these simulations, prompt Gemini for a structured research summary, and update our workflow state with the findings. We encapsulate the entire research phase in a single function that advances the agent to the analysis stage once the simulated search and structured LLM output are complete. Check out the Full Codes here.
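
    Because simulate_web_search is explicitly a placeholder (“replace with real API in production”), swapping in a live backend is mostly a matter of keeping the same signature. Here is a hedged sketch of one way to do it; SEARCH_API_URL, SEARCH_API_KEY, and the JSON response shape are hypothetical placeholders, not part of the original tutorial or any specific provider:

    import os
    import requests

    SEARCH_API_URL = "https://api.example-search.com/v1/search"  # hypothetical endpoint


    def real_web_search(query: str, max_results: int = 5) -> str:
       """Drop-in replacement for simulate_web_search backed by an HTTP search API"""
       resp = requests.get(
           SEARCH_API_URL,
           params={"q": query, "limit": max_results},
           headers={"Authorization": f"Bearer {os.environ['SEARCH_API_KEY']}"},
           timeout=10,
       )
       resp.raise_for_status()
       # Assumes a response like {"results": [{"title": ..., "snippet": ...}, ...]}
       results = resp.json().get("results", [])
       return "\n".join(f"- {r['title']}: {r['snippet']}" for r in results)

    As long as the replacement returns a plain string of findings, research_agent can call it in place of the simulation without any other changes.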

    def analysis_agent(state: AgentState) -> AgentState:
       """Agent that analyzes research data and extracts insights"""
       research_data = state["research_data"]
      
       analysis_results = simulate_data_analysis(research_data.get('findings', ''))
      
       prompt = f"""You are an analysis agent. Analyze this research data in depth:
      
       Topic: {research_data.get('topic', 'Unknown')}
       Research Findings: {research_data.get('findings', 'No findings')}
       Analysis Results: {analysis_results}
      
       Provide deep insights including:
       1. Pattern identification and trend analysis
       2. Comparative analysis with industry standards
       3. Risk assessment and opportunities 
       4. Strategic implications
       5. Actionable recommendations with priority levels
      
       Be analytical and provide evidence-based insights."""
      
       response = llm.invoke([HumanMessage(content=prompt)])
      
       return {
           "messages": state["messages"] + [AIMessage(content=f"Analysis completed: {response.content}")],
           "current_agent": "report",
           "research_data": state["research_data"],
           "analysis_complete": True,
           "final_report": ""
       }
    
    
    
    
    def report_agent(state: AgentState) -> AgentState:
       """Agent that generates final comprehensive reports"""
       research_data = state["research_data"]
      
       analysis_message = None
       for msg in reversed(state["messages"]):
           if isinstance(msg, AIMessage) and "Analysis completed:" in msg.content:
               analysis_message = msg.content.replace("Analysis completed: ", "")
               break
      
       prompt = f"""You are a professional report generation agent. Create a comprehensive executive report based on:
      
       🔍 Research Topic: {research_data.get('topic')}
       📊 Research Findings: {research_data.get('findings')}
       🧠 Analysis Results: {analysis_message or 'Analysis pending'}
      
       Generate a well-structured, professional report with these sections:
      
       ## EXECUTIVE SUMMARY  
       ## KEY RESEARCH FINDINGS 
       [Detail the most important discoveries and data points]
      
       ## ANALYTICAL INSIGHTS
       [Present deep analysis, patterns, and trends identified]
      
       ## STRATEGIC RECOMMENDATIONS
       [Provide actionable recommendations with priority levels]
      
       ## RISK ASSESSMENT & OPPORTUNITIES
       [Identify potential risks and opportunities]
      
       ## CONCLUSION & NEXT STEPS
       [Summarize and suggest follow-up actions]
      
       Make the report professional, data-driven, and actionable."""
      
       response = llm.invoke([HumanMessage(content=prompt)])
      
       return {
           "messages": state["messages"] + [AIMessage(content=f"<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" /> FINAL REPORT GENERATED:nn{response.content}")],
           "current_agent": "complete",
           "research_data": state["research_data"],
           "analysis_complete": True,
           "final_report": response.content
       }

    We implement analysis_agent to take the simulated research findings, run them through our mock data analysis tool, prompt Gemini to produce in-depth insights and strategic recommendations, and then transition the workflow to the report stage. We build report_agent to extract the latest analysis from the message history and craft a structured executive report via Gemini, with sections ranging from summary to next steps, marking the workflow as complete by storing the final report in the state. Check out the Full Codes here.
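
    Since report_agent stores the finished document in final_report, persisting it takes only a few lines. An optional helper (our addition, not in the original tutorial) that writes the report to a timestamped Markdown file:

    from datetime import datetime, timezone
    from pathlib import Path


    def save_report(state: AgentState, out_dir: str = "reports") -> Path:
       """Write the final report from the workflow state to a Markdown file"""
       Path(out_dir).mkdir(exist_ok=True)
       stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
       path = Path(out_dir) / f"report-{stamp}.md"
       path.write_text(state["final_report"], encoding="utf-8")
       return path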

    def should_continue(state: AgentState) -> str:
       """Determine which node should run next. Each agent already stores the
       *next* stage in current_agent, so we return it when it names a node
       and end the graph once the report stage has handed off."""
       next_agent = state.get("current_agent", "research")
      
       if next_agent in ("analysis", "report"):
           return next_agent
       return END
    
    
    workflow = StateGraph(AgentState)
    
    
    workflow.add_node("research", research_agent)
    workflow.add_node("analysis", analysis_agent)
    workflow.add_node("report", report_agent)
    
    
    workflow.add_conditional_edges(
       "research",
       should_continue,
       {"analysis": "analysis", END: END}
    )
    
    
    workflow.add_conditional_edges(
       "analysis",
       should_continue,
       {"report": "report", END: END}
    )
    
    
    workflow.add_conditional_edges(
       "report",
       should_continue,
       {END: END}
    )
    
    
    workflow.set_entry_point("research")
    
    
    app = workflow.compile()
    
    
    def run_research_assistant(query: str):
       """Run the complete research workflow"""
       initial_state = {
           "messages": [HumanMessage(content=query)],
           "current_agent": "research",
           "research_data": {},
           "analysis_complete": False,
           "final_report": ""
       }
      
       print(f"<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f50d.png" alt="🔍" class="wp-smiley" /> Starting Multi-Agent Research on: '{query}'")
       print("=" * 60)
      
       current_state = initial_state
      
       print("<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f916.png" alt="🤖" class="wp-smiley" /> Research Agent: Gathering information...")
       current_state = research_agent(current_state)
       print("<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/2705.png" alt="✅" class="wp-smiley" /> Research phase completed!n")
      
       print("<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f9e0.png" alt="🧠" class="wp-smiley" /> Analysis Agent: Analyzing findings...")
       current_state = analysis_agent(current_state)
       print("<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/2705.png" alt="✅" class="wp-smiley" /> Analysis phase completed!n")
      
       print("<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4ca.png" alt="📊" class="wp-smiley" /> Report Agent: Generating comprehensive report...")
       final_state = report_agent(current_state)
       print("<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/2705.png" alt="✅" class="wp-smiley" /> Report generation completed!n")
      
       print("=" * 60)
       print("<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f3af.png" alt="🎯" class="wp-smiley" /> MULTI-AGENT WORKFLOW COMPLETED SUCCESSFULLY!")
       print("=" * 60)
      
       final_report = final_state['final_report']
       print(f"n<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4cb.png" alt="📋" class="wp-smiley" /> COMPREHENSIVE RESEARCH REPORT:n")
       print(final_report)
      
       return final_state

    We construct a StateGraph, add our three agents as nodes with conditional edges dictated by should_continue, set the entry point to “research,” and compile the graph into an executable workflow. We then define run_research_assistant() to initialize the state, invoke each agent in sequence (research, analysis, and report), print status updates, and return the final state. Note that this helper calls the agent functions directly so each phase can be narrated step by step, rather than running the compiled graph; a sketch of invoking the graph itself follows below. Check out the Full Codes here.
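
    For reference, the compiled graph can also drive the run end to end. A minimal sketch of that invocation (one caveat: because each agent returns the full message list and the operator.add reducer concatenates, messages will be duplicated when routed through the graph; final_report is unaffected):

    initial_state = {
       "messages": [HumanMessage(content="Impact of renewable energy on global markets")],
       "current_agent": "research",
       "research_data": {},
       "analysis_complete": False,
       "final_report": ""
    }

    final_state = app.invoke(initial_state)  # routes research -> analysis -> report -> END
    print(final_state["final_report"])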

    if __name__ == "__main__":
       print("<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f680.png" alt="🚀" class="wp-smiley" /> Advanced LangGraph Multi-Agent System Ready!")
       print("<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f527.png" alt="🔧" class="wp-smiley" /> Remember to set your GOOGLE_API_KEY!")
      
       example_queries = [
           "Impact of renewable energy on global markets",
           "Future of remote work post-pandemic"
       ]
      
       print(f"n<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4a1.png" alt="💡" class="wp-smiley" /> Example queries you can try:")
       for i, query in enumerate(example_queries, 1):
           print(f"  {i}. {query}")
      
       print(f"n<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f3af.png" alt="🎯" class="wp-smiley" /> Usage: run_research_assistant('Your research question here')")
      
       result = run_research_assistant("What are emerging trends in sustainable technology?")

    We define the entry point that kicks off our multi-agent system: it prints a readiness message and a set of example queries, and reminds us to set the Google API key. We then execute a test run on “emerging trends in sustainable technology,” printing the end-to-end workflow output.

    In conclusion, we reflect on how this modular setup empowers us to rapidly prototype complex workflows. Each agent encapsulates a distinct phase of intelligence gathering, interpretation, and delivery, allowing us to swap in real APIs or extend the pipeline with new tools as our needs evolve. We encourage you to experiment with custom tools, adjust the state structure, and explore alternate LLMs. This framework is designed to grow with your research and product goals. As we iterate, we continually refine our agents’ prompts and capabilities, ensuring that our multi-agent system remains both robust and adaptable to any domain.
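
    As one concrete extension path, a new stage can follow the same pattern as the existing agents. A hypothetical fact_check_agent (our sketch, not part of the tutorial) that reviews the finished report:

    def fact_check_agent(state: AgentState) -> AgentState:
       """Hypothetical extra stage: ask the LLM to flag unsupported claims in the report"""
       prompt = f"Review this report and flag any unsupported claims:\n\n{state['final_report']}"
       response = llm.invoke([HumanMessage(content=prompt)])
       return {**state, "messages": state["messages"] + [AIMessage(content=response.content)]}


    # To wire it in, register the node and route report -> fact_check -> END
    # (replacing the report -> END mapping used when the graph was built):
    # workflow.add_node("fact_check", fact_check_agent)
    # workflow.add_edge("fact_check", END)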


    Check out the Full Codes here. Feel free to check out our GitHub Page for Tutorials, Codes and Notebooks. Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter.

    The post A Coding Implementation to Advanced LangGraph Multi-Agent Research Pipeline for Automated Insights Generation appeared first on MarkTechPost.
