
    How to Build Autonomous Agents using Prompt Chaining with AI Primitives (No Frameworks)

    April 21, 2025

    Autonomous agents might sound complex, but they don’t have to be. These are AI systems that can make decisions and take actions on their own to achieve a goal – usually by using LLMs, various tools, and memory to reason through a task.

    You can build powerful agentic systems without heavyweight frameworks or orchestration engines. One of the simplest and most effective ways to do that is to use Langbase agentic architectures (built with AI primitives that don’t require a framework to ship scalable AI agentic systems).

    In this article, we’ll dive into one of Langbase’s agentic architectures: prompt chaining. We’ll look at why it’s useful and how to implement it by building a prompt chaining agent.

    Table of Contents

    1. Prerequisites

    2. AI primitives (agentic architecture)

    3. What is prompt chaining?

    4. Prompt chaining architecture

    5. Langbase SDK

    6. Building a prompt chaining agent using Langbase Pipes

      • Step 1: Setup your project

      • Step 2: Get Langbase API Key

      • Step 3: Add LLM API keys

      • Step 4: Add logic in prompt-chaining.ts file

      • Step 5: Run the file

    7. The result

    Prerequisites

    Before we begin creating a prompt chaining agent, you’ll need to have the following setup and tools ready to go.

    In this tutorial, I’ll be using the following tech stack:

    • Langbase – the platform to build and deploy your serverless AI agents.

    • Langbase SDK – a TypeScript AI SDK, designed to work with JavaScript, TypeScript, Node.js, Next.js, React, and the like.

    • OpenAI – to get the LLM key for the preferred model.

    You’ll also need to:

    • Sign up on Langbase to get access to the API key.

• Sign up on OpenAI to generate the LLM key for the model you want to use (for this demo, I’ll be using the openai:gpt-4o-mini model). You can generate the key from the API keys page in your OpenAI account.

    AI Primitives (Agentic Architecture)

    An AI primitive level approach means building AI systems using the most basic building blocks – without relying on heavy abstractions, orchestration engines, or full-blown frameworks.

    Langbase Pipe and Memory agents serve as these building blocks.

    Pipe agents on Langbase are different from other agents. They are serverless AI agents with agentic tools that can work with any language or framework. Pipe agents are easily deployable, and with just one API they let you connect 250+ LLMs to any data to build any developer API workflow.

    Langbase memory agents (long-term memory solution) are designed to acquire, process, retain, and retrieve information seamlessly. They dynamically attach private data to any LLM, enabling context-aware responses in real time and reducing hallucinations. Memory, when connected to a pipe agent, becomes a memory agent.

With these building blocks (AI primitives), you can build entire agentic workflows. Langbase’s agentic architectures serve as boilerplates for building, deploying, and scaling autonomous agents.

    Let’s look at one of the agentic architectures: prompt chaining.

    What is Prompt Chaining?

    Prompt chaining is an agent architecture where a task is broken down into a sequence of prompts. Each step passes its output to the next, enabling the LLM to handle more complex workflows with higher accuracy.

    This is particularly useful for structured tasks like:

    • Document summarization and analysis

    • Multi-step content generation

    • Data transformation and cleanup

    • Content validation and refinement

    Rather than relying on a single prompt to do everything, you split the work into focused steps. This makes it easier to debug, improves output quality, and introduces natural “checkpoints” in your AI workflow.
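
To make the idea concrete, here’s a minimal sketch of a two-step chain in TypeScript. The callLLM parameter is a hypothetical stand-in for however you send a single prompt to an LLM and get text back (it isn’t part of any specific SDK); the point is simply that each step’s output feeds the next step’s prompt.

// Minimal prompt chaining sketch. "callLLM" is a hypothetical helper that
// sends one prompt to an LLM and resolves with its text output.
type CallLLM = (prompt: string) => Promise<string>;

async function summarizeThenExtract(rawText: string, callLLM: CallLLM): Promise<string> {
    // Step 1: a focused prompt that only summarizes.
    const summary = await callLLM(
        `Summarize the following text in two concise sentences:\n${rawText}`
    );

    // Step 2: the previous step's output becomes the next step's input.
    const bullets = await callLLM(
        `List the key points of this summary as bullet points:\n${summary}`
    );

    return bullets;
}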

    Prompt Chaining Architecture

    Here’s a reference architecture explaining the workflow:

[Diagram: prompt chaining workflow: In → LLM Call 1 → Output 1 → Gate → (pass) LLM Call 2 → LLM Call 3 → Out / (fail) Exit]

    This diagram is a visual reference for how prompt chaining can be used to build a lightweight agentic system using just LLM calls and conditional logic – without any heavyweight frameworks.

    Here’s a breakdown of what’s happening in the flow:

1. In → LLM Call 1
• Takes the initial input and runs the first LLM call.

• Produces Output 1.

2. Gate
• Evaluates Output 1 to decide the next step.

• Acts as a conditional checkpoint (for example, success/failure, intent validation, confidence threshold). A small sketch of this gating logic follows the list.

3. If the Gate passes:
• Proceeds to LLM Call 2 with Output 1 as input.

• LLM Call 2 produces Output 2.

• Output 2 goes into LLM Call 3, which generates the final result.

• The final output flows to Out.

4. If the Gate fails:
• The flow terminates early at Exit.

• Further LLM calls are skipped, saving compute and avoiding invalid outputs.
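
The gate itself can be plain conditional logic between calls. Here’s a small sketch, reusing the hypothetical callLLM stand-in from the earlier snippet, of how a gate might short-circuit the chain when the first output fails a simple check (the 10-word threshold mirrors the quality gate used later in this article):

// Gate sketch: ordinary conditional logic between LLM calls.
// "callLLM" is the same hypothetical helper as in the previous sketch.
type CallLLM = (prompt: string) => Promise<string>;

async function chainWithGate(input: string, callLLM: CallLLM): Promise<string | null> {
    // In → LLM Call 1 → Output 1
    const output1 = await callLLM(`Summarize this in two concise sentences:\n${input}`);

    // Gate: a simple checkpoint on Output 1.
    if (output1.split(' ').length < 10) {
        console.warn('Gate failed: summary is too brief. Exiting early.');
        return null; // Exit path: remaining LLM calls are skipped.
    }

    // LLM Call 2 → Output 2 → LLM Call 3 → Out
    const output2 = await callLLM(`List the key features as bullet points:\n${output1}`);
    return callLLM(`Write refined marketing copy from these bullet points:\n${output2}`);
}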

    Langbase SDK

    The Langbase SDK makes it easy to build powerful AI agents using TypeScript. It gives you everything you need to work with any LLM, connect your own embedding models, manage document memory, and build AI agents that can reason and respond.

The SDK is designed to work with Node.js, Next.js, React, or any modern JavaScript stack. You can use it to upload documents, create semantic memory, and run AI workflows (called Pipe agents) with just a few lines of code.

    Langbase is an API-first AI platform, and its TypeScript SDK smooths out the experience – making it easy to get started without dealing with infrastructure. Just drop in your API key, write your logic, and you’re good to go.
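
As a quick preview of what that looks like in practice, here’s a minimal sketch of initializing the client and running a pipe agent. It mirrors the calls used in Step 4 below; the pipe name summary-agent is just an assumed placeholder for a pipe that already exists in your Langbase account.

import { Langbase } from 'langbase';

// Minimal sketch: initialize the client with your Langbase API key.
const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function quickPreview() {
    // Run an existing pipe agent ("summary-agent" is an assumed name here).
    const response = await langbase.pipes.run({
        stream: false,
        name: 'summary-agent',
        messages: [{ role: 'user', content: 'Summarize this product description in two sentences: ...' }]
    });
    console.log(response.completion);
}

quickPreview();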

    Now that you know about Langbase SDK, let’s start building the prompt chaining agent.

    Building a Prompt Chaining Agent using Langbase Pipes

    Let’s walk through a real prompt chaining agentic system built using Langbase Pipe agents (serverless AI agents with unified APIs for every LLM). For this, we’ll be setting up a basic Node.js project.

    We’ll be implementing a sequential product marketing content pipeline that transforms a raw product description into polished marketing copy through three stages (that is, the creation of three Pipe agents):

    First Stage (Summary Agent):

    • Takes a raw product description

    • Condenses it into two concise sentences

    • Has a quality gate that checks if the summary is detailed enough (at least 10 words)

    Second Stage (Features Agent):

    • Takes the summary from stage 1

    • Extracts and formats key product features as bullet points

    Final Stage (Marketing Copy Agent):

    • Takes the bullet points from stage 2

    • Generates refined marketing copy for the product

All stages will use the OpenAI gpt-4o-mini model through the Langbase SDK. The best part is that you can also use a different LLM for each stage (that is, for each Pipe agent).

    What makes this interesting is its pipeline approach. Each stage builds upon the output of the previous stage, with a quality check after the summary stage to ensure the pipeline maintains high standards.

    Let’s begin with the creation of this prompt chaining agentic system.

    Step 1: Setup Your Project

    I’ll be building a basic Node.js app in TypeScript that uses the Langbase SDK to create a scalable prompt chaining agentic system. It will work without any framework, following an AI primitive level approach.

    To get started with that, create a new directory for your project and navigate to it:

mkdir agentic-architecture && cd agentic-architecture
    

    Then initialize a Node.js project and create a TypeScript file by running this command in your terminal:

    npm init -y && touch prompt-chaining.ts
    

The prompt-chaining.ts file will contain the code for creating and running all the agents.

    After this, we will be using the Langbase SDK to create the agents and dotenv to manage environment variables. So, let’s install these dependencies.

    npm i langbase dotenv
    

    Step 2: Get Langbase API Key

    Every request you send to Langbase needs an API key. You can generate API keys from the Langbase studio by following these steps:

    1. Switch to your user or org account.

    2. From the sidebar, click on the Settings menu.

    3. In the developer settings section, click on the Langbase API keys link.

    4. From here you can create a new API key or manage existing ones.

    For more details, check out the Langbase API keys documentation.

After generating the API key, create a .env file in the root of your project and add your Langbase API key to it:

    LANGBASE_API_KEY=xxxxxxxxx
    

    Replace xxxxxxxxx with your Langbase API key.

    Step 3: Add LLM API keys

Once you have the Langbase API key, you’ll also need an LLM API key to run the agents. If you have already set up LLM API keys in your profile, the AI memory and agent pipes will automatically use them. Otherwise, navigate to the LLM API keys page and add keys for different providers like OpenAI, Anthropic, and so on.

Follow these steps to add the LLM API keys in your account using Langbase Studio:

1. Switch to your user or org account.

2. From the sidebar, click on the Settings menu.

3. In the developer settings section, click on the LLM API keys link.

4. From here you can add LLM API keys for different providers like OpenAI, TogetherAI, Anthropic, and so on.

    Step 4: Add logic in prompt-chaining.ts file

    In the prompt-chaining.ts file you created in Step 1, add the following code:

    <span class="hljs-keyword">import</span> dotenv <span class="hljs-keyword">from</span> <span class="hljs-string">'dotenv'</span>;
    <span class="hljs-keyword">import</span> { Langbase } <span class="hljs-keyword">from</span> <span class="hljs-string">'langbase'</span>;
    
    
    dotenv.config();
    
    
    <span class="hljs-keyword">const</span> langbase = <span class="hljs-keyword">new</span> Langbase({
       apiKey: process.env.LANGBASE_API_KEY!
    });
    
    
    <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">main</span>(<span class="hljs-params">inputText: <span class="hljs-built_in">string</span></span>) </span>{
       <span class="hljs-comment">// Prompt chaining steps</span>
       <span class="hljs-keyword">const</span> steps = [
           {
               name: <span class="hljs-string">`summary-agent-<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>`</span>,
               model: <span class="hljs-string">'openai:gpt-4o-mini'</span>,
               description:
                   <span class="hljs-string">'summarize the product description into two concise sentences'</span>,
               prompt: <span class="hljs-string">`Please summarize the following product description into two concise
               sentences:n`</span>
           },
           {
               name: <span class="hljs-string">`features-agent-<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>`</span>,
               model: <span class="hljs-string">'openai:gpt-4o-mini'</span>,
               description: <span class="hljs-string">'extract key product features as bullet points'</span>,
               prompt: <span class="hljs-string">`Based on the following summary, list the key product features as
               bullet points:n`</span>
           },
           {
               name: <span class="hljs-string">`marketing-copy-agent-<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>`</span>,
               model: <span class="hljs-string">'openai:gpt-4o-mini'</span>,
               description:
                   <span class="hljs-string">'generate a polished marketing copy using the bullet points'</span>,
               prompt: <span class="hljs-string">`Using the following bullet points of product features, generate a
               compelling and refined marketing copy for the product, be precise:n`</span>
           }
       ];
    
    
       <span class="hljs-comment">//  Create the pipe agents</span>
       <span class="hljs-keyword">await</span> <span class="hljs-built_in">Promise</span>.all(
           steps.map(<span class="hljs-function"><span class="hljs-params">step</span> =></span>
               langbase.pipes.create({
                   name: step.name,
                   model: step.model,
                   messages: [
                       {
                           role: <span class="hljs-string">'system'</span>,
                           content: <span class="hljs-string">`You are a helpful assistant that can <span class="hljs-subst">${step.description}</span>.`</span>
                       }
                   ]
               })
           )
       );
    
    
       <span class="hljs-comment">// Initialize the data with the raw input.</span>
       <span class="hljs-keyword">let</span> data = inputText;
    
    
       <span class="hljs-keyword">try</span> {
           <span class="hljs-comment">// Process each step in the workflow sequentially.</span>
           <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> step <span class="hljs-keyword">of</span> steps) {
               <span class="hljs-comment">// Call the LLM for the current step.</span>
               <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> langbase.pipes.run({
                   stream: <span class="hljs-literal">false</span>,
                   name: step.name,
                   messages: [{ role: <span class="hljs-string">'user'</span>, content: <span class="hljs-string">`<span class="hljs-subst">${step.prompt}</span> <span class="hljs-subst">${data}</span>`</span> }]
               });
    
    
               data = response.completion;
    
    
               <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Step: <span class="hljs-subst">${step.name}</span> nn Response: <span class="hljs-subst">${data}</span>`</span>);
    
    
               <span class="hljs-comment">// Gate on summary agent output to ensure it is not too brief.</span>
               <span class="hljs-comment">// If summary is less than 10 words, throw an error to stop the workflow.</span>
               <span class="hljs-keyword">if</span> (step.name === <span class="hljs-string">'summary-agent'</span> && data.split(<span class="hljs-string">' '</span>).length < <span class="hljs-number">10</span>) {
                   <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(
                       <span class="hljs-string">'Gate triggered for summary agent. Summary is too brief. Exiting workflow.'</span>
                   );
                   <span class="hljs-keyword">return</span>;
               }
           }
       } <span class="hljs-keyword">catch</span> (error) {
           <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error in main workflow:'</span>, error);
       }
    
    
       <span class="hljs-comment">// The final refined marketing copy</span>
       <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Final Refined Product Marketing Copy:'</span>, data);
    }
    
    
    <span class="hljs-keyword">const</span> inputText = <span class="hljs-string">`Our new smartwatch is a versatile device featuring a high-resolution display,
    long-lasting battery life,fitness tracking, and smartphone connectivity. It's designed for
    everyday use and is water-resistant. With cutting-edge sensors and a sleek design, it's
    perfect for tech-savvy individuals.`</span>;
    
    
    main(inputText);
    

    Here’s a breakdown of the above code:

    Setup and initialization:

    • dotenv loads env variables from the .env file for secure API key access.

    • Langbase is imported from the SDK to interact with the API.

    • A Langbase client instance is created using your API key.

    Define the AI steps (prompt chain):

    • Three AI agents (steps) are defined for a pipeline:

      1. Summarization Agent: Summarizes the input product description into 2 sentences.

      2. Feature Extraction Agent: Extracts key features from the summary as bullet points.

      3. Marketing Copy Agent: Turns bullet points into polished marketing copy.

    • Each agent uses openai:gpt-4o-mini as the LLM.

    Create Langbase Pipes (agents):

    • Langbase pipes are created for each step using langbase.pipes.create(...).

    • Each pipe has a unique name (timestamped) and a system message guiding its purpose.

    Run the workflow (sequential processing):

    • Input text flows through each step one by one:

      • The output of one step becomes the input for the next.

      • Pipes are run using langbase.pipes.run(...).

    • Intermediate outputs are logged after each step.

    Validation check (gatekeeping):

    • If the summary output is too short (less than 10 words), the workflow stops with an error.

    Final Output:

    • After all steps, the final result is a refined marketing copy printed to the console.

For this article, we’re using a demo smartwatch product description as the inputText to view the result.

    Step 5: Run the file

To run the prompt-chaining.ts file and view the results, you need to:

    • Add TypeScript as a dependency

    • Add a script to run TypeScript files

    • Add a TypeScript configuration file

To do that, let’s first install the existing project dependencies using pnpm by running this command in your terminal:

    pnpm install
    

Then, in your terminal again, run this command to add the relevant dev dependencies:

    pnpm add -D typescript ts-node @types/node
    

    After that, create a TypeScript configuration file tsconfig.json:

pnpm exec tsc --init
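
Running tsc --init generates a tsconfig.json with many commented-out options, and you don’t need to customize it for this tutorial. For reference, a minimal configuration along these lines (an illustrative sketch, not the exact generated file) is enough for ts-node to run the script:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "CommonJS",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true
  }
}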
    

    And update the package.json to add the relevant script. This is what your package.json should look like after updating:

{
  "name": "agentic-architectures",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "prompt-chaining": "ts-node prompt-chaining.ts"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "dependencies": {
    "dotenv": "^16.5.0",
    "langbase": "^1.1.55"
  },
  "devDependencies": {
    "@types/node": "^22.14.1",
    "ts-node": "^10.9.2",
    "typescript": "^5.8.3"
  }
}
    

Now let’s run the project:

pnpm run prompt-chaining

    The Result

    After running the project, you’ll see the result of the example smartwatch product description in your console as follows:

Step: summarize-description
Response: This smartwatch combines fitness tracking and smartphone connectivity with a high-resolution display and long-lasting battery. Designed for everyday use with a sleek, water-resistant build, it's ideal for tech enthusiasts.

Step: extract-features
Response: Okay, here are the key product features extracted from the summary:

Fitness Tracking
Smartphone Connectivity
High-Resolution Display
Long-Lasting Battery
Sleek Design
Water-Resistant Build
Designed for Everyday Use

Step: refine-marketing-copy
Response: ## Elevate Your Everyday with Seamless Connectivity and Unrivaled Performance.

Experience the perfect fusion of style and functionality with our revolutionary device, designed to seamlessly integrate into your active lifestyle. Stay motivated and informed with comprehensive Fitness Tracking, while effortlessly staying connected via Smartphone Connectivity.

Immerse yourself in vibrant clarity with the stunning High-Resolution Display, and power through your day without interruption thanks to the Long-Lasting Battery. Encased in a Sleek Design, this device is as stylish as it is practical.

Built to withstand the rigors of daily life, the Water-Resistant Build ensures worry-free wear, rain or shine. Engineered for comfort and performance, this device is Designed for Everyday Use, empowering you to live your best life, effortlessly.
    

    This is how you can build a prompt chaining agentic system with AI primitives (no framework) using the Langbase SDK and Langbase agentic architectures.

    Thank you for reading!

Connect with me by 🙌:

• Subscribing to my YouTube channel if you want to learn about AI and agents.

• Subscribing to my free newsletter, “The Agentic Engineer”, where I share the latest AI and agent news, trends, jobs, and much more.

• Following me on X (Twitter).

    Source: freeCodeCamp Programming Tutorials: Python, JavaScript, Git & More 
