
    Building a Secure and Memory-Enabled Cipher Workflow for AI Agents with Dynamic LLM Selection and API Integration

    August 11, 2025

    In this tutorial, we walk through building a compact but fully functional Cipher-based workflow. We start by securely capturing our Gemini API key in the Colab UI without exposing it in code. We then implement a dynamic LLM selection function that can automatically switch between OpenAI, Gemini, or Anthropic based on which API key is available. The setup phase ensures Node.js and the Cipher CLI are installed, after which we programmatically generate a cipher.yml configuration to enable a memory agent with long-term recall. We create helper functions to run Cipher commands directly from Python, store key project decisions as persistent memories, retrieve them on demand, and finally spin up Cipher in API mode for external integration.

    import os, getpass

    # Capture the Gemini API key without echoing it in the notebook output.
    os.environ["GEMINI_API_KEY"] = getpass.getpass("Enter your Gemini API key: ").strip()

    import subprocess, tempfile, pathlib, textwrap, time, requests, shlex


    def choose_llm():
        # Prefer OpenAI, then Gemini, then Anthropic, based on which API key is set.
        if os.getenv("OPENAI_API_KEY"):
            return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
        if os.getenv("GEMINI_API_KEY"):
            return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
        if os.getenv("ANTHROPIC_API_KEY"):
            return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
        raise RuntimeError("Set one API key before running.")

    We start by securely entering our Gemini API key using getpass so it stays hidden in the Colab UI. We then define a choose_llm() function that checks our environment variables and automatically selects the appropriate LLM provider, model, and key based on what is available.
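    As a quick sanity check, a minimal sketch like the one below (assuming the setup cell above has already run and at least one key is set) prints which provider, model, and key variable the rest of the workflow will use:

    # Confirm which provider choose_llm() resolves to before continuing.
    provider, model, key_env = choose_llm()
    print(f"Using provider={provider}, model={model}, key from ${key_env}")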

    def run(cmd, check=True, env=None):
        # Run a shell command, echoing the command and its output for visibility.
        print("▸", cmd)
        p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
        if p.stdout: print(p.stdout)
        if p.stderr: print(p.stderr)
        if check and p.returncode != 0:
            raise RuntimeError(f"Command failed: {cmd}")
        return p

    We create a run() helper function that executes shell commands, prints both stdout and stderr for visibility, and raises an error if the command fails when check is enabled, making our workflow execution more transparent and reliable.
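    For example, a quick call like this (assuming a Unix-like shell, as in Colab) echoes the command, prints whatever it produced, and returns the completed process without raising, because check is disabled:

    # Non-fatal probe: works whether or not Node.js is installed yet.
    p = run("node --version || echo 'node not installed yet'", check=False)
    print("exit code:", p.returncode)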

    def ensure_node_and_cipher():
        # Node.js/npm may already be present on Colab, so a failed apt step is not fatal.
        run("sudo apt-get update -y && sudo apt-get install -y nodejs npm", check=False)
        run("npm install -g @byterover/cipher")

    We define ensure_node_and_cipher() to install Node.js, npm, and the Cipher CLI globally, ensuring our environment has all the necessary dependencies before running any Cipher-related commands.
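    After installation, a couple of non-fatal checks like these (a sketch, not part of the original flow) confirm that the global npm install actually put the cipher binary on the PATH before we depend on it:

    # Sanity checks: locate the CLI and list the globally installed package.
    run("which cipher", check=False)
    run("npm ls -g @byterover/cipher --depth=0", check=False)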

    def write_cipher_yml(workdir, provider, model, key_env):
        # Render the agent config; the ${...} placeholder lets Cipher read the key from the environment.
        cfg = """
    llm:
      provider: {provider}
      model: {model}
      apiKey: ${key_env}
    systemPrompt:
      enabled: true
      content: |
        You are an AI programming assistant with long-term memory of prior decisions.
    embedding:
      disabled: true
    mcpServers:
      filesystem:
        type: stdio
        command: npx
        args: ['-y','@modelcontextprotocol/server-filesystem','.']
    """.format(provider=provider, model=model, key_env=key_env)

        (workdir / "memAgent").mkdir(parents=True, exist_ok=True)
        (workdir / "memAgent" / "cipher.yml").write_text(cfg.strip() + "\n")

    We implement write_cipher_yml() to generate a cipher.yml configuration file inside a memAgent folder, setting the chosen LLM provider, model, and API key, enabling a system prompt with long-term memory, and registering a filesystem MCP server for file operations.
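    To see exactly what the agent will load, a short sketch like this (reusing only the helpers defined above, with the Gemini values from choose_llm()) renders the configuration into a throwaway directory and prints it:

    # Preview the rendered cipher.yml without touching the real working directory.
    preview_dir = pathlib.Path(tempfile.mkdtemp(prefix="cipher_cfg_preview_"))
    write_cipher_yml(preview_dir, "gemini", "gemini-2.5-flash", "GEMINI_API_KEY")
    print((preview_dir / "memAgent" / "cipher.yml").read_text())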

    def cipher_once(text, env=None, cwd=None):
        # shlex.quote keeps quotes and spaces in the prompt from breaking the shell command.
        cmd = f'cipher {shlex.quote(text)}'
        p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env, cwd=cwd)
        print("Cipher says:\n", p.stdout or p.stderr)
        return p.stdout.strip() or p.stderr.strip()

    We define cipher_once() to run a single Cipher CLI command with the provided text, capture and display its output, and return the response, allowing us to interact with Cipher programmatically from Python.
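    Because the text is wrapped with shlex.quote, prompts containing quotes or shell metacharacters reach the CLI as a single intact argument. A tiny illustration (the workdir used in the commented call is only created later, in main()):

    # Quotes and flags inside the prompt stay literal thanks to shlex.quote.
    tricky = 'Remember: the deploy script is "./deploy.sh --env prod" (run it from the repo root).'
    print("Shell-safe command:", f'cipher {shlex.quote(tricky)}')
    # cipher_once(tricky, env=os.environ.copy(), cwd=str(workdir))  # once workdir/memAgent/cipher.yml exists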

    def start_api(env, cwd):
        # Launch the Cipher API server in the background and wait until /health responds.
        proc = subprocess.Popen("cipher --mode api", shell=True, env=env, cwd=cwd,
                                stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
        for _ in range(30):
            try:
                r = requests.get("http://127.0.0.1:3000/health", timeout=2)
                if r.ok:
                    print("API /health:", r.text)
                    break
            except requests.RequestException:
                pass
            time.sleep(1)
        return proc

    We create start_api() to launch Cipher in API mode as a subprocess, then repeatedly poll its /health endpoint until it responds, ensuring the API server is ready before proceeding.
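    Once the health check passes, the returned Popen handle is what we use to stop the server; a minimal shutdown pattern (a sketch that assumes the env and workdir created in main() are in scope) looks like this:

    # Start the server, hit /health once more, and always shut the process down.
    api_proc = start_api(os.environ.copy(), str(workdir))
    try:
        print("health status:", requests.get("http://127.0.0.1:3000/health", timeout=2).status_code)
    finally:
        api_proc.terminate()
        api_proc.wait(timeout=10)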

    def main():
        provider, model, key_env = choose_llm()
        ensure_node_and_cipher()
        workdir = pathlib.Path(tempfile.mkdtemp(prefix="cipher_demo_"))
        write_cipher_yml(workdir, provider, model, key_env)
        env = os.environ.copy()

        # Store two project decisions as long-term memories.
        cipher_once("Store decision: use pydantic for config validation; pytest fixtures for testing.", env, str(workdir))
        cipher_once("Remember: follow conventional commits; enforce black + isort in CI.", env, str(workdir))

        # Recall them with a natural-language query.
        cipher_once("What did we standardize for config validation and Python formatting?", env, str(workdir))

        # Briefly expose Cipher as an API server, then shut it down.
        api_proc = start_api(env, str(workdir))
        time.sleep(3)
        api_proc.terminate()


    if __name__ == "__main__":
        main()

    In main(), we select the LLM provider, install dependencies, and create a temporary working directory with a cipher.yml configuration. We then store key project decisions in Cipher’s memory, query them back, and finally start the Cipher API server briefly before shutting it down, demonstrating both CLI and API-based interactions.

    In conclusion, we have a working Cipher environment that securely manages API keys, selects the right LLM provider automatically, and configures a memory-enabled agent entirely through Python automation. Our implementation includes decision logging, memory retrieval, and a live API endpoint, all orchestrated in a Notebook/Colab-friendly workflow. This makes the setup reusable for other AI-assisted development pipelines, allowing us to store and query project knowledge programmatically while keeping the environment lightweight and easy to redeploy.



    The post Building a Secure and Memory-Enabled Cipher Workflow for AI Agents with Dynamic LLM Selection and API Integration appeared first on MarkTechPost.

