    Solving LLM Hallucinations in Conversational, Customer-Facing Use Cases

    June 23, 2025

    Or: Why “Can we turn off generation” might be the smartest question in generative AI

    Not long ago, I found myself in a meeting with technical leaders from a large enterprise. We were discussing Parlant as a solution for building fluent yet tightly controlled conversational agents. The conversation was going well—until someone asked a question that completely caught me off guard:

    “Can we use Parlant while turning off the generation part?”

    At first, I honestly thought it was a misunderstanding. A generative AI agent… without the generation? It sounded paradoxical.

    But I paused. And the more I considered it, the more the question started to make sense.

    The High Stakes of Customer-Facing AI

    These teams weren’t playing around with demos. Their AI agents were destined for production—interfacing directly with millions of users per month. In that kind of environment, even a 0.01% error rate isn’t acceptable. One in ten thousand bad interactions is one too many when the outcome could be compliance failures, legal risk, or brand damage.

    At this scale, “pretty good” isn’t good enough. And while LLMs have come a long way, their free-form generation still introduces uncertainty—hallucinations, unintended tone, and factual drift.

    So no, the question wasn’t absurd. It was actually pivotal.

    A Shift in Perspective

    Later that night, I kept thinking about it. The question made more sense than I had initially realized, because these organizations weren’t lacking resources or expertise.

    In fact, they had full-time Conversation Designers on staff: professionals trained to design agentic behaviors, craft interactions, and write responses that align with brand voice and legal requirements, and that get customers to actually engage with the AI, which turns out to be no easy task in practice.

    So they weren’t asking to turn off generation out of fear. They were asking because they wanted, and were equipped, to take control into their own hands.

    That’s when it hit me: we’ve been misframing what “generative AI agents” actually are.

    They’re not necessarily about open-ended token-by-token generation. They’re about being adaptive: responding to inputs in context, with intelligence. Whether those responses come directly, token-by-token, from an LLM, or from a curated response bank, doesn’t actually matter. What matters is whether they’re appropriate: compliant, contextual, clear, and useful.

    The Hidden Key to the Hallucination Problem

    Everyone is looking for a fix for hallucinations. Here’s a radical thought: we think it’s already here.

    Conversation Designers.

    With conversation designers on your team, as many enterprises already have, you’re not just mitigating output hallucinations; you’re primed to eliminate them completely.

    They also bring clarity to the customer interaction. Intentionality. An engaging voice. And they create more effective interactions than foundation LLMs can, because LLMs on their own still don’t sound quite right in customer-facing scenarios.

    So instead of trying to retrofit generative systems with band-aids, I realized: Why not bake this into Parlant from the ground up? After all, Parlant is all about design authority and control. It’s about giving the right people the tools to shape how AI behaves in the world. This was a perfect match—especially for these enterprise use cases which had so much to gain from adaptive conversations, if only they could trust them with real customers.

    From Insight to Product: Utterance Matching

    That was the breakthrough moment that led us to build Utterance Templates into Parlant.

    Utterance Templates let designers provide fluid, context-aware templates for agent responses: responses that feel natural but are fully vetted, versioned, and governed. It’s a structured way to maintain LLM-like adaptability while keeping a grip on what’s actually said.

    Under the hood, utterance templates work in a three-stage process (a rough code sketch follows the list):

    1. The agent drafts a fluid message based on the current situational awareness (interaction, guidelines, tool results, etc.)
    2. Based on the draft message, it matches the closest utterance template found in your utterance store
    3. The engine renders the matched utterance template (which is in Jinja2 format), using tool-provided variable substitutions where applicable
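
    To make the flow concrete, here is a minimal, self-contained sketch of steps 2 and 3. This is not Parlant’s actual API: the utterance store, the string-similarity matcher, and the variable names are illustrative assumptions, and a real engine would use semantic matching rather than a lexical one.

    from difflib import SequenceMatcher
    from jinja2 import Template

    # Hypothetical utterance store: vetted, versioned templates in Jinja2 format.
    UTTERANCE_STORE = [
        "Your order {{ order_id }} is on its way and should arrive by {{ eta }}.",
        "I'm sorry, I can't share account details over chat until your identity is verified.",
        "Your refund of {{ amount }} has been issued and should appear within 5 business days.",
    ]

    def match_utterance(draft: str, store: list[str]) -> str:
        # Step 2: pick the stored template closest to the agent's fluid draft.
        # A toy lexical similarity stands in for real semantic matching here.
        return max(store, key=lambda t: SequenceMatcher(None, draft.lower(), t.lower()).ratio())

    def respond(draft: str, tool_vars: dict) -> str:
        # Step 3: render the matched Jinja2 template with tool-provided variables.
        return Template(match_utterance(draft, UTTERANCE_STORE)).render(**tool_vars)

    # Step 1 would normally be the LLM drafting a message from the live context
    # (interaction, guidelines, tool results); a hard-coded draft keeps this self-contained.
    draft = "Let the customer know that order 1234 is on the way and should arrive by Tuesday."
    print(respond(draft, {"order_id": "1234", "eta": "Tuesday"}))
    # -> "Your order 1234 is on its way and should arrive by Tuesday."

    Because the rendered text always comes from the vetted store, everything the customer sees has been reviewed in advance; the LLM’s free-form draft only steers the selection.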

    We immediately knew this would fit Parlant’s hybrid model: one that gives software developers the tools to build reliable agents while letting business and interaction experts define how those agents behave. And the team at that particular enterprise saw it right away, too.

    Conclusion: Empower the Right People

    The future of conversational AI isn’t about removing people from the loop. It’s about empowering the right people to shape and continuously improve what AI says and how it says it.

    With Parlant, the answer can be: the people who know your brand, your customers, and your responsibilities best.

    And so the only thing that turned out to be absurd was my initial reaction. Turning off, or at least heavily controlling, generation in customer-facing interactions isn’t absurd. It’s most likely just how it should be. At least in our view!


    Disclaimer: The views and opinions expressed in this guest article are those of the author and do not necessarily reflect the official policy or position of Marktechpost.
