    Solving LLM Hallucinations in Conversational, Customer-Facing Use Cases

    June 23, 2025

    Or: Why “Can we turn off generation” might be the smartest question in generative AI

    Not long ago, I found myself in a meeting with technical leaders from a large enterprise. We were discussing Parlant as a solution for building fluent yet tightly controlled conversational agents. The conversation was going well—until someone asked a question that completely caught me off guard:

    “Can we use Parlant while turning off the generation part?”

    At first, I honestly thought it was a misunderstanding. A generative AI agent… without the generation? It sounded paradoxical.

    But I paused. And the more I considered it, the more the question started to make sense.

    The High Stakes of Customer-Facing AI

    These teams weren’t playing around with demos. Their AI agents were destined for production—interfacing directly with millions of users per month. In that kind of environment, even a 0.01% error rate isn’t acceptable. One in ten thousand bad interactions is one too many when the outcome could be compliance failures, legal risk, or brand damage.

    At this scale, “pretty good” isn’t good enough. And while LLMs have come a long way, their free-form generation still introduces uncertainty—hallucinations, unintended tone, and factual drift.

    So no, the question wasn’t absurd. It was actually pivotal.

    A Shift in Perspective

    Later that night, I kept thinking about it. The question made more sense than I had initially realized, because these organizations weren’t lacking resources or expertise.

    In fact, they had full-time Conversation Designers on staff. These are professionals trained in designing agentic behaviors, crafting interactions, and writing responses that align with brand voice and legal requirements while getting customers to actually engage with the AI, which turns out to be no easy task in practice!

    So they weren’t asking to turn off generation out of fear—they were asking to turn it off because they wanted—and were able—to take control into their own hands.

    That’s when it hit me: we’ve been misframing what “generative AI agents” actually are.

    They’re not necessarily about open-ended token-by-token generation. They’re about being adaptive: responding to inputs in context, with intelligence. Whether those responses come directly, token-by-token, from an LLM, or from a curated response bank, doesn’t actually matter. What matters is whether they’re appropriate: compliant, contextual, clear, and useful.

    The Hidden Key to the Hallucination Problem

    Everyone is looking for a fix to hallucinations. Here’s a radical thought: we think it’s already here.

    Conversation Designers.

    If you have conversation designers on your team—as many enterprises already do—you're not just mitigating output hallucinations; you're actually primed to eliminate them completely.

    They also bring clarity to the customer interaction. Intentionality. An engaging voice. And they create more effective interactions than foundation LLMs can, because LLMs (on their own) still don't sound quite right in customer-facing scenarios.

    So instead of trying to retrofit generative systems with band-aids, I realized: Why not bake this into Parlant from the ground up? After all, Parlant is all about design authority and control. It's about giving the right people the tools to shape how AI behaves in the world. This was a perfect match—especially for these enterprise use cases, which had so much to gain from adaptive conversations, if only they could trust them with real customers.

    From Insight to Product: Utterance Matching

    That was the breakthrough moment that led us to build Utterance Templates into Parlant.

    Utterance Templates let designers provide fluid, context-aware templates for agent responses: responses that feel natural but are fully vetted, versioned, and governed. It’s a structured way to maintain LLM-like adaptability while keeping a grip on what’s actually said.
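
    For instance, a single utterance template might look like the following (the variable names are illustrative, not taken from Parlant's documentation). The wording is fixed and fully reviewable by designers, while the variables are filled in at runtime from tool results:

        Your order {{ order_id }} is on its way and should arrive by {{ eta }}.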

    Under the hood, utterance templates work in a three-stage process (a minimal sketch follows the list):

    1. The agent drafts a fluid message based on the current situational awareness (interaction, guidelines, tool results, etc.)
    2. Based on the draft message, it matches the closest utterance template found in your utterance store
    3. The engine renders the matched utterance template (which is in Jinja2 format), using tool-provided variable substitutions where applicable
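
    To make the flow concrete, here is a minimal, self-contained Python sketch of the three stages. This is not Parlant's actual API: the drafting step is stubbed with a fixed string, the matching step uses simple lexical similarity from the standard library (a real engine would likely use semantic matching), and only the Jinja2 rendering step reflects the actual template format:

        import difflib
        from jinja2 import Template

        # Stage 1 (stubbed): in practice, an LLM drafts a fluid message from the
        # current context (interaction history, guidelines, tool results, etc.).
        draft = "Your order should arrive by Tuesday, June 24."

        # The governed utterance store: every template here was written, vetted,
        # and versioned by conversation designers.
        utterance_store = [
            "Your order {{ order_id }} is on its way and should arrive by {{ eta }}.",
            "I'm sorry, I can't help with that request.",
            "Your refund of {{ amount }} has been processed.",
        ]

        # Stage 2: match the draft against the store. difflib is a stand-in for
        # whatever similarity measure the real engine uses.
        def match_utterance(draft: str, store: list[str]) -> str:
            return max(store, key=lambda t: difflib.SequenceMatcher(None, draft, t).ratio())

        matched = match_utterance(draft, utterance_store)

        # Stage 3: render the matched Jinja2 template with tool-provided variables.
        tool_variables = {"order_id": "A-1042", "eta": "Tuesday, June 24"}
        print(Template(matched).render(**tool_variables))
        # -> Your order A-1042 is on its way and should arrive by Tuesday, June 24.

    Because the customer only ever sees a rendered template, nothing the model drafts can leak through unvetted; the draft exists purely to select the right response.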

    We immediately knew this would work perfectly with Parlant's hybrid model: one that gives software developers the tools to build reliable agents while letting business and interaction experts define how those agents behave. And the team at that particular enterprise saw it right away, too.

    Conclusion: Empower the Right People​

    The future of conversational AI isn’t about removing people from the loop. It’s about empowering the right people to shape and continuously improve what AI says and how it says it.

    With Parlant, the answer can be: the people who know your brand, your customers, and your responsibilities best.

    And so the only thing that turned out to be absurd was my initial response. Turning off—or at least heavily controlling—generation in customer-facing interactions: that wasn’t absurd. It’s most likely just how it should be. At least in our view!


    Disclaimer: The views and opinions expressed in this guest article are those of the author and do not necessarily reflect the official policy or position of Marktechpost.
