
    Solving LLM Hallucinations in Conversational, Customer-Facing Use Cases

    June 23, 2025

    Or: Why “Can we turn off generation?” might be the smartest question in generative AI

    Not long ago, I found myself in a meeting with technical leaders from a large enterprise. We were discussing Parlant as a solution for building fluent yet tightly controlled conversational agents. The conversation was going well—until someone asked a question that completely caught me off guard:

    “Can we use Parlant while turning off the generation part?”

    At first, I honestly thought it was a misunderstanding. A generative AI agent… without the generation? It sounded paradoxical.

    But I paused. And the more I considered it, the more the question started to make sense.

    The High Stakes of Customer-Facing AI

    These teams weren’t playing around with demos. Their AI agents were destined for production—interfacing directly with millions of users per month. In that kind of environment, even a 0.01% error rate isn’t acceptable. One in ten thousand bad interactions is one too many when the outcome could be compliance failures, legal risk, or brand damage.

    At this scale, “pretty good” isn’t good enough. And while LLMs have come a long way, their free-form generation still introduces uncertainty—hallucinations, unintended tone, and factual drift.

    So no, the question wasn’t absurd. It was actually pivotal.

    A Shift in Perspective

    Later that night, I kept thinking about it. The question made more sense than I had initially realized, because these organizations weren’t lacking resources or expertise.

    In fact, they had full-time Conversation Designers on staff. These are professionals trained in designing agentic behaviors, crafting interactions, and writing responses that align perfectly with brand voice and legal requirements while getting customers to actually engage with the AI, which turns out to be no easy task in practice!

    So they weren’t asking to turn off generation out of fear. They were asking because they wanted, and were able, to take matters into their own hands.

    That’s when it hit me: we’ve been misframing what “generative AI agents” actually are.

    They’re not necessarily about open-ended, token-by-token generation. They’re about being adaptive: responding to inputs in context, with intelligence. Whether those responses come token by token from an LLM or from a curated response bank doesn’t actually matter. What matters is whether they’re appropriate: compliant, contextual, clear, and useful.

    The Hidden Key to the Hallucination Problem

    Everyone is looking for a fix for hallucinations. Here’s a radical thought: we think it’s already here.

    Conversation Designers.

    With conversation designers on your team, as many enterprises already have, you’re not just mitigating output hallucinations; you’re actually primed to eliminate them completely.

    They also bring clarity to the customer interaction. Intentionality. An engaging voice. And they create more effective interactions than foundation LLMs can, because LLMs on their own still don’t sound quite right in customer-facing scenarios.

    So instead of trying to retrofit generative systems with band-aids, I realized: why not bake this into Parlant from the ground up? After all, Parlant is all about design authority and control. It’s about giving the right people the tools to shape how AI behaves in the world. This was a perfect match, especially for these enterprise use cases, which had so much to gain from adaptive conversations if only they could trust them with real customers.

    From Insight to Product: Utterance Matching

    That was the breakthrough moment that led us to build Utterance Templates into Parlant.

    Utterance Templates let designers provide fluid, context-aware templates for agent responses: responses that feel natural but are fully vetted, versioned, and governed. It’s a structured way to maintain LLM-like adaptability while keeping a grip on what’s actually said.
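
    To make that concrete, here is a minimal, hypothetical example of what a single vetted utterance template might look like. The template text, the variable names, and the surrounding Python are illustrative assumptions for this sketch; only the Jinja2 `Template` API is a real library call, and none of this is drawn from Parlant’s actual interface.

    ```python
    from jinja2 import Template

    # A hypothetical, fully vetted utterance template in Jinja2 format.
    # A Conversation Designer fixes the wording; only the variables,
    # supplied by tools at runtime, change between conversations.
    refund_template = Template(
        "Hi {{ customer_name }}, your refund of {{ amount }} was issued on "
        "{{ refund_date }} and should reach your account within "
        "{{ eta_days }} business days."
    )

    print(refund_template.render(
        customer_name="Dana",
        amount="$42.50",
        refund_date="June 20",
        eta_days=5,
    ))
    ```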

    Under the hood, utterance templates work in a three-stage process (sketched in code after the list):

    1. The agent drafts a fluid message based on the current situational awareness (interaction, guidelines, tool results, etc.)
    2. Based on the draft message, it matches the closest utterance template found in your utterance store
    3. The engine renders the matched utterance template (which is in Jinja2 format), using tool-provided variable substitutions where applicable
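
    As a rough, self-contained illustration of how those three stages could fit together, here is a Python sketch. It is not Parlant’s implementation: the draft step is a stub where a real engine would call an LLM, the utterance store is a plain list, and the matcher uses simple string similarity where a real engine would use embeddings or an LLM judge. Only the `jinja2` and `difflib` calls are real library APIs; every other name here is an assumption.

    ```python
    from difflib import SequenceMatcher
    from jinja2 import Template

    # A tiny, fully vetted utterance store (hypothetical contents).
    UTTERANCE_STORE = [
        "Hi {{ name }}, your refund of {{ amount }} was issued on {{ date }}.",
        "I'm sorry, I can't help with that request.",
        "Your order {{ order_id }} is on its way and should arrive by {{ eta }}.",
    ]

    def draft_message() -> str:
        # Stage 1: draft a fluid message from situational awareness.
        # A real engine would condition an LLM call on the interaction,
        # guidelines, and tool results; a stub keeps this runnable.
        return "Hi Dana, your refund of $42.50 was issued on June 20."

    def match_utterance(draft: str) -> str:
        # Stage 2: select the closest template in the utterance store.
        # Plain string similarity stands in for embedding search here.
        return max(
            UTTERANCE_STORE,
            key=lambda t: SequenceMatcher(None, draft.lower(), t.lower()).ratio(),
        )

    def render_utterance(template_src: str, variables: dict) -> str:
        # Stage 3: render the matched Jinja2 template with tool-provided values.
        return Template(template_src).render(**variables)

    draft = draft_message()
    matched = match_utterance(draft)
    print(render_utterance(matched, {"name": "Dana", "amount": "$42.50", "date": "June 20"}))
    ```

    The point of stage 3 is that every word the customer sees was written and approved ahead of time; the model only influences which template is chosen and which tool-provided values fill it.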

    We immediately knew this would work perfectly with Parlant’s hybrid model: one that gives software developers the tools to build reliable agents while letting business and interaction experts define how those agents behave. And the team at that particular enterprise saw right away that it would work, too.

    Conclusion: Empower the Right People

    The future of conversational AI isn’t about removing people from the loop. It’s about empowering the right people to shape and continuously improve what AI says and how it says it.

    With Parlant, the answer can be: the people who know your brand, your customers, and your responsibilities best.

    And so the only thing that turned out to be absurd was my initial response. Turning off, or at least heavily controlling, generation in customer-facing interactions isn’t absurd. It’s most likely just how it should be. At least in our view!


    Disclaimer: The views and opinions expressed in this guest article are those of the author and do not necessarily reflect the official policy or position of MarkTechPost.
