    A Closer Look at the AI Assistant of Oracle Analytics

    May 9, 2025

    Asking questions about data has been part of Oracle Analytics through the homepage search bar for several years now. That capability used Natural Language Processing (NLP) to respond to questions with automatically generated visualizations. What has been introduced since late 2024 is the ability to leverage Large Language Models (LLMs) to respond to user questions and commands from within a Workbook. This brings a much-enhanced experience, thanks to the evolution of language processing from classic NLP models to LLMs. The newer feature is the AI Assistant, and while it was previously available only to larger Oracle Analytics Cloud (OAC) deployments, with the May 2025 update it has been made available to all OAC instances!

    If you’re considering a solution that leverages Gen AI for data analytics, the AI Assistant is a good fit for enterprise-wide deployments. I will explain why.

    • Leverages an enterprise semantic layer: What I like most about how the AI Assistant works is that it reuses the data model and metadata that are already in place serving various reporting and analytical needs. The AI Assistant adds another channel for user interaction with data, without the risks of data and metadata redundancy. As a result, whether reports are created manually or with AI, everyone across the organization consistently uses the same KPI definitions, the same entity relationships, and the same dimensional rollup structures for reporting.
    • Data Governance: This is along the same lines as my first point, but I want to stress the importance of controls when bringing the power of LLMs to data. There are many ways to leverage Gen AI with data, and some are native to the data management platforms themselves. However, implementing Gen AI data-querying solutions directly within the data layer requires a closer look at the security aspects of the implementation. Who will be able to get answers on certain topics? And if a topic is applicable to the person asking, how much information are they allowed to see?

    The AI Assistant simply follows the same object- and row-level security controls that are enforced by the semantic data model; the sketch below illustrates the idea conceptually.
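
    To make that concrete, here is a minimal sketch of the idea, not Oracle's implementation: the roles, filter predicates, and the apply_row_level_security helper are hypothetical, and in Oracle Analytics these controls live in the semantic model rather than in user code. The point is only that a query generated by the AI Assistant passes through the same row-level filters as a manually built report.

    ```python
    # Illustrative sketch only; roles and predicates are hypothetical, not
    # Oracle Analytics internals. A query generated from a natural-language
    # prompt is subject to the same row-level security rules as a manual report.

    # Row-level security rules defined once alongside the (hypothetical) semantic model.
    ROW_LEVEL_FILTERS = {
        "REGIONAL_MANAGER": "sales.region = :user_region",
        "PROJECT_LEAD": "projects.project_id IN (:user_projects)",
        "EXECUTIVE": None,  # no row-level restriction
    }

    def apply_row_level_security(generated_sql: str, user_role: str) -> str:
        """Append the caller's row-level predicate to an AI-generated query."""
        predicate = ROW_LEVEL_FILTERS.get(user_role)
        if predicate is None:
            return generated_sql
        keyword = " AND " if " where " in generated_sql.lower() else " WHERE "
        return f"{generated_sql}{keyword}{predicate}"

    # Both the AI-generated query and a hand-built report query are filtered the
    # same way, so both channels return the same governed slice of data.
    ai_query = "SELECT region, revenue FROM sales"
    print(apply_row_level_security(ai_query, "REGIONAL_MANAGER"))
    # -> SELECT region, revenue FROM sales WHERE sales.region = :user_region
    ```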

    • What about agility? Yes, governed analytics is very important. But how can people innovate and explore more effective solutions to business challenges without the ability to interact with the data that comes along with those challenges? The AI Assistant works not only with the common enterprise data model but with individually prepared data sets as well. As a result, the same AI interface caters to questions about enterprise data as well as departmental or individual data sets.
    • Tunability and Flexibility: Enabling the AI Assistant for organizational data, while relatively easy, does allow for a tailored setup. The purpose of tuning the setup is to increase reliability and accuracy. The flexibility comes into play when directing the LLM on what information to take into consideration when generating responses, which is done by designating which data entities, and which fields within those entities, can be considered.
    • Support for data indexing, in addition to metadata: When tuning the AI Assistant setup, three options are available down to the field level: Don't Index, Index Metadata Only, and Index. With the Index option, information about the actual data in a particular field is included so the AI Assistant is aware of it. This can be useful, for example, for a Project Type field, so the LLM knows the possible values of Project Type and can provide more relevant responses to questions that mention specific project types in the prompt.
    • Which LLM to use? LLMs continue to evolve, and it seems there will always be a better, more efficient, and more accurate LLM to switch to. Oracle has made the AI Assistant setup open, to an extent, in that it can accommodate external LLMs besides the built-in LLM that is deployed and managed by Oracle. At this time, if not using the built-in LLM, we have the option of using an OpenAI model via the OpenAI API. Why might you want to use the built-in LLM versus an OpenAI model?
      • The embedded LLM is focused on the analytical data that is part of your environment, so it is more accurate in the sense that it is less prone to hallucinations. However, this approach doesn't provide access to external knowledge.
      • External LLMs bring public knowledge (depending on what the LLM was trained on) in addition to the analytical data that is specific to your environment. This normally allows the AI Assistant to produce better responses when questions are broad and require public knowledge to tie into the specific data elements housed in your system. Think, for example, of geographical facts, statistics, weather, or information about business corporations. This public information can help in responding to analytical questions within the context of an organization's data.
      • If the intent is to use an external LLM but avoid the inclusion of external knowledge when generating responses, there is the option to restrict the LLM so that it bases its responses on organizational data only (a conceptual sketch of this appears after this list). This approach leverages the reasoning capabilities of such models without compromising the source of information for the responses.
    • The Human Factor: The AI Assistant factors in the human aspect of leveraging LLMs for analytics. Having a conversation with data through natural language is for the most part straightforward when dealing with less complex data sets, because in that case the responses are more deterministic. As the data model gets more complex, there are more opportunities for misunderstanding and missed connections between what the user has in mind and the AI-generated response, let alone a visual one. This is why the AI Assistant lets an end user adjust responses to better align with their preferences, without rewording prompts or long back-and-forth conversations. These adjustments can be applied with button clicks, for example to change a visual's appearance or to change or add a filter or column, all within the chat window. And whatever visualizations the AI Assistant produces can be added to a dashboard for further adjustments and future reference.
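
    To illustrate the earlier point about restricting an external LLM to organizational data, here is a minimal sketch assuming the OpenAI Python SDK. The system prompt, model name, and the CSV snippet standing in for governed data are my own illustration; in Oracle Analytics the restriction is an option in the AI Assistant setup rather than a prompt you write yourself.

    ```python
    # Illustrative sketch only; this is not Oracle's AI Assistant integration.
    # It shows the general idea: use an external LLM's reasoning while instructing
    # it to answer strictly from the organizational data supplied in the request.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical, already security-filtered slice of organizational data.
    org_data = """project_type,project_count,avg_budget_usd
    Capital,12,1450000
    Maintenance,34,210000
    R&D,7,980000"""

    system_prompt = (
        "You are an analytics assistant. Answer ONLY from the data the user "
        "provides. If the answer is not in the data, say you cannot answer. "
        "Do not use outside knowledge."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name, not a requirement
        messages=[
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": f"Data:\n{org_data}\n\n"
                           "Question: Which project type has the highest average budget?",
            },
        ],
    )

    print(response.choices[0].message.content)
    ```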

    In the next post, I will mention a few things to watch out for when implementing the AI Assistant. I will also demonstrate what it looks like to use the AI Assistant for project management.
