Asking questions about data has been part of Oracle Analytics through the homepage search bar for several years now. That feature used classic Natural Language Processing (NLP) to respond to questions with automatically generated visualizations. Since late 2024, Oracle Analytics can also leverage Large Language Models (LLMs) to respond to user questions and commands from within a Workbook. This brings a much-enhanced experience, thanks to the evolution of language processing from classic NLP models to LLMs. The newer feature is the AI Assistant, and while it was initially only available to larger OAC deployments, with the May 2025 update it has been made available to all OAC instances!
If you’re considering a solution that leverages Gen AI for data analytics, the AI Assistant is a good fit for enterprise-wide deployments. I will explain why.
- Leverages an enterprise semantic layer: What I like most about how the AI Assistant works is that it reuses the same data model and metadata that are already in place and that cater to various types of reporting and analytical needs. The AI Assistant adds another channel for user interaction with data, without the risks of data and metadata redundancy. As a result, whether creating reports manually or leveraging AI, everyone across the organization remains consistent in using the same KPI definitions, the same entity relationships, and the same dimensional rollup structures for reporting.
- Data Governance: This is along the same lines as my first point, but I want to stress the importance of controls when it comes to bringing the power of LLMs to data. There are many ways of leveraging Gen AI with data and some are native to the data management platforms themselves. However, implementing Gen AI data querying solutions directly within the data layer requires a closer look at security aspects of the implementation. Who will be able to get answers on certain topics? And if the topic is applicable to the one asking, how much information are they allowed to know?
The AI Assistant simply follows the same object and row level security controls that are enforced by the semantic data model.
- What about agility? Yes, governed analytics is very important. But how can people innovate and explore more effective solutions to business challenges without the ability to interact with the data that comes along with those challenges? The AI Assistant works not only with the common enterprise data model, but also with individually prepared data sets. As a result, the same AI interface caters to questions asked about enterprise data as well as departmental or individualized data sets.
- Tunability and Flexibility: Enabling the AI Assistant for organizational data, while a relatively easy task, does allow for a tailored setup. The purpose of tuning the setup is to increase reliability and accuracy. The flexibility comes into play when directing the LLM on what information to take into consideration when generating responses. This is done through a fine-tuning mechanism that designates which data entities, and which fields within those entities, can be considered.
- Support for data indexing, in addition to metadata: When tuning the AI Assistant setup, three options are available to pick from, down to the field level: Don’t Index, Index Metadata Only, and Index. With the Index option, we can include information about the actual data in a particular field so the AI Assistant is aware of that information. This can be useful, for example, for a Project Type field so the LLM is informed of the various possible values for Project Type. Consequently, the AI Assistant provides more relevant responses to questions that include specific project types as part of the prompt.
- Which LLM to use? LLMs continue to evolve, and it seems there will always be a better, more efficient, and more accurate LLM to switch to. Oracle has made the setup for the AI Assistant open, to an extent, in that it can accommodate external LLMs besides the built-in LLM that is deployed and managed by Oracle. At this time, if not using the built-in LLM, we have the option of using an OpenAI model via the OpenAI API. Why might you want to use the built-in LLM vs an OpenAI model?
- The embedded LLM is focused on the analytical data that is part of your environment. So it’s more accurate in that it is less prone to hallucinations. However, this approach doesn’t provide flexibility in terms of access to external knowledge.
- External LLMs include public knowledge (depending on what an LLM is trained on) in addition to the analytical data that is specific to your environment. This normally allows the AI Assistant to give better responses when the questions asked are broad and require public knowledge to tie into the specific data elements housed in the system. Think, for example, of geographical facts, statistics, weather, and business corporations' information. This is public information that can help in responding to analytical questions within the context of an organization's data.
- If the intent is to use an LLM but avoid the inclusion of external knowledge when generating responses, there is the option to restrict the LLM so it limits responses based on organizational data only. This approach leverages the reasoning capabilities of models without compromising the source of information for the responses.
- The Human Factor: The AI Assistant factors in the human aspect of leveraging LLMs for analytics. Having a conversation with data through natural language is for the most part straightforward when dealing with less complex data sets, because in that case the responses are more deterministic. As the data model gets more complex, there are more opportunities for misunderstanding and missed connections between what is on one's mind and an AI-generated response, let alone a visual one. This is why the AI Assistant lets an end user adjust the responses to better align with their preferences, without reiterating prompts or elongated back-and-forth conversations. These adjustments can be applied with button clicks, for example to change a visual's appearance or to change or add a filter or column, all within a chat window. And whatever visualizations the AI Assistant produces can be added to a dashboard for further adjustments and future reference.
In the next post, I will mention a few things to watch out for when implementing AI Assistant. I will also demo what it looks like to use AI Assistant for project management.