    Citations with Amazon Nova understanding models

    August 14, 2025

    Large language models (LLMs) have become increasingly prevalent across both consumer and enterprise applications. However, their tendency to “hallucinate” information and deliver incorrect answers with seeming confidence has created a trust problem. Think of LLMs as you would a human expert: we typically trust experts who can back up their claims with references and walk us through their reasoning process. The same principle applies to LLMs – they become more trustworthy when they can demonstrate their thought process and cite reliable sources for their information. Fortunately, with proper prompting, LLMs can be instructed to provide these citations, making their outputs more verifiable and dependable.

    In this post, we demonstrate how to prompt Amazon Nova understanding models to cite sources in their responses. We also walk through how to evaluate those responses (and their citations) for accuracy.

    What are citations and why are they useful? 

    Citations are references to sources that indicate where specific information, ideas, or concepts in a work originated. Citations play a crucial role in the following areas, enhancing the credibility, usability, and ethical grounding of LLM-based applications.

    1. Ensuring factual accuracy: LLMs are prone to “hallucinations,” where they generate plausible but incorrect information. Citations allow users to verify claims by tracing them back to reliable sources, improving factual correctness and reducing misinformation risks.
    2. Building trust and transparency: Citations foster trust in AI-generated content by letting users cross-check information and understand its origins. This transparency is vital for applications in research, healthcare, law, and education.
    3. Supporting ethical practices: Citing sources ensures proper attribution to original authors, respecting intellectual property rights and scholarly contributions. It prevents plagiarism and promotes ethical AI use.
    4. Enhancing usability: Citations improve user experience by providing a pathway to explore related materials. Features like inline citations or bibliographies help users find relevant sources easily.
    5. Addressing limitations of LLMs: LLMs often fabricate references due to their inability to access real-time data or remember training sources accurately. Retrieval Augmented Generation (RAG) systems and citation tools mitigate this issue by grounding responses in external data.
    6. Professional and academic standards: In academic contexts, citations are indispensable for replicating research methods and validating findings. AI-generated outputs must adhere to these standards to maintain scholarly integrity.

    Citations with Amazon Nova models

    Amazon Nova, launched in December 2024, is a new generation of foundation models that deliver frontier intelligence and industry-leading price performance, available on Amazon Bedrock. Amazon Nova models include four understanding models (Nova Micro, Nova Lite, Nova Pro, and Nova Premier), two creative content generation models (Nova Canvas and Nova Reel), and one speech-to-speech model (Nova Sonic). Through seamless integration with Amazon Bedrock, developers can build and scale generative AI applications with Amazon Nova foundation models.

    Citations with the Amazon Nova understanding models can be achieved by crafting prompts that instruct the model to cite its sources and specify the response format. To illustrate this, we’ve picked an example where we ask Nova Pro questions about Amazon shareholder letters. We include the shareholder letter in the prompt as context and ask Nova Pro to answer questions and include citations from the letter(s).

    Here’s an example prompt that we constructed for Amazon Nova Pro, following the prompt engineering best practices for Amazon Nova.

    Note the output format that we included in the prompt to distinguish the actual answers from the citations.

    System prompt 

    ##Instruction
    You are a QA agent. You answer questions based on the context provided. 
    You will answer the question and also include exact excerpts from the context and 
    quote them as quotes. If you are unable to answer the question based on the context, 
    your response should be “I’m sorry I cannot answer based on the context”.
    
    ##Examples: 
    Question: What factors contributed to the growth of Amazon 
    Quotes: [1] Our
    vision for Kindle is every book ever printed in any language, all available in less 
    than 60 seconds.
    Publishers—including all the major publishers—have embraced Kindle, and we’re thankful 
    for that. From a
    publisher’s point of view, there are a lot of advantages to Kindle. Books never go out 
    of print, and they never go
    out of stock. Nor is there ever waste from over-printing. Most important, Kindle makes 
    it more convenient for
    readers to buy more books. Anytime you make something simpler and lower friction, you 
    get more of it.
    Answer: Innovation with Kindle and publisher collaboration contributed to the growth of 
    Amazon [1] 
    
    ##Output Format 
    Quotes: 
    [1] ....
    [2] ...
    Answer:
    

    User Prompt

    Question: What contributed to net sales?
    Context: <Amazon 2009 shareholder letter... document not shown for brevity> 

    Here’s the response from Nova Pro for the above prompt:

    Quotes: 
    [1] "Net sales increased 28% year-over-year to $24.51 billion in 2009." 
    [2] "More customers are taking advantage of Amazon Prime, with worldwide memberships 
    up significantly over last year." 
    [3] "We added 21 new product categories around the world in 2009..." 
    [4] "It was a busy year for our shoes business. In November we acquired Zappos..." 
    [5] "Sales of products by third party sellers on our websites represented 30% of unit
     sales in 2009." 
    [6] "Amazon Web Services continued its rapid pace of innovation, launching many new 
    services and features..." 
    [7] "The U.S. Kindle Store now has more than 460,000 books..."
    
    Answer: Several factors contributed to the increase in net sales for Amazon in 2009. 
    These include the significant growth in Amazon Prime memberships [2], the addition 
    of 21 new product categories globally [3], the acquisition of Zappos which enhanced 
    the shoes business [4], and the strong performance of third-party sellers on Amazon's 
    platforms [5]. Additionally, the innovation and expansion of Amazon Web Services (AWS) 
    attracted more customers [6], and the growth of the Kindle Store with a larger selection 
    of books and content [7] also played a role. These combined efforts resulted in a 28% 
    year-over-year increase in net sales to $24.51 billion [1].

    As you can see, Nova Pro follows our instructions and provides the answer along with the citations. We verified that the quotes are indeed present in the 2009 shareholder letter.
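    If you want to reproduce this flow programmatically, here is a minimal sketch (not from the original post) that sends the same system and user prompts to Nova Pro through the Amazon Bedrock Converse API with boto3. The region, model ID, and inference parameters are assumptions and may need adjusting for your account.

    import boto3

    # Assumed region and model ID; adjust for your account and region.
    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    SYSTEM_PROMPT = """##Instruction
    You are a QA agent. You answer questions based on the context provided.
    ... (rest of the system prompt shown above) ..."""

    user_prompt = (
        "Question: What contributed to net sales?\n"
        "Context: <Amazon 2009 shareholder letter text>"
    )

    response = bedrock_runtime.converse(
        modelId="amazon.nova-pro-v1:0",
        system=[{"text": SYSTEM_PROMPT}],
        messages=[{"role": "user", "content": [{"text": user_prompt}]}],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
    )

    # The model returns the Quotes section followed by the cited answer as plain text.
    print(response["output"]["message"]["content"][0]["text"])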

    Here’s another user prompt (with the same system prompt as above) along with the model’s response.

    User Prompt:

    Question: How does Bezos describe Amazon's approach to failure, and how does he 
    tie it to innovation?
    Context: <Amazon 2006 shareholder letter> 

    Model response

    Quotes: 
    
    [1] "We won’t always choose right, and we won’t always succeed. But we will be 
    choosy, and we will work hard and patiently."
    
    [2] "In my view, Amazon’s culture is unusually supportive of small businesses 
    with big potential, and I believe that’s a source of competitive advantage."
    
    [3] "The culture demands that these new businesses be high potential and that 
    they be innovative and differentiated, but it does not demand that they be large 
    on the day that they are born."
    
    Answer: 
    
    Bezos describes Amazon's approach to failure as one that is inherently tied to 
    innovation and experimentation. He acknowledges that not every venture will be 
    successful, stating, "We won’t always choose right, and we won’t always succeed" 
    [1]. However, he emphasizes the importance of being selective and persistent, 
    working hard and patiently despite potential failures. 
    
    Bezos ties this approach to innovation by highlighting Amazon's supportive culture 
    towards small, high-potential businesses. He notes, "In my view, Amazon’s culture 
    is unusually supportive of small businesses with big potential, and I believe that’s 
    a source of competitive advantage" [2]. This culture allows for the nurturing of 
    innovative ideas, even if they start small. He further explains that the company’s 
    culture demands innovation and differentiation in new businesses, but does not 
    require them to be large from the outset [3]. This patient and experimental approach 
    fosters an environment where failure is seen as a part of the innovation process, 
    rather than a deterrent.

    Evaluating citations 

    While citations are useful, it’s important to verify that the model is following our instructions and quoting citations verbatim from the context rather than making them up.
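    Before turning to LLM-as-a-judge, a lightweight programmatic check (not part of the original workflow) can catch fabricated quotes: confirm that each returned quote appears verbatim in the context. A minimal sketch, assuming the response follows the Quotes/Answer output format defined in the system prompt:

    import re

    def extract_quotes(response_text: str) -> list[str]:
        """Pull the quoted excerpts ([1] "...", [2] "...") out of the model response."""
        return re.findall(r'\[\d+\]\s*"([^"]+)"', response_text)

    def verify_quotes(response_text: str, context: str) -> dict[str, bool]:
        """For each extracted quote, report whether it appears verbatim in the context."""
        def normalize(text: str) -> str:
            # Collapse whitespace so line wrapping does not cause false negatives.
            return " ".join(text.split())
        normalized_context = normalize(context)
        return {
            quote: normalize(quote.rstrip(". ")) in normalized_context
            for quote in extract_quotes(response_text)
        }

    # Example usage with the response and shareholder letter text from above:
    # results = verify_quotes(model_response, shareholder_letter)
    # fabricated = [q for q, ok in results.items() if not ok]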

    To evaluate the citations at scale, we used another LLM to judge the responses from Amazon Nova Pro. We used the LLM-as-a-judge technique in Amazon Bedrock evaluations and evaluated 10 different prompts. LLM-as-a-judge on Amazon Bedrock Model Evaluation provides a comprehensive, end-to-end solution for assessing and optimizing AI model performance. This automated process uses the power of LLMs to evaluate responses across multiple metric categories (such as correctness, completeness, harmfulness, helpfulness and more) offering insights that can significantly improve your AI applications.

    We prepared the input dataset for evaluation. The input dataset is a JSONL file containing the prompts we want to evaluate; each line is a JSON object made up of key-value pairs. Here are the required and optional fields for the input dataset:

    • prompt (required): This key indicates the input for various tasks. It can be used for general text generation where the model needs to provide a response, question-answering tasks where the model must answer a specific question, text summarization tasks where the model needs to summarize a given text, or classification tasks where the model must categorize the provided text.
    • referenceResponse (optional – used for specific metrics with ground truth): This key contains the ground truth or correct response. It serves as the reference point against which the model’s responses will be evaluated if it is provided.
    • category (optional): This key is used to generate evaluation scores reported by category, helping organize and segment evaluation results for better analysis.

    Here’s an example jsonl file for evaluating our prompts (full jsonl file not shown for brevity).

    {
    	"prompt": "##Model Instructions\nYou are a QA agent. You answer questions 
    based on the context provided. You will answer the question and also include exact 
    excerpts from the context and quote them as quotes.\n ##Examples: \nQuestion: What 
    factors contributed to the growth of Amazon\nQuotes: [1] Our vision for Kindle is 
    every book ever printed in any language, all available in less than 60 seconds. 
    Publishers—including all the major publishers—have embraced Kindle, and we're thankful 
    for that. From a publisher’s point of view, there are a lot of advantages to Kindle. 
    Books never go out of print, and they never go out of stock. Nor is there ever waste 
    from over-printing. Most important, Kindle makes it more convenient for readers to buy 
    more books. Anytime you make something simpler and lower friction, you get more of it.\n 
    Answer: Innovation with Kindle and publisher collaboration contributed to the growth of 
    Amazon [1]\n\n ##Output Format\nQuotes: [1] ....\n[2] ...\n\n Answer: \n\nQuestion: How 
    does Bezos describe Amazon's approach to failure, and how does he tie it to innovation?\n 
    Context: <Amazon shareholder letter... not included here for brevity>"
    }
    {
    	"prompt": ...
    }
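    If you prefer to build this file in code, here is a small sketch (not from the original post) that assembles the evaluation prompts and writes them as JSONL. The file names, questions, and letter paths are hypothetical placeholders.

    import json

    # Hypothetical inputs: (question, shareholder-letter text) pairs to evaluate.
    evaluation_cases = [
        ("What contributed to net sales?", open("letters/2009.txt").read()),
        ("How does Bezos describe Amazon's approach to failure, and how does he tie it to innovation?",
         open("letters/2006.txt").read()),
    ]

    SYSTEM_INSTRUCTIONS = "##Model Instructions You are a QA agent. ..."  # system prompt shown earlier

    with open("citation_eval_prompts.jsonl", "w") as f:
        for question, context in evaluation_cases:
            record = {
                # Required: the full prompt, including instructions, question, and context.
                "prompt": f"{SYSTEM_INSTRUCTIONS}\nQuestion: {question}\nContext: {context}",
                # Optional: a category used to segment the evaluation report.
                "category": "shareholder-letters",
            }
            f.write(json.dumps(record) + "\n")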

    We then started a model evaluation job using the Amazon Bedrock API, with Anthropic Claude 3.5 Sonnet v1 as the evaluator (judge) model. We have open-sourced our code on the AWS Samples GitHub.
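    As a rough illustration (not the exact code from the AWS Samples repository), an LLM-as-a-judge evaluation job can be created with the Bedrock control-plane API along these lines. The role ARN, S3 URIs, model identifiers, and metric names below are assumptions; check the Amazon Bedrock evaluation documentation for the exact values supported in your region.

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")

    bedrock.create_evaluation_job(
        jobName="nova-pro-citation-eval",
        roleArn="arn:aws:iam::111122223333:role/BedrockEvaluationRole",  # assumed IAM role
        evaluationConfig={
            "automated": {
                "datasetMetricConfigs": [{
                    "taskType": "QuestionAndAnswer",
                    "dataset": {
                        "name": "citation-prompts",
                        "datasetLocation": {"s3Uri": "s3://my-eval-bucket/citation_eval_prompts.jsonl"},
                    },
                    # Built-in judge metrics; exact identifiers may differ, see the Bedrock docs.
                    "metricNames": [
                        "Builtin.Correctness", "Builtin.Completeness", "Builtin.Faithfulness",
                        "Builtin.Helpfulness", "Builtin.Coherence", "Builtin.Relevance",
                        "Builtin.FollowingInstructions", "Builtin.ProfessionalStyleAndTone",
                        "Builtin.Readability", "Builtin.Harmfulness",
                    ],
                }],
                # Claude 3.5 Sonnet acts as the judge model.
                "evaluatorModelConfig": {
                    "bedrockEvaluatorModels": [
                        {"modelIdentifier": "anthropic.claude-3-5-sonnet-20240620-v1:0"}
                    ]
                },
            }
        },
        # Nova Pro generates the responses being judged.
        inferenceConfig={
            "models": [{"bedrockModel": {"modelIdentifier": "amazon.nova-pro-v1:0"}}]
        },
        outputDataConfig={"s3Uri": "s3://my-eval-bucket/results/"},
    )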

    We evaluated our prompts and responses for the following built-in metrics:

    1. Helpfulness
    2. Correctness
    3. Professional style and tone
    4. Faithfulness
    5. Completeness
    6. Coherence
    7. Following instructions
    8. Relevance
    9. Readability
    10. Harmfulness

    Here’s the result summary of our evaluation. Nova Pro scored 0.78 on coherence and faithfulness and 0.67 on correctness. These scores indicate that Nova Pro’s responses were useful, complete, and accurate while remaining coherent, as evaluated by Claude 3.5 Sonnet.

    Conclusion

    In this post, we walked through how to prompt Amazon Nova understanding models to cite sources from the provided context using simple instructions. Amazon Nova’s ability to include citations in its responses shows how a small amount of prompting can lead to more reliable and trustworthy AI interactions. Evaluating these citations with an LLM-as-a-judge technique further underscores the importance of assessing the quality and faithfulness of AI-generated responses. To learn more about prompting for Amazon Nova models, visit the Amazon Nova prompt library. You can learn more about Amazon Bedrock evaluations on the AWS website.


    About the authors

    Sunita Koppar is a Senior Specialist Solutions Architect in Generative AI and Machine Learning at AWS, where she partners with customers across diverse industries to design solutions, build proof-of-concepts, and drive measurable business outcomes. Beyond her professional role, she is deeply passionate about learning and teaching Sanskrit, actively engaging with student communities to help them upskill and grow.

    Veda Raman is a Senior Specialist Solutions Architect for generative AI and machine learning at AWS. Veda works with customers to help them architect efficient, secure, and scalable machine learning applications. Veda specializes in generative AI services like Amazon Bedrock and Amazon SageMaker.
