    Some Commonly Used Advanced Prompt Engineering Techniques Explained Using Simple Human Analogies

    June 22, 2024

    In the rapidly developing field of Artificial Intelligence (AI), prompt engineering has become increasingly significant. As AI models grow more complex, the ability to communicate with them effectively becomes critical. In this article we explain a number of advanced prompt engineering techniques, simplifying these difficult ideas through straightforward human analogies. Each technique is discussed with examples that show how it resembles a familiar human approach to problem-solving.

    Chaining Methods

    Analogy: Solving a problem step-by-step.

    Chaining techniques are like solving a problem one step at a time: they guide the AI through a systematic procedure, much as people work through problems by breaking them into a sequence of steps. Examples include Zero-shot and Few-shot Chain-of-Thought (CoT).

    Zero-shot Chain-of-Thought 

    With Zero-shot Chain-of-Thought (CoT) prompting, Large Language Models (LLMs) demonstrate remarkable reasoning skills even when no worked examples are provided. The model receives only the question, typically followed by a trigger phrase such as “Let’s think step by step,” and is expected to generate a logical sequence of steps that leads to the solution.
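
    As a rough illustration, the entire technique amounts to appending a reasoning trigger to the question. The sketch below assumes a hypothetical call_llm helper standing in for whichever model client is used; it is not a specific library API.

        # Hypothetical stand-in for an LLM client (prompt string in, text out).
        def call_llm(prompt: str) -> str:
            return "<model response>"  # wire this to a real model client

        question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

        # Zero-shot CoT: no worked examples, just a trigger phrase asking the
        # model to reason step by step before giving its final answer.
        prompt = f"Q: {question}\nA: Let's think step by step."
        print(call_llm(prompt))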

    Few-shot Chain-of-Thought

    By giving a small number of input-output examples, few-shot prompting efficiently directs AI models and lets them pick up patterns without large amounts of training data. Few-shot CoT works well for tasks where the model needs some context but must still respond with a degree of flexibility. From a handful of worked examples, the model learns the intended methodology and can apply analogous reasoning to new situations, producing precise and contextually relevant solutions from minimal input.
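
    A minimal sketch of a few-shot CoT prompt, again assuming the hypothetical call_llm helper; the worked exemplars are made up for illustration.

        # Hypothetical stand-in for an LLM client.
        def call_llm(prompt: str) -> str:
            return "<model response>"

        # Few-shot CoT: each exemplar shows the reasoning chain as well as the
        # final answer, so the model imitates that style on the new question.
        exemplars = (
            "Q: Tom has 3 boxes of 4 apples. How many apples does he have?\n"
            "A: Each box holds 4 apples and there are 3 boxes, so 3 * 4 = 12. The answer is 12.\n\n"
            "Q: A book costs $8 and a pen costs $2. What do 2 books and 3 pens cost?\n"
            "A: Books cost 2 * 8 = 16, pens cost 3 * 2 = 6, so 16 + 6 = 22. The answer is $22.\n\n"
        )
        question = "A bus holds 40 people. How many buses are needed for 130 people?"
        print(call_llm(exemplars + f"Q: {question}\nA:"))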

    Decomposition-Based Methods

    Analogy: Breaking a complex problem into smaller sub-problems.

    Decomposition-based methods mimic how people reduce complicated problems to smaller, more manageable components. This not only simplifies the problem being solved but also enables a more in-depth and methodical analysis of each part. Examples include Least-to-Most Prompting and Question Decomposition.

    Least-to-Most Prompting

    Least-to-most prompting addresses the dilemma of easy-to-hard generalization by dividing a complex problem into simpler subproblems. The subproblems are handled sequentially, with the solution to one subproblem feeding into the next. Experiments on symbolic manipulation, compositional generalization, and mathematical reasoning tasks show that, with least-to-most prompting, models can generalize to problems harder than those shown in the prompt.
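
    The two-stage control flow can be sketched roughly as below, with the hypothetical call_llm helper and a made-up word problem: the model first proposes subproblems, then solves them in order while earlier answers stay in the context.

        # Hypothetical stand-in for an LLM client.
        def call_llm(prompt: str) -> str:
            return "<model response>"

        problem = ("Amy takes 4 minutes to climb a slide and 1 minute to slide down. "
                   "The slide closes in 15 minutes. How many times can she slide?")

        # Stage 1: ask the model to reduce the problem to simpler subproblems.
        decomposition = call_llm(
            f"Break this problem into simpler subproblems, one per line:\n{problem}"
        )
        subproblems = [line for line in decomposition.splitlines() if line.strip()]

        # Stage 2: solve the subproblems in order, appending each answer to the
        # context so later subproblems can build on earlier solutions.
        context = problem
        for sub in subproblems:
            answer = call_llm(f"{context}\n\nSubproblem: {sub}\nAnswer:")
            context += f"\n{sub}\nAnswer: {answer}"
        print(context)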

    Question Decomposition

    Question decomposition divides complicated questions into more manageable subquestions, increasing the faithfulness of the reasoning the model produces. By requiring the model to answer subquestions in separate contexts, the technique improves the precision and dependability of its logic. Making the reasoning process more transparent and faithful also helps address the problem of verifying safety and accuracy in large language models. By concentrating on simpler subquestions, the model can produce more accurate and contextually relevant replies, which matters for difficult tasks that call for in-depth and nuanced responses.
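
    A rough sketch of the idea, assuming the same hypothetical call_llm helper: each subquestion is answered in its own, separate context before the facts are recombined.

        # Hypothetical stand-in for an LLM client.
        def call_llm(prompt: str) -> str:
            return "<model response>"

        question = "Did the inventor of the telephone die before the first transatlantic flight?"

        # Step 1: decompose the question into independent subquestions.
        raw = call_llm(f"List the subquestions needed to answer:\n{question}")
        subquestions = [line for line in raw.splitlines() if line.strip()]

        # Step 2: answer each subquestion in its own, separate context, so the
        # reasoning for one cannot quietly bias another.
        facts = [call_llm(f"Answer concisely: {sub}") for sub in subquestions]

        # Step 3: recompose the final answer from the collected facts alone.
        evidence = "\n".join(f"{q} -> {a}" for q, a in zip(subquestions, facts))
        print(call_llm(f"Using only these facts:\n{evidence}\n\nAnswer: {question}"))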

    Path Aggregation Methods

    Analogy: Generating multiple options to solve a problem and choosing the best one.

    Path aggregation techniques are similar to brainstorming sessions in which several candidate ideas are developed and the best one is chosen. The method exploits the AI’s capacity to explore numerous options and select the most promising. Examples include Graph of Thoughts and Tree of Thoughts.

    Graph of Thoughts (GoT)

    Graph of Thoughts models data as an arbitrary graph to enhance prompting capabilities. In GoT, vertices are information units, sometimes known as LLM thoughts, and edges are the dependencies among these vertices. This framework makes it possible to combine different LLM ideas to produce synergistic results, strengthening ideas through feedback loops. 
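
    One way to picture this, as a sketch rather than the published GoT framework, is a small graph of thoughts kept in plain dictionaries, with an aggregation step that merges two parent thoughts into a new one; call_llm is again a hypothetical placeholder.

        # Hypothetical stand-in for an LLM client.
        def call_llm(prompt: str) -> str:
            return "<model response>"

        # A thought graph kept in plain dictionaries: vertices are LLM "thoughts",
        # edges record which earlier thoughts a new thought depends on.
        thoughts = {}   # id -> text
        edges = []      # (parent_id, child_id)

        def add_thought(tid, text, parents=()):
            thoughts[tid] = text
            edges.extend((p, tid) for p in parents)

        task = "Sort the list [5, 1, 4, 2, 8, 3, 7, 6]."
        add_thought("half_a", call_llm(f"{task} Work only on the first half."))
        add_thought("half_b", call_llm(f"{task} Work only on the second half."))

        # Aggregation: merge two partial thoughts into one improved thought;
        # this step is what distinguishes a graph from a simple chain or tree.
        merged = call_llm(
            "Merge these two sorted halves into one sorted list:\n"
            f"{thoughts['half_a']}\n{thoughts['half_b']}"
        )
        add_thought("merged", merged, parents=("half_a", "half_b"))
        print(thoughts["merged"])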

    Tree of Thoughts (ToT)

    Tree of Thoughts (ToT) is intended for difficult tasks that require look-ahead planning. ToT maintains a hierarchical tree of thoughts, in which each thought is a coherent language sequence that serves as an intermediate step toward solving the problem. Using these intermediate thoughts, the AI assesses its own progress and applies search methods such as breadth-first and depth-first search to explore solutions methodically. This systematic approach ensures a thorough exploration of possible paths and improves the AI’s ability to solve problems by allowing deliberate reasoning and backtracking.
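
    A compact breadth-first sketch of the idea, with call_llm and score_thought as hypothetical placeholders; a real ToT setup would let the model itself score each partial solution.

        # Hypothetical stand-ins for an LLM client and a thought evaluator.
        def call_llm(prompt: str) -> str:
            return "<model response>"

        def score_thought(task: str, partial: str) -> float:
            # Assumed evaluator: a real ToT setup would ask the model to rate
            # how promising the partial solution is. Here it is a constant.
            return 1.0

        task = "Write a four-line poem whose lines start with T, O, T and S."
        BRANCH, KEEP, DEPTH = 3, 2, 4

        # Breadth-first search over partial solutions ("thoughts"): expand each
        # kept state into a few candidates, score them, keep only the best.
        frontier = [""]
        for _ in range(DEPTH):
            candidates = []
            for state in frontier:
                for _ in range(BRANCH):
                    step = call_llm(f"Task: {task}\nSo far:\n{state}\nPropose the next line:")
                    candidates.append(state + step + "\n")
            frontier = sorted(candidates, key=lambda s: score_thought(task, s),
                              reverse=True)[:KEEP]
        print(frontier[0])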

    Reasoning-Based Methods

    Analogy: Reasoning through each sub-task and verifying that it was carried out correctly.

    Reasoning-based approaches stress the need not only to produce solutions but also to confirm their accuracy. This is comparable to how people check their own work for correctness and consistency. Examples include CoVe and Self-Consistency.

    Chain of Verification (CoVe)

    In Chain of Verification, an LLM evaluates its own response through a structured series of questions. First, a baseline response is produced. The model then drafts verification questions to assess how accurate that first response was. These questions are answered methodically, sometimes with the help of outside resources for confirmation. By refining the preliminary answer and correcting errors through self-verification, CoVe improves the accuracy of AI outputs.
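
    The four stages can be sketched as follows, assuming the hypothetical call_llm helper; the example query is made up.

        # Hypothetical stand-in for an LLM client.
        def call_llm(prompt: str) -> str:
            return "<model response>"

        query = "Name three politicians who were born in New York City."

        # 1. Generate a baseline response.
        baseline = call_llm(query)

        # 2. Plan verification questions that probe each claim in the draft.
        plan = call_llm(f"Question: {query}\nDraft answer: {baseline}\n"
                        "List short questions that would verify each claim:")
        checks = [line for line in plan.splitlines() if line.strip()]

        # 3. Answer each verification question independently of the draft.
        verifications = [call_llm(q) for q in checks]

        # 4. Produce a revised answer consistent with the verification results.
        evidence = "\n".join(f"{q} -> {a}" for q, a in zip(checks, verifications))
        print(call_llm(f"Question: {query}\nDraft answer: {baseline}\n"
                       f"Verification results:\n{evidence}\n"
                       "Write a corrected final answer:"))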

    Self-consistency

    Self-consistency means asking a model the same question several times and accepting the majority response as the final answer. It is applied on top of CoT prompting and improves its effectiveness: by sampling several chains of thought for the same prompt and selecting the most common answer, self-consistency yields a more dependable and accurate response.
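
    A minimal sketch, assuming a sampling-enabled call_llm helper and the common convention that each chain ends with “The answer is X”.

        from collections import Counter

        # Hypothetical stand-in for an LLM client; assumed to sample a different
        # reasoning chain on each call (e.g. a non-zero temperature).
        def call_llm(prompt: str) -> str:
            return ("3 pencils cost $1.50, so one pencil costs $0.50; "
                    "7 * 0.50 = 3.50. The answer is 3.50")

        def extract_answer(text: str) -> str:
            # Assumed convention: each chain ends with "The answer is X".
            return text.rsplit("The answer is", 1)[-1].strip(" .")

        question = "If 3 pencils cost $1.50, how much do 7 pencils cost?"
        prompt = f"Q: {question}\nA: Let's think step by step."

        # Sample several independent chains of thought, then take the majority answer.
        samples = [extract_answer(call_llm(prompt)) for _ in range(5)]
        answer, votes = Counter(samples).most_common(1)[0]
        print(answer, f"({votes}/5 votes)")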

    External Knowledge Methods

    Analogy: Using external tools and knowledge to complete a task.

    Just as humans frequently draw on outside resources to deepen their understanding and find better solutions, external knowledge methods give the AI access to additional data or tools. Examples include Chain of Knowledge (CoK) and Automatic Reasoning and Tool-use (ART).

    Chain of Knowledge (CoK)

    Chain of Knowledge (CoK) supports reasoning by building structured evidence triples (CoK-ET) from a knowledge base. CoK retrieves pertinent material with a retrieval tool, which enriches the AI’s responses with context, and it incorporates a two-factor verification process to guarantee factual truth and faithfulness. By drawing on human-inspected, enriched annotated data, CoK reduces LLM hallucinations and strengthens in-context learning. Its greater transparency and dependability make the approach suitable for applications that demand high accuracy and contextual relevance.
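
    A toy sketch of the retrieve-then-verify flow, with a hard-coded in-memory knowledge base and the hypothetical call_llm helper standing in for both the reasoner and the verifier; a real CoK setup would use an actual retriever over a curated source.

        # Hypothetical stand-in for an LLM client and a toy in-memory knowledge
        # base; a real CoK setup would retrieve from a curated external source.
        def call_llm(prompt: str) -> str:
            return "<model response>"

        knowledge_base = [
            ("Paris", "capital_of", "France"),
            ("France", "located_in", "Europe"),
        ]

        def retrieve_triples(query: str):
            # Assumed retrieval step: naive keyword match over the triples.
            return [t for t in knowledge_base if any(term in query for term in t)]

        question = "Is Paris located in Europe?"

        # Ground the prompt in explicit evidence triples before asking for reasoning.
        triples = retrieve_triples(question)
        evidence = "\n".join(f"({s}, {r}, {o})" for s, r, o in triples)
        answer = call_llm(f"Evidence triples:\n{evidence}\n\nQuestion: {question}\n"
                          "Reason using only the evidence above, then answer:")

        # Verification pass: check that the answer is supported by the evidence.
        verdict = call_llm(f"Evidence:\n{evidence}\nAnswer: {answer}\n"
                           "Is this answer fully supported by the evidence? yes/no:")
        print(answer, verdict)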

    Automatic Reasoning and Tool-use (ART)

    ART solves complicated tasks by combining intermediate reasoning steps with external tools. It selects multi-step reasoning demonstrations from a task library and uses frozen LLMs to generate the reasoning steps as a program. Whenever an external tool is needed, ART pauses generation, incorporates the tool’s output, and then resumes.
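
    A rough sketch of the pause-and-resume loop, assuming the model signals tool use with bracketed markers such as [calc: ...]; the marker format, run_tool registry, and call_llm helper are all illustrative, not the published ART implementation.

        import re

        # Hypothetical stand-in for an LLM client; assumed to emit a bracketed
        # tool request such as [calc: 3 * 365] when it wants external help.
        def call_llm(prompt: str) -> str:
            return "<model response>"

        def run_tool(kind: str, arg: str) -> str:
            # Toy tool registry: a calculator and a stubbed search tool.
            if kind == "calc":
                return str(eval(arg, {"__builtins__": {}}, {}))  # arithmetic only
            return f"<search results for {arg!r}>"

        task = "How many days are there in 3 non-leap years? Use tools if helpful."
        transcript = task

        # Generate, pause, resume: whenever the model requests a tool, run it,
        # append the result to the transcript, and let generation continue.
        for _ in range(5):
            step = call_llm(transcript)
            transcript += "\n" + step
            request = re.search(r"\[(calc|search):\s*([^\]]+)\]", step)
            if not request:
                break  # no tool request, so the model has finished
            result = run_tool(request.group(1), request.group(2))
            transcript += f"\n[result: {result}]"
        print(transcript)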

    Note: This article was inspired by this LinkedIn post.

