
    The essential role of ‘human testers’ in leveraging generative AI for software testing

    April 14, 2025

    Ever since Generative AI captured public attention, there’s been no shortage of speculation about the future of tech jobs. Might these models displace entire roles, rendering some job categories obsolete? The thought of being replaced by AI can be unsettling. Yet, when it comes to software development and testing, generative AI is better suited to be a partner than a threat — an assistant poised to enhance human capabilities rather than replace them.

While generative AI has the potential to increase productivity and quality if used responsibly, the inverse is true if used irresponsibly. That responsibility hinges on humans maintaining control — both in directing the AI and in evaluating its outputs. Responsible AI supervision often requires domain expertise to recognize errors and hazards in the AI’s output. In skilled hands, AI can be a powerful amplifier; but in the hands of people without sufficient understanding, it can just as easily mislead, potentially resulting in undesirable outcomes.

    Generative AI’s Limitations: The Need for Critical Thinking

Generative AI’s ability to swiftly produce code snippets, test cases, and documentation has led many to regard it as an extraordinary tool capable of human feats. Yet, despite these apparent displays of “intelligence,” generative AI does not truly think. Instead, it operates on a predictive basis, choosing the next most likely word or action based on patterns in its training data. This approach often leads to “hallucinations,” where the system provides plausible-sounding but inaccurate or misleading output.
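To make this concrete, here is a minimal, hypothetical illustration of what a plausible-sounding hallucination can look like in a test suite. The `apply_discount` function and the suggested assertions are invented for this sketch; the point is that one AI-suggested expectation sounds obviously right but contradicts how Python’s `round()` actually behaves on binary floats.

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount and round to 2 decimal places."""
    return round(price * (1 - percent / 100), 2)

# A generative model might suggest these "obvious" assertions:
#
#   assert apply_discount(100.0, 2.5) == 97.50   # passes, and is correct
#   assert round(2.675, 2) == 2.68               # plausible-sounding, but wrong
#
# The second one fails: 2.675 cannot be represented exactly in binary
# floating point (it is stored as 2.67499...), so Python rounds it down.
print(round(2.675, 2))  # prints 2.67, not 2.68
```

A tester with domain knowledge of floating-point behavior catches the flawed assertion before it poisons the suite; a reviewer who trusts the AI’s confident tone does not.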

    Because it’s bound by the prompt it is given and the data on which it was trained, generative AI can miss crucial details, make incorrect assumptions, and perpetuate existing biases. It also lacks genuine creativity as it simply recognizes, replicates, and randomizes learned patterns to generate output. Furthermore, while it excels at generating human-like text, proficiency in replicating patterns in language is not the same as domain expertise; AI may appear confident while delivering fundamentally flawed recommendations. This risk is magnified by the opaque nature of models, making their internal reasoning processes difficult to understand and their errors harder to detect.

Ultimately, AI’s limitations underscore the importance of human oversight. Software makers and testers must recognize the technology’s inherent constraints, leveraging it as a helpful assistant rather than a standalone authority. By guiding these tools with contextualized critical thinking and specialized expertise, and by scrutinizing and correcting their outputs, human software practitioners can harness the benefits of generative AI while mitigating its shortcomings.

    Quality Software Requires Human Ingenuity

    Although automation can streamline many testing tasks, the broader discipline of software testing is fundamentally anchored in human judgment and expertise. After all, testing is aimed at helping deliver quality software to people. Skilled testers draw on both explicit and tacit knowledge to verify capabilities and track down potential problems. Even when using automation to extend their reach, human testers combine their knowledge, skill, experience, curiosity, and creativity to effectively test their products.

    Machines can execute test suites at high speed, but they lack the discernment to design, prioritize, and interpret tests in the context of their potential users or shifting business priorities. Human testers combine insights about the product, the project, and the people involved, balancing technical considerations and business objectives while accounting for regulatory and social implications.

    Generative AI does not fundamentally alter the nature of testing. While AI can suggest test ideas and relieve testers from repetitive tasks in ways that other automation cannot, it lacks the contextual awareness and critical thinking necessary to sufficiently evaluate software functionality, safety, security, performance, and user experience. Responsible use of generative AI in testing requires human oversight by testers who direct and check the AI. Since generative AI relies on what it was trained on and how it was prompted, human expertise remains indispensable for applying context, intent, and real-world constraints. When guided wisely, generative AI can empower skilled testers to more effectively and efficiently test their products without replacing human ingenuity.
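The “direct and check” loop described above can be sketched in code. Everything here is hypothetical scaffolding — `TestCase`, `review`, and the sample suggestions stand in for a real model API and a real test framework — but it shows the essential gate: no AI-suggested test enters the suite until a human’s domain knowledge has vetted its expectation.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    input: str
    expected: str
    approved: bool = False  # nothing runs until the tester signs off

def review(case: TestCase, domain_knowledge: dict) -> TestCase:
    """Stand-in for the human check: reject any case whose expected
    value contradicts what the tester knows about the product."""
    known = domain_knowledge.get(case.input)
    case.approved = known is None or known == case.expected
    return case

# Hypothetical AI-suggested cases for an uppercasing feature:
suggested = [
    TestCase("ascii", "abc", "ABC"),
    TestCase("sharp-s", "straße", "STRASSE"),  # correct: ß uppercases to SS
    TestCase("empty", "", "EMPTY"),            # hallucinated expectation
]

knowledge = {"": "", "straße": "STRASSE"}  # the tester's domain expertise
vetted = [review(c, knowledge) for c in suggested]
print([c.name for c in vetted if c.approved])  # prints ['ascii', 'sharp-s']
```

The AI still does the tedious generation work; the human still owns intent and correctness — which is the division of labor this section argues for.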

The Symbiotic Relationship Between Humans and AI

    The intersection of AI and human expertise has never been more promising in the world of software testing. By functioning as a supportive collaborator under the direction and correction of a skilled tester, AI can offer suggestions and perform tedious tasks — helping make testing faster, more thorough, and better attuned to people’s needs. A blend of human insight and AI-driven efficiency is the future of software testing.

    In this sense, the human plays the part of a musical conductor, interpreting the score (the requirements, both explicit and implicit) and guiding the AI to perform in a way that fits the venue (the software’s context and constraints), all while providing continuous direction and correction. Far from rendering testers obsolete, generative AI encourages us to broaden our skills. In effect, it invites testers to become more adept conductors, orchestrating AI-driven solutions that resonate with their audience, rather than focusing on a single instrument.

    Ultimately, the rise of AI in testing should not be viewed as a threat, but rather as an opportunity to elevate the testing discipline. By combining artificial intelligence with human creativity, contextual awareness, and ethical oversight, testers can help ensure that software systems are delivered with better quality, safety, and user satisfaction.

     

    The post The essential role of ‘human testers’ in leveraging generative AI for software testing appeared first on SD Times.
