    FairProof: An AI System that Uses Zero-Knowledge Proofs to Publicly Verify the Fairness of a Model while Maintaining Confidentiality

    May 24, 2024

The proliferation of machine learning (ML) models in high-stakes societal applications has sparked concerns about fairness and transparency. Instances of biased decision-making have fueled growing distrust among the consumers subject to ML-based decisions.

To address this challenge and increase consumer trust, technology that enables public verification of the fairness properties of these models is urgently needed. However, legal and privacy constraints often prevent organizations from disclosing their models, which hinders verification and opens the door to misconduct such as model swapping (quietly serving a different model than the one that was audited).

In response to these challenges, researchers from Stanford and UCSD have proposed a system called FairProof, consisting of a fairness certification algorithm and a cryptographic protocol. The algorithm evaluates the model’s fairness at a specific data point using a metric known as local Individual Fairness (IF).

    Their approach allows for personalized certificates to be issued to individual customers, making it suitable for customer-facing organizations. Importantly, the algorithm is designed to be agnostic to the training pipeline, ensuring its applicability across various models and datasets.
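As a rough illustration of what certifying local IF at a data point means, the sketch below empirically checks whether nearby inputs receive the same prediction. This is a naive sampling check, not the authors' certification algorithm; the toy linear model and the `eps` radius are assumptions for the example.

```python
# Naive empirical sketch of local Individual Fairness (IF): check whether every
# sampled point in a small L2 ball around a query x receives the same label.
# FairProof certifies this property exactly; here we only sample.
import numpy as np

def predict(weights, bias, x):
    """Toy linear classifier standing in for the confidential model."""
    return int(weights @ x + bias > 0)

def locally_fair(weights, bias, x, eps=0.1, n_samples=1000, seed=0):
    """True if all sampled perturbations of x within radius eps keep x's label."""
    rng = np.random.default_rng(seed)
    label = predict(weights, bias, x)
    for _ in range(n_samples):
        direction = rng.normal(size=x.shape)
        delta = eps * rng.uniform() * direction / np.linalg.norm(direction)
        if predict(weights, bias, x + delta) != label:
            return False
    return True

w, b = np.array([1.0, -2.0]), 0.5
x = np.array([3.0, 0.5])       # lies well away from the decision boundary
print(locally_fair(w, b, x))   # → True (boundary distance ≈ 1.12 > eps)
```

A query point closer than `eps` to the decision boundary would fail such a check; FairProof instead derives an exact certificate by leveraging robustness-certification techniques, as the article notes below.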

    Certifying local IF is achieved by leveraging techniques from the robustness literature while ensuring compatibility with Zero-Knowledge Proofs (ZKPs) to maintain model confidentiality. ZKPs enable the verification of statements about private data, such as fairness certificates, without revealing the underlying model weights. 

    To make the process computationally efficient, a specialized ZKP protocol is implemented, strategically reducing the computational overhead through offline computations and optimization of sub-functionalities.
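For intuition on the ZKP building block, the following is a classic textbook example (a non-interactive Schnorr proof via the Fiat–Shamir heuristic), not FairProof's actual protocol; the modulus and generator are toy choices for illustration. It shows the core idea: convincing a verifier that you know a secret without revealing it.

```python
# Toy non-interactive Schnorr proof via Fiat–Shamir: prove knowledge of a
# secret exponent sk with pk = g^sk mod p, without revealing sk.
import hashlib, secrets

p = 2**127 - 1   # a Mersenne prime; real deployments use much stronger groups
g = 3

def challenge(pk, t):
    """Derive the challenge by hashing the public values (Fiat–Shamir)."""
    data = f"{g}|{pk}|{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def prove(sk):
    pk = pow(g, sk, p)
    r = secrets.randbelow(p - 1)    # one-time nonce
    t = pow(g, r, p)                # commitment
    c = challenge(pk, t)
    s = (r + c * sk) % (p - 1)      # response; r masks any information about sk
    return pk, t, s

def verify(pk, t, s):
    c = challenge(pk, t)
    return pow(g, s, p) == (t * pow(pk, c, p)) % p

pk, t, s = prove(123456789)
print(verify(pk, t, s))        # → True
print(verify(pk, t, s + 1))    # → False (a tampered response is rejected)
```

The verifier learns only that the prover knows some `sk` matching `pk`; FairProof applies the same principle to far more complex statements, namely that a committed model satisfies a fairness certificate.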

Furthermore, model uniformity is ensured through cryptographic commitments: organizations publicly commit to their model weights while keeping the weights themselves confidential. Commitments, widely studied in the ML security literature, provide a means to maintain transparency and accountability while safeguarding sensitive model information.
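The commitment idea can be sketched with a plain hash commitment. This is an assumption for illustration only; FairProof's actual commitment scheme is chosen to compose with its ZKP protocol.

```python
# Hash-based commitment to model weights: publish the digest up front, keep the
# weights and nonce secret, and open the commitment later for verification.
import hashlib, secrets

def commit(weights: bytes) -> tuple[bytes, bytes]:
    """Commit to the weights; the nonce hides them until the opening."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + weights).digest(), nonce

def open_commitment(digest: bytes, weights: bytes, nonce: bytes) -> bool:
    """Anyone holding the published digest can check an opened commitment."""
    return hashlib.sha256(nonce + weights).digest() == digest

weights = b"serialized-model-v1"    # stand-in for real serialized weights
digest, nonce = commit(weights)     # digest is published; the rest stays private
print(open_commitment(digest, weights, nonce))            # → True
print(open_commitment(digest, b"swapped-model", nonce))   # → False
```

Because the published digest is binding, an organization that later serves a different model ("model swapping") is caught as soon as the commitment is opened against the swapped weights.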

    By combining fairness certification with cryptographic protocols, FairProof offers a comprehensive solution to address fairness and transparency concerns in ML-based decision-making, fostering greater trust among consumers and stakeholders alike.

Author Chhavi Yadav (@chhaviyadav_) announced on May 11, 2024 that the paper, FairProof: Confidential and Certifiable Fairness for Neural Networks (https://t.co/Q9RvmWQhJ1), received a Best Paper Award at the Privacy-ILR Workshop at ICLR, and that it will also be presented at ICML (slides: https://t.co/YBDq6FbAhQ).

    The post FairProof: An AI System that Uses Zero-Knowledge Proofs to Publicly Verify the Fairness of a Model while Maintaining Confidentiality appeared first on MarkTechPost.
