
    How to Use the SHAP-IQ Package to Uncover and Visualize Feature Interactions in Machine Learning Models Using Shapley Interaction Indices (SII)

    August 2, 2025

    In this tutorial, we explore how to use the SHAP-IQ package to uncover and visualize feature interactions in machine learning models using Shapley Interaction Indices (SII), building on the foundation of traditional Shapley values.

Shapley values are great for explaining individual feature contributions in AI models, but they fail to capture feature interactions. Shapley interactions go a step further by separating individual effects from interaction effects, offering deeper insights, such as how longitude and latitude together influence house prices. In this tutorial, we’ll get started with the shapiq package to compute and explore these Shapley interactions for any model.
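For intuition, here is the standard definition of the pairwise Shapley Interaction Index from the interaction-index literature (stated for reference; it is not spelled out in the original post). For two features $i$ and $j$ out of the full feature set $N$ with $|N| = n$, where $v(S)$ denotes the model's output when only the features in $S$ are known:

$$\mathrm{SII}(i,j) = \sum_{S \subseteq N \setminus \{i,j\}} \frac{|S|!\,(n-|S|-2)!}{(n-1)!} \Big[ v(S \cup \{i,j\}) - v(S \cup \{i\}) - v(S \cup \{j\}) + v(S) \Big]$$

A positive value means the pair contributes more together than the sum of their separate effects; a negative value means the two features partly substitute for each other.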

    Installing the dependencies

    !pip install shapiq overrides scikit-learn pandas numpy

    Data Loading and Pre-processing

In this tutorial, we’ll use the Bike Sharing dataset from OpenML. After loading the data, we’ll split it into training and testing sets to prepare it for model training and evaluation.

    import shapiq
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split
    import numpy as np
    
    # Load data
    X, y = shapiq.load_bike_sharing(to_numpy=True)
    
    # Split into training and testing
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
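Because to_numpy=True strips the column labels, it helps to recover the feature names once up front; they are reused for printing and plotting below. This sketch assumes that load_bike_sharing returns a pandas DataFrame when to_numpy is left at its default, which is an assumption about shapiq’s loader rather than something shown in the post:

# Recover column names from the DataFrame version of the dataset
# (assumption: load_bike_sharing() with default arguments returns a DataFrame)
X_df, _ = shapiq.load_bike_sharing()
feature_names = list(X_df.columns)
print(f"{len(feature_names)} features: {feature_names}")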

    Model Training and Performance Evaluation

    # Train model
    model = RandomForestRegressor()
    model.fit(X_train, y_train)
    
    # Predict
    y_pred = model.predict(X_test)
    
    # Evaluate
    mae = mean_absolute_error(y_test, y_pred)
    rmse = np.sqrt(mean_squared_error(y_test, y_pred))
    r2 = r2_score(y_test, y_pred)
    
    print(f"R² Score: {r2:.4f}")
    print(f"Mean Absolute Error: {mae:.4f}")
    print(f"Root Mean Squared Error: {rmse:.4f}")

    Setting up an Explainer

We set up a TabularExplainer using the shapiq package to compute Shapley interaction values based on the k-SII (k-order Shapley Interaction Index) method. By specifying max_order=4, we allow the explainer to consider interactions of up to 4 features simultaneously, enabling deeper insights into how groups of features collectively impact model predictions.

    # set up an explainer with k-SII interaction values up to order 4
    explainer = shapiq.TabularExplainer(
        model=model,
        data=X,
        index="k-SII",
        max_order=4
    )

    Explaining a Local Instance

We select a specific test instance (index 100) to generate local explanations. The code prints the true and predicted values for this instance, followed by a breakdown of its feature values. This helps us understand the exact inputs passed to the model and sets the context for interpreting the Shapley interaction explanations that follow.

# create explanations for different orders
# feature_names was recovered earlier from the DataFrame version of the dataset
n_features = len(feature_names)

# select a local instance to be explained
instance_id = 100
x_explain = X_test[instance_id]
y_true = y_test[instance_id]
y_pred = model.predict(x_explain.reshape(1, -1))[0]
print(f"Instance {instance_id}, True Value: {y_true}, Predicted Value: {y_pred}")
for i, feature in enumerate(feature_names):
    print(f"{feature}: {x_explain[i]}")

    Analyzing Interaction Values

We use the explainer.explain() method to compute Shapley interaction values for the selected instance (x_explain) with a budget of 256 model evaluations. This returns an InteractionValues object, which captures how individual features and their combinations influence the model’s output. Because max_order=4, the object includes interaction terms involving up to 4 features.

interaction_values = explainer.explain(x_explain, budget=256)
# analyse the returned InteractionValues object
print(interaction_values)
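To rank the strongest effects rather than reading the printed summary, we can sort all terms by absolute value. The sketch below assumes the InteractionValues object exposes a .values array and an .interaction_lookup dict mapping feature-index tuples to positions in that array; both names are assumptions about shapiq’s container, so verify them against the docs:

# Sort all interaction terms by absolute strength
# (assumes .values and .interaction_lookup on InteractionValues)
ranked = sorted(
    interaction_values.interaction_lookup.items(),
    key=lambda item: abs(interaction_values.values[item[1]]),
    reverse=True,
)
for subset, idx in ranked[:10]:
    label = " x ".join(feature_names[i] for i in subset) or "(baseline)"
    print(f"{label}: {interaction_values.values[idx]:+.3f}")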

    First-Order Interaction Values

    To keep things simple, we compute first-order interaction values—i.e., standard Shapley values that capture only individual feature contributions (no interactions).

    By setting max_order=1 in the TreeExplainer, we’re saying:

    “Tell me how much each feature individually contributes to the prediction, without considering any interaction effects.”

These values are known as standard Shapley values. For each feature, the explainer estimates its average marginal contribution to the prediction across all possible orderings of feature inclusion.
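For reference, the quantity being estimated for each feature $i$ is the classical Shapley value (standard definition, not quoted from the post):

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!} \Big[ v(S \cup \{i\}) - v(S) \Big]$$

Exact evaluation is exponential in the number of features, which is why model-agnostic explainers sample; for tree ensembles, shapiq’s TreeExplainer can instead exploit the tree structure rather than enumerating subsets.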

# reuse the feature names recovered earlier; explain the same instance
explainer = shapiq.TreeExplainer(model=model, max_order=1, index="SV")
si_order = explainer.explain(x=x_explain)
si_order
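As a quick sanity check, Shapley values satisfy the efficiency property: the baseline plus the sum of all per-feature contributions should approximately recover the model’s prediction for this instance. The attribute names .baseline_value and .values are assumptions about shapiq’s InteractionValues object:

# Efficiency check: baseline + sum of contributions ≈ model prediction
# (.baseline_value and .values are assumed attribute names)
reconstructed = si_order.baseline_value + si_order.values.sum()
print(f"baseline + contributions = {reconstructed:.3f}")
print(f"model prediction         = {y_pred:.3f}")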

    Plotting a Waterfall chart

    A Waterfall chart visually breaks down a model’s prediction into individual feature contributions. It starts from the baseline prediction and adds/subtracts each feature’s Shapley value to reach the final predicted output.

In our case, we’ll use the output of TreeExplainer with max_order=1 (i.e., individual contributions only) to visualize the contribution of each feature.

    si_order.plot_waterfall(feature_names=feature_names, show=True)

    In our case, the baseline value (i.e., the model’s expected output without any feature information) is 190.717.

    As we add the contributions from individual features (order-1 Shapley values), we can observe how each one pushes the prediction up or pulls it down:

    • Features like Weather and Humidity have a positive contribution, increasing the prediction above the baseline.
    • Features like Temperature and Year have a strong negative impact, pulling the prediction down by −35.4 and −45, respectively.

    Overall, the Waterfall chart helps us understand which features are driving the prediction, and in which direction—providing valuable insight into the model’s decision-making.
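Since the article’s focus is interactions, a natural follow-up is to visualize pairwise effects for the same instance: first-order effects on the nodes, pairwise interactions on the edges. In the sketch below, the plot_network method name is inferred from the plot_waterfall convenience method used above, so treat it as an assumption and check the shapiq docs if it differs:

# Pairwise (order-2) interactions for the same instance
explainer_pairs = shapiq.TreeExplainer(model=model, max_order=2, index="k-SII")
si_pairs = explainer_pairs.explain(x=x_explain)

# Nodes show individual effects, edges show pairwise interactions
# (plot_network is an assumed method name; verify against the docs)
si_pairs.plot_network(feature_names=feature_names, show=True)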


