
    3D modeling you can feel

    April 22, 2025

    Essential for many industries ranging from Hollywood computer-generated imagery to product design, 3D modeling tools often use text or image prompts to dictate different aspects of visual appearance, like color and form. As much as this makes sense as a first point of contact, these systems are still limited in their realism due to their neglect of something central to the human experience: touch.

    Fundamental to the uniqueness of physical objects are their tactile properties, such as roughness, bumpiness, or the feel of materials like wood or stone. Existing modeling methods often require advanced computer-aided design expertise and rarely support tactile feedback that can be crucial for how we perceive and interact with the physical world.

    With that in mind, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new system for stylizing 3D models using image prompts, effectively replicating both visual appearance and tactile properties.

    The CSAIL team’s “TactStyle” tool allows creators to stylize 3D models based on images while also incorporating the expected tactile properties of the textures. TactStyle separates visual and geometric stylization, enabling the replication of both visual and tactile properties from a single image input.

    PhD student Faraz Faruqi, lead author of a new paper on the project, says that TactStyle could have far-reaching applications, extending from home decor and personal accessories to tactile learning tools. TactStyle enables users to download a base design — such as a headphone stand from Thingiverse — and customize it with the styles and textures they desire. In education, learners can explore diverse textures from around the world without leaving the classroom, while in product design, rapid prototyping becomes easier as designers quickly print multiple iterations to refine tactile qualities.

    “You could imagine using this sort of system for common objects, such as phone stands and earbud cases, to enable more complex textures and enhance tactile feedback in a variety of ways,” says Faruqi, who co-wrote the paper alongside MIT Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group at CSAIL. “You can create tactile educational tools to demonstrate a range of different concepts in fields such as biology, geometry, and topography.”

    Traditional methods for replicating textures involve using specialized tactile sensors — such as GelSight, developed at MIT — that physically touch an object to capture its surface microgeometry as a “heightfield.” But this requires having a physical object or its recorded surface for replication. TactStyle allows users to replicate the surface microgeometry by leveraging generative AI to generate a heightfield directly from an image of the texture.
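The "heightfield" mentioned above is essentially a 2D grid of per-point surface heights. As a rough illustration only (the TactStyle pipeline *generates* this grid with a fine-tuned generative model rather than mapping pixels directly), a grayscale texture image can be naively converted into such a grid by treating brighter pixels as taller bumps:

```python
# Illustrative sketch: a "heightfield" is a 2D grid of surface heights.
# Here we fake one by mapping grayscale intensity to depth; this is NOT
# the authors' method, which learns the image-to-heightfield mapping.
import numpy as np

def image_to_heightfield(gray: np.ndarray, max_depth_mm: float = 0.5) -> np.ndarray:
    """Map pixel intensities (0-255) to surface heights in millimeters."""
    normalized = gray.astype(np.float64) / 255.0
    return normalized * max_depth_mm

# A tiny 3x3 "texture": darker pixels become shallower bumps.
texture = np.array([[0, 128, 255],
                    [128, 255, 128],
                    [255, 128, 0]], dtype=np.uint8)
heights = image_to_heightfield(texture)
print(heights.max())  # 0.5 — the brightest pixel maps to the full depth
```

The point of learning this mapping instead of using raw brightness is that visual intensity and physical relief are only loosely correlated in real textures (shadows, specular highlights, and color all confound a naive conversion).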

    On top of that, for platforms like the 3D printing repository Thingiverse, it’s difficult to take individual designs and customize them. Indeed, if a user lacks sufficient technical background, changing a design manually runs the risk of actually “breaking” it so that it can’t be printed anymore. All of these factors spurred Faruqi to wonder about building a tool that enables customization of downloadable models on a high level, but that also preserves functionality.

    In experiments, TactStyle showed significant improvements over traditional stylization methods by generating accurate correlations between a texture’s visual image and its heightfield. This enables the replication of tactile properties directly from an image. One psychophysical experiment showed that users perceive TactStyle’s generated textures as similar to both the expected tactile properties from visual input and the tactile features of the original texture, leading to a unified tactile and visual experience.

    TactStyle leverages a preexisting method, called “Style2Fab,” to modify the model’s color channels to match the input image’s visual style. Users first provide an image of the desired texture, and then a fine-tuned variational autoencoder is used to translate the input image into a corresponding heightfield. This heightfield is then applied to modify the model’s geometry to create the tactile properties.
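The final step described above, applying the heightfield to the model's geometry, can be pictured as displacement mapping: each vertex is pushed outward along its surface normal by the height sampled at its texture coordinate. The sketch below is a minimal illustration under assumed inputs (vertices, normals, and UV coordinates as NumPy arrays, nearest-neighbor sampling), not TactStyle's actual implementation:

```python
# Minimal displacement-mapping sketch (assumed data layout, not the
# TactStyle codebase): push each vertex along its normal by the height
# sampled from a 2D heightfield at the vertex's UV coordinate.
import numpy as np

def displace_vertices(vertices: np.ndarray, normals: np.ndarray,
                      uvs: np.ndarray, heightfield: np.ndarray) -> np.ndarray:
    """Offset each vertex along its unit normal by the nearest
    heightfield sample at its (u, v) texture coordinate."""
    h, w = heightfield.shape
    rows = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    cols = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    offsets = heightfield[rows, cols]            # one height per vertex
    return vertices + normals * offsets[:, None]  # displace along normals

# A flat unit square facing +z, textured by a 2x2 heightfield.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
norms = np.tile([0.0, 0.0, 1.0], (4, 1))
uvs = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
hf = np.array([[0.0, 0.2], [0.4, 0.6]])
bumped = displace_vertices(verts, norms, uvs, hf)
print(bumped[:, 2])  # each vertex raised by its sampled height
```

Real meshes would also need the surrounding surface re-tessellated finely enough for the displaced geometry to capture the texture's detail before printing.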

    The color and geometry stylization modules work in tandem, stylizing both the visual and tactile properties of the 3D model from a single image input. Faruqi says that the core innovation lies in the geometry stylization module, which uses a fine-tuned diffusion model to generate heightfields from texture images — something previous stylization frameworks do not accurately replicate.

    Looking ahead, Faruqi says the team aims to extend TactStyle to generate novel 3D models using generative AI with embedded textures. This requires exploring exactly the sort of pipeline needed to replicate both the form and function of the 3D models being fabricated. They also plan to investigate “visuo-haptic mismatches” to create novel experiences with materials that defy conventional expectations, like something that appears to be made of marble but feels like it’s made of wood.

    Faruqi and Mueller co-authored the new paper alongside PhD students Maxine Perroni-Scharf and Yunyi Zhu, visiting undergraduate student Jaskaran Singh Walia, visiting master's student Shuyue Feng, and assistant professor Donald Degraen of the Human Interface Technology (HIT) Lab NZ in New Zealand.
