    3 Questions: Inverting the problem of design

    November 12, 2024

    The process of computational design in mechanical engineering often begins with a problem or a goal, followed by an assessment of literature, resources, and systems available to address the issue. The Design Computation and Digital Engineering (DeCoDE) Lab at MIT instead explores the bounds of what is possible.

    Working with the MIT-IBM Watson AI Lab, the group’s lead, ABS Career Development Assistant Professor Faez Ahmed, and graduate student Amin Heyrani Nobari in the Department of Mechanical Engineering are combining machine learning and generative AI techniques, physical modeling, and engineering principles to tackle design challenges and enhance the creation of mechanical systems. One of their projects, Linkages, investigates ways planar bars and joints can be connected to trace curved paths. Here, Ahmed and Nobari describe their recent work. 

    Q: How is your team approaching mechanical engineering questions from the standpoint of observations?

    Ahmed: The question we have been thinking about is: How can generative AI be used in engineering applications? A key challenge there is incorporating precision into generative AI models. Now, in the specific work that we have been exploring there, we are using this idea of self-supervised contrastive learning approaches, where effectively we are learning these linkage and curve representations of design, or what the design looks like, and how it works.

    This ties very closely with the idea of automated discovery: Can we actually discover new products with AI algorithms? Another comment on the broader picture: one of the key ideas, specifically with linkages but broadly around generative AI and large language models, is that all of these are the same family of models we are looking at, and precision really plays a big role in all of them. So the learnings we have from these types of models, where you have some form of data-driven learning assisted by engineering simulators and joint embeddings of design and performance, can potentially translate to other engineering domains as well. What we are showing is a proof of concept. Then people can take it and apply it to designing ships and aircraft, to precise image generation problems, and so on.

    In the case of linkages, your design looks like a set of bars and how they are connected. How it works is basically the path they would trace as they move, and we learn these joint representations. So, there's your primary input: somebody will come and draw some path, and you're trying to generate a mechanism that can trace that. That enables us to solve the problem much more precisely and significantly faster, with 28 times less error and 20 times faster than prior state-of-the-art approaches.
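
    To make the "design versus behavior" distinction concrete, here is a minimal, self-contained sketch (not the lab's code) of a planar four-bar linkage: the design is a set of bar lengths and fixed pivots, and its behavior is the coupler curve traced as the crank rotates. The bar lengths and the choice of coupler point are arbitrary illustrative values.

    ```python
    import numpy as np

    def circle_intersection(p0, r0, p1, r1):
        """Return one intersection point of two circles, or None if they don't meet."""
        d = np.linalg.norm(p1 - p0)
        if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
            return None
        a = (r0**2 - r1**2 + d**2) / (2 * d)
        h = np.sqrt(max(r0**2 - a**2, 0.0))
        along = (p1 - p0) / d
        perp = np.array([-along[1], along[0]])
        return p0 + a * along + h * perp  # pick one of the two branches

    def coupler_curve(a=1.0, b=2.5, c=2.0, d=2.5, n=360):
        """Trace the coupler-point path of a planar four-bar linkage.
        a: crank length, b: coupler length, c: rocker length, d: distance between ground pivots."""
        ground_a = np.array([0.0, 0.0])   # fixed pivot of the crank
        ground_d = np.array([d, 0.0])     # fixed pivot of the rocker
        path = []
        for theta in np.linspace(0.0, 2 * np.pi, n):
            joint_b = ground_a + a * np.array([np.cos(theta), np.sin(theta)])
            joint_c = circle_intersection(joint_b, b, ground_d, c)
            if joint_c is None:           # crank angle unreachable for this geometry
                continue
            path.append((joint_b + joint_c) / 2)  # coupler point: midpoint of the floating bar
        return np.array(path)

    curve = coupler_curve()
    print(curve.shape)  # (number of reachable crank angles, 2)
    ```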

    Q: Tell me about the linkages method and how it compares to other similar methods.

    Nobari: The contrastive learning happens between the mechanisms, which are represented as graphs: basically, each joint is a node in the graph, and each node includes some features. The features are the position of the joint in space and the type of joint, whether it's a fixed joint or a free joint.

    We have an architecture that takes into account some of the basic underlying things when it comes to the description of the kinematics of a mechanism, but it’s essentially a graph neural network that computes embeddings for these mechanism graphs. Then, we have another model that takes as inputs these curves and creates an embedding for that, and we connect these two different modalities using contrastive learning.
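
    As a rough illustration of how the two modalities might be tied together, the sketch below pairs a hypothetical graph encoder for mechanisms with a curve encoder and trains them with a symmetric contrastive, InfoNCE-style loss. The encoder names, batch conventions, and temperature are placeholder assumptions, not details taken from the paper.

    ```python
    import torch
    import torch.nn.functional as F

    # Assumed interfaces (placeholders, not the lab's code):
    #   mech_emb  = graph_encoder(mechanism_graphs)  -> [B, D] embeddings
    #   curve_emb = curve_encoder(traced_curves)     -> [B, D] embeddings

    def contrastive_loss(mech_emb: torch.Tensor, curve_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
        """Symmetric InfoNCE loss: pull each mechanism embedding toward the
        embedding of the curve it traces, and push it away from the other
        curves in the batch (and vice versa)."""
        mech_emb = F.normalize(mech_emb, dim=-1)
        curve_emb = F.normalize(curve_emb, dim=-1)
        logits = mech_emb @ curve_emb.t() / temperature          # [B, B] similarities
        targets = torch.arange(logits.size(0), device=logits.device)
        loss_m2c = F.cross_entropy(logits, targets)              # mechanism -> curve
        loss_c2m = F.cross_entropy(logits.t(), targets)          # curve -> mechanism
        return (loss_m2c + loss_c2m) / 2

    # At inference time, a user-drawn target curve is embedded with the curve
    # encoder, and candidate mechanisms are retrieved by nearest-neighbour
    # search over precomputed mechanism embeddings.
    ```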

    Then, this contrastive learning framework that we train is used to find new mechanisms, but obviously we care about precision as well. On top of any candidate mechanisms that are identified, we also have an additional optimization step, where those mechanisms are further optimized to get as close as possible to the target curves.

    If you've got the combinatorial part right, and you're quite close to where you need to be to get to the target curve, you can do direct gradient-based optimization and adjust the position of the joints to get super-precise performance. That's a very important aspect for it to work.
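
    Below is a minimal sketch of that refinement step, under the assumption that a differentiable simulator trace_path(joint_positions) is available that maps the joint coordinates of a fixed-topology mechanism to the curve it traces; the loss here is a Chamfer distance between point sets, which may differ from the objective used in the actual work.

    ```python
    import torch

    def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        """Symmetric Chamfer distance between two point sets of shape [N, 2] and [M, 2]."""
        d = torch.cdist(a, b)                                    # [N, M] pairwise distances
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

    def refine(joint_positions: torch.Tensor, target_curve: torch.Tensor,
               trace_path, steps: int = 500, lr: float = 1e-2) -> torch.Tensor:
        """Gradient-based refinement of joint positions so the traced curve
        approaches the target curve. `trace_path` is a placeholder for a
        differentiable forward-kinematics simulator (an assumption here)."""
        params = joint_positions.clone().requires_grad_(True)
        opt = torch.optim.Adam([params], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = chamfer_distance(trace_path(params), target_curve)
            loss.backward()
            opt.step()
        return params.detach()
    ```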

    Examples include tracing the letters of the alphabet, which are very hard to achieve with existing methods. Other machine learning-based methods are often not even able to do this kind of thing, because they are only trained on four-bar or six-bar mechanisms, which are very small. But what we've been able to show is that even with a relatively small number of joints, you can get very close to those curves.

    Before this, we didn't know what the limits of design capabilities were with a single linkage mechanism. It's a very hard question to answer. Can you really write the letter M? No one has ever done that, and the mechanism is so complex and so rare that it's like finding a needle in a haystack. But with this method, we show that it is possible.

    We've looked into using off-the-shelf generative models for graphs. Generally, generative models for graphs are very difficult to train, and they're usually not very effective, especially when it comes to mixing in continuous variables that have very high sensitivity to what the actual kinematics of a mechanism will be. At the same time, you have all these different ways of combining joints and linkages. These models simply cannot generate effectively.

    The complexity of the problem, I think, is more obvious when you look at how people approach it with optimization. With optimization, this becomes a mixed-integer, nonlinear problem. Using some simple bi-level optimizations or even simplifying the problem down, they basically create approximations of all the functions, so that they can use mixed-integer conic programming to approach the problem. The combinatorial space combined with the continuous space is so big that they can basically go up to seven joints. Beyond that, it becomes extremely difficult, and it takes two days to create one mechanism for one specific target. If you were to do this exhaustively, it would be very difficult to actually cover the entire design space. This is where you can’t just throw deep learning at it without trying to be a little more clever about how you do that.

    The state-of-the-art deep learning-based approaches use reinforcement learning. Given a target curve, they start building these mechanisms more or less randomly, basically a Monte Carlo optimization type of approach. The measure for this is directly comparing the curve that a mechanism traces with the target curve that is input to the model, and we show that our model performs about 28 times better than that. It takes 75 seconds for our approach, while the reinforcement learning-based approach takes 45 minutes. The optimization approach has to run for more than 24 hours, and it doesn't converge.

    I think we have reached the point where we have a very robust proof of concept with the linkage mechanisms. It’s a complicated enough problem that we can see conventional optimization and conventional deep learning alone are not enough.

    Q: What’s the bigger picture behind the need to develop techniques like linkages that allow for the future of human-AI co-design?

    Ahmed: The most obvious one is the design of machines and mechanical systems, which is what we've already shown. Having said that, I think a key contribution of this work is that it's a discrete and continuous space that we are learning. If you think about the linkages that are out there and how they are connected to each other, that's a discrete space: either you are connected or not connected, 0 or 1. But where each node is, is a continuous space that can vary; you can be anywhere in the space. Learning over these discrete and continuous spaces is an extremely challenging problem. Most of the machine learning we see, like in computer vision, is only continuous, while language is mostly discrete. By showing this discrete and continuous system, I think the key idea generalizes to many engineering applications, from metamaterials to complex networks to other types of structures, and so on.
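
    One way to picture this mixed discrete/continuous design space (purely illustrative, not the lab's data format): the topology of a linkage is a binary adjacency matrix over joints, the joint coordinates are continuous, and a boolean mask marks which joints are grounded.

    ```python
    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class LinkageDesign:
        """A planar linkage as a mixed discrete/continuous object (illustrative)."""
        connectivity: np.ndarray  # binary adjacency matrix over joints (discrete: 0 or 1)
        positions: np.ndarray     # (n_joints, 2) joint coordinates (continuous)
        fixed: np.ndarray         # boolean mask of grounded joints (discrete)

    # A four-bar linkage: four joints, two of them grounded.
    design = LinkageDesign(
        connectivity=np.array([[0, 1, 0, 0],
                               [1, 0, 1, 0],
                               [0, 1, 0, 1],
                               [0, 0, 1, 0]]),
        positions=np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 1.5], [2.5, 0.0]]),
        fixed=np.array([True, False, False, True]),
    )
    ```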

    There are steps that we are thinking about immediately, and a natural question is around more complex mechanical systems and more physics: for example, you start adding different forms of elastic behavior. Then, you can also think about different types of components. We are also thinking about how precision in large language models can be incorporated, and some of the learnings will transfer there. We're thinking about making these models generative. Right now, they are, in some sense, retrieving mechanisms from a dataset and then optimizing them, while generative models will generate these mechanisms directly. We are also exploring end-to-end learning, where the optimization step is not needed.

    Nobari: There are a few places in mechanical engineering where linkages are used, and there are very common applications of this kind of inverse kinematic synthesis where this would be useful. A couple that come to mind are, for example, car suspension systems, where you want a specific motion path for your overall suspension mechanism. Usually, that is modeled in 2D with planar models of the overall suspension mechanism.

    I think that the next step, and what is ultimately going to be very useful, is demonstrating the same framework, or a similar framework, for other complicated problems that involve combinatorial and continuous values.

    These problems include one of the things that I've been looking into: compliant mechanisms. For example, instead of these discrete rigid linkages, with continuum mechanics you would have a distribution of material and motion, and one part of the material deforms the rest of the material to give you a different kind of motion.

    Compliant mechanisms are used in a bunch of different places, sometimes in precision machines as fixture mechanisms, where you want a specific piece to be held in place by a mechanism that fixtures it consistently and with very high precision. If you could automate a lot of that with this kind of framework, it would be very useful.

    These are all difficult problems that involve both combinatorial design variables and continuous design variables. I think that we are very close to that, and ultimately that will be the final stage.

    This work was supported, in part, by the MIT-IBM Watson AI Lab.
