
    Robotic helper making mistakes? Just nudge it in the right direction

    June 10, 2025

    Imagine that a robot is helping you clean the dishes. You ask it to grab a soapy bowl out of the sink, but its gripper slightly misses the mark.

    Using a new framework developed by MIT and NVIDIA researchers, you could correct that robot’s behavior with simple interactions. The method would let you point to the bowl, trace a trajectory to it on a screen, or simply give the robot’s arm a nudge in the right direction.

    Unlike other methods for correcting robot behavior, this technique does not require users to collect new data and retrain the machine-learning model that powers the robot’s brain. It enables a robot to use intuitive, real-time human feedback to choose a feasible action sequence that gets as close as possible to satisfying the user’s intent.

    When the researchers tested their framework, its success rate was 21 percent higher than an alternative method that did not leverage human interventions.

    In the long run, this framework could enable a user to more easily guide a factory-trained robot to perform a wide variety of household tasks even though the robot has never seen their home or the objects in it.

    “We can’t expect laypeople to perform data collection and fine-tune a neural network model. The consumer will expect the robot to work right out of the box, and if it doesn’t, they would want an intuitive mechanism to customize it. That is the challenge we tackled in this work,” says Felix Yanwei Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this method.

    His co-authors include Lirui Wang PhD ’24 and Yilun Du PhD ’24; senior author Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia Perez-D’Arpino PhD ’19, and Dieter Fox of NVIDIA. The research will be presented at the International Conference on Robotics and Automation (ICRA).

    Mitigating misalignment

    Recently, researchers have begun using pre-trained generative AI models to learn a “policy,” or a set of rules, that a robot follows to complete an action. Generative models can solve multiple complex tasks.

    During training, the model only sees feasible robot motions, so it learns to generate valid trajectories for the robot to follow.
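    To make that premise concrete, here is a minimal toy sketch, assuming nothing about the researchers’ actual model: a “policy” that only ever samples near a set of feasible demonstrations will itself produce near-feasible trajectories. Every name and number here is illustrative, not the paper’s implementation.

```python
# Toy illustration of the training premise: the dataset contains only
# feasible motions, so anything this stand-in "policy" generates stays
# near the feasible set. The real system uses a learned generative model.
import numpy as np

rng = np.random.default_rng(0)

# Pretend demonstration set: straight-line reaches that all clear the table.
demos = np.stack([
    np.linspace([0.0, 0.0, 0.3], [0.4, 0.2 * k, 0.1], 20) for k in range(-2, 3)
])

def sample_policy(noise=0.01):
    """Sample one trajectory near a randomly chosen demonstration."""
    demo = demos[rng.integers(len(demos))]
    return demo + noise * rng.standard_normal(demo.shape)

traj = sample_policy()  # (20, 3) path that stays close to the feasible demos
```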

    While these trajectories are valid, that doesn’t mean they always align with a user’s intent in the real world. The robot might have been trained to grab boxes off a shelf without knocking them over, but it could fail to reach the box on top of someone’s bookshelf if the shelf is oriented differently than those it saw in training.

    To overcome these failures, engineers typically collect data demonstrating the new task and re-train the generative model, a costly and time-consuming process that requires machine-learning expertise.

    Instead, the MIT researchers wanted to allow users to steer the robot’s behavior during deployment when it makes a mistake.

    But if a human interacts with the robot to correct its behavior, that could inadvertently cause the generative model to choose an invalid action. It might reach the box the user wants, but knock books off the shelf in the process.

    “We want to allow the user to interact with the robot without introducing those kinds of mistakes, so we get a behavior that is much more aligned with user intent during deployment, but that is also valid and feasible,” Wang says.

    Their framework accomplishes this by providing the user with three intuitive ways to correct the robot’s behavior, each of which offers certain advantages.

    First, the user can point to the object they want the robot to manipulate in an interface that shows its camera view. Second, they can trace a trajectory in that interface, allowing them to specify how they want the robot to reach the object. Third, they can physically move the robot’s arm in the direction they want it to follow.

    “When you are mapping a 2D image of the environment to actions in a 3D space, some information is lost. Physically nudging the robot is the most direct way of specifying user intent without losing any of the information,” says Wang.
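    One illustrative way to reduce all three corrections to a common signal the robot can score against might look like the sketch below. The pinhole-camera back-projection is standard geometry, but the data structure, field names, and helpers are assumptions for illustration, not the researchers’ interface.

```python
import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intent:
    """A user correction reduced to geometry a sampler can score against."""
    target_point: Optional[np.ndarray] = None  # pointing: a single 3-D goal
    traced_path: Optional[np.ndarray] = None   # tracing: an (N, 3) desired path
    nudged_pose: Optional[np.ndarray] = None   # nudging: pose read off the arm

def point_correction(u, v, depth, fx, fy, cx, cy):
    """Pointing: back-project a clicked pixel (u, v) with known depth
    through a pinhole camera model into a 3-D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return Intent(target_point=np.array([x, y, depth]))

def trace_correction(path_3d):
    """Tracing: a screen path lifted to 3-D. As the quote above notes,
    the 2-D-to-3-D mapping loses information, so this is approximate."""
    return Intent(traced_path=np.asarray(path_3d, dtype=float))

def nudge_correction(end_effector_pose):
    """Nudging: the correction is read directly from the arm's encoders,
    so no projection information is lost."""
    return Intent(nudged_pose=np.asarray(end_effector_pose, dtype=float))
```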

    Sampling for success

    To ensure these interactions don’t cause the robot to choose an invalid action, such as colliding with other objects, the researchers use a specific sampling procedure. This technique lets the model choose an action from the set of valid actions that most closely aligns with the user’s goal.

    “Rather than just imposing the user’s will, we give the robot an idea of what the user intends but let the sampling procedure oscillate around its own set of learned behaviors,” Wang explains.
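    The sketch below shows the shape of that idea under stated assumptions: draw many candidate trajectories from the learned policy, discard any that fail a validity check, and return the survivor closest to the user’s target. The stand-in policy and collision check are placeholders for the real learned model and feasibility test, not the paper’s method.

```python
# Minimal sketch of constrained sampling: choose, from the set of valid
# policy samples, the action closest to the user's intent.
import numpy as np

def sample_trajectory(rng, num_waypoints=20):
    # Stand-in for the learned policy: a random smooth-ish 3-D path.
    steps = 0.05 * rng.standard_normal((num_waypoints, 3))
    return np.cumsum(steps, axis=0)

def is_collision_free(trajectory):
    # Stand-in validity check: e.g., every waypoint stays above the table.
    return bool(np.all(trajectory[:, 2] > -0.4))

def intent_distance(trajectory, target_point):
    # How close does the trajectory's endpoint come to the user's target?
    return float(np.linalg.norm(trajectory[-1] - target_point))

def corrected_action(target_point, num_samples=64, seed=0):
    """Sample many candidates, keep only valid ones, return the best match."""
    rng = np.random.default_rng(seed)
    valid = [t for t in (sample_trajectory(rng) for _ in range(num_samples))
             if is_collision_free(t)]
    if not valid:
        raise RuntimeError("no feasible trajectory found; increase num_samples")
    return min(valid, key=lambda t: intent_distance(t, target_point))

best = corrected_action(target_point=np.array([0.3, 0.1, 0.0]))
```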

    This sampling method enabled the researchers’ framework to outperform the other methods they compared it to during simulations and experiments with a real robot arm in a toy kitchen.

    While their method might not always complete the task right away, it offers users the advantage of being able to immediately correct the robot if they see it doing something wrong, rather than waiting for it to finish and then giving it new instructions.

    Moreover, after a user nudges the robot a few times until it picks up the correct bowl, it could log that corrective action and incorporate it into its behavior through future training. Then, the next day, the robot could pick up the correct bowl without needing a nudge.
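    A minimal sketch of what logging such corrections could look like follows; the article does not specify a storage format, so the fields and file layout here are purely illustrative.

```python
# Hedged sketch of the continual-improvement loop the article alludes to:
# store each human correction next to the robot's observation so the pairs
# can be folded into a later fine-tuning run.
import json
import time
import numpy as np

def log_correction(path, observation_id, corrected_trajectory):
    """Append one human-corrected action to a JSONL file."""
    record = {
        "timestamp": time.time(),
        "observation": observation_id,  # e.g., an ID for the camera frame
        "trajectory": np.asarray(corrected_trajectory).tolist(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Usage: after the user nudges the robot onto the correct bowl, call
# log_correction("corrections.jsonl", "frame_0042", best_trajectory)
```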

    “But the key to that continuous improvement is having a way for the user to interact with the robot, which is what we have shown here,” Wang says.

    In the future, the researchers want to boost the speed of the sampling procedure while maintaining or improving its performance. They also want to experiment with robot policy generation in novel environments.
