
Robots can learn to do tasks like throwing away trash better, thanks to Google's RT-2 AI model


RT-2 is the first vision-language-action (VLA) model for controlling robots developed by Google. With it, robots can carry out practical tasks such as throwing away garbage, and they can learn more the way people do by "transferring learned concepts to new situations."
The model is built on the idea that robots should be capable of "transferring information to actions." Thanks to this development, robots can adapt quickly to new circumstances and settings, which could in theory open up a range of use cases that were not possible before. Google puts it plainly: "RT-2 can speak robot."
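To make the "information to actions" idea concrete, here is a minimal, purely illustrative Python sketch of how a VLA-style policy could be wired up: the model sees a camera image plus a text instruction and emits a short string of discretized action tokens that a controller decodes into motion. The function names and token format are assumptions made for this sketch, not Google's actual RT-2 interface.

from dataclasses import dataclass
from typing import List


@dataclass
class RobotAction:
    # A decoded low-level command for the arm.
    dx: float    # gripper translation in metres
    dy: float
    dz: float
    grip: float  # 0.0 = open, 1.0 = closed


def mock_vla_policy(image_pixels: List[int], instruction: str) -> str:
    # Stand-in for the VLA model: a real model would run a vision-language
    # transformer over the image and the instruction; here we return a fixed
    # token string so the example actually runs.
    return "move 12 -3 5 grip 1"


def decode_action_tokens(tokens: str) -> RobotAction:
    # Turn the text tokens back into numbers the robot controller understands.
    parts = tokens.split()
    dx, dy, dz = (int(p) / 100.0 for p in parts[1:4])  # centimetres to metres
    return RobotAction(dx, dy, dz, grip=float(parts[5]))


if __name__ == "__main__":
    fake_image = [0] * (224 * 224)  # placeholder camera frame
    tokens = mock_vla_policy(fake_image, "pick up the banana peel")
    print(decode_action_tokens(tokens))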
Making robots more helpful has, according to Google, been a "herculean task" because "a robot capable of doing general tasks in the world needs to be able to handle complex, abstract tasks in highly variable environments — especially ones it's never seen before." Recent work in the form of Google's new RT-2 model, however, makes this possible. According to Google, "recent work has enhanced robots' capacity for reasoning, even enabling them to use chain-of-thought prompting, a method of deconstructing multi-step problems."
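The mention of chain-of-thought prompting can be illustrated with a small, hypothetical sketch: the language model is asked to write out a numbered plan before committing to a single robot action. The prompt format and the mock_language_model helper below are assumptions made for the example, not Google's actual interface.

def mock_language_model(prompt: str) -> str:
    # Placeholder for a large language model; returns a canned answer so the
    # example runs without any external service.
    return (
        "Plan: 1) find the empty chip bag 2) grasp it 3) carry it to the bin "
        "4) release it over the bin\n"
        "Action: pick(chip_bag); place(bin)"
    )


def plan_with_chain_of_thought(task: str) -> str:
    # Ask the model to spell out its intermediate reasoning (the plan)
    # before committing to a single next action.
    prompt = (
        f"Task: {task}\n"
        "First write a numbered plan, then one line starting with 'Action:' "
        "giving the next robot command."
    )
    return mock_language_model(prompt)


if __name__ == "__main__":
    print(plan_with_chain_of_thought("throw away the empty chip bag"))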
Google used the example of tossing away garbage to demonstrate how the new model works. In the past, training a robot to discard rubbish required many steps: teaching the robot to recognize waste, how to pick it up, and where to place it.
RT-2, on the other hand, removes this need by "transferring knowledge from a large corpus of web data." This means the robot can recognize garbage without ever having been explicitly trained to do so, and can even figure out how to discard it, because RT-2 infers what counts as rubbish from its vision-language data. "And consider the abstract nature of garbage; once you consume anything, such as a bag of chips or a banana peel, it becomes garbage. From its vision-language training data, RT-2 can interpret that and complete the task," Google stated.
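As a rough illustration of the difference Google describes, the hypothetical sketch below contrasts the old hand-built pipeline (a waste detector trained for the task, then a grasp step, then a drop step) with a single generalist call. FakeRobot, detect_garbage and vla_act are made-up stand-ins for this sketch, not real RT-2 or Google robotics APIs.

class FakeRobot:
    # Made-up robot interface used only for this illustration.
    def pick(self, obj: str) -> None:
        print(f"picking up {obj}")

    def place(self, location: str) -> None:
        print(f"placing object in the {location}")


def detect_garbage(image: list) -> list:
    # Old approach: a detector trained specifically to recognize waste.
    return ["banana peel", "empty chip bag"]


def classical_pipeline(image: list, robot: FakeRobot) -> None:
    # Each stage (recognize, pick up, place) had to be trained or engineered
    # separately for the garbage task.
    for obj in detect_garbage(image):
        robot.pick(obj)
        robot.place("bin")


def vla_act(image: list, instruction: str, robot: FakeRobot) -> None:
    # Generalist approach: one model maps the image and the instruction
    # straight to actions, relying on web-scale vision-language data for the
    # concept of "garbage". The loop below stands in for the model's output.
    for obj in ["banana peel", "empty chip bag"]:
        robot.pick(obj)
        robot.place("bin")


if __name__ == "__main__":
    frame = [0] * 10  # placeholder camera frame
    classical_pipeline(frame, FakeRobot())
    vla_act(frame, "throw away the trash", FakeRobot())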
Unlike chatbots such as ChatGPT, which are powered by large language models like Google's PaLM 2 or OpenAI's GPT-4, robots need a deeper understanding of the environment in which they operate. That includes knowing how to pick up objects, how to tell different objects apart, and how objects fit within a given environment.

