World modelling in NextPerception and in Robotics
Author: Heico Sandee (Smart Robotics)
The use-cases in NextPerception focus on automotive and healthcare because these domains have a lot in common: both rely on massive amounts of sensor data to ensure safety and increasingly autonomous behaviour. RGB cameras, 3D cameras, GPS and many other sensors are combined to determine the ‘state’ of the environment and its actors (e.g., a car). All this data is gathered, combined into models and processed further to predict future states based on possible decisions of an actor (e.g., a car turns left). From these models, operators (e.g., a driver) are advised and the next actions of autonomous devices (e.g., a self-driving car) are determined.
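As a rough illustration of this predict-then-decide loop, here is a minimal Python sketch. The `CarState` class, the `predict` function and the candidate decisions are all invented for illustration; the dynamics are reduced to a single kinematic step and do not represent any model used in the project.

```python
import math
from dataclasses import dataclass

# Illustrative only: a toy state and a one-step prediction per candidate
# decision, mirroring "predict future states based on possible decisions".

@dataclass
class CarState:
    x: float        # position along the road (m)
    y: float        # lateral position (m)
    heading: float  # radians, 0 = straight ahead
    speed: float    # m/s

def predict(state: CarState, decision: str, dt: float = 1.0) -> CarState:
    """Very simplified one-step prediction for a candidate decision."""
    turn = {"left": 0.2, "straight": 0.0, "right": -0.2}[decision]
    heading = state.heading + turn
    return CarState(
        x=state.x + state.speed * dt * math.cos(heading),
        y=state.y + state.speed * dt * math.sin(heading),
        heading=heading,
        speed=state.speed,
    )

# Evaluate each possible decision against its predicted future state,
# e.g. to advise a driver or to choose the next autonomous action.
current = CarState(x=0.0, y=0.0, heading=0.0, speed=10.0)
for decision in ("left", "straight", "right"):
    print(decision, "->", predict(current, decision))
```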
A third domain to which this applies is Robotics. Smart Robotics partners in the NextPerception project to explore the transfer of these developments to this domain, with a specific focus on World Modelling: an abstract representation of the environment, both spatial and temporal. A graphical depiction of a robot arm picking items from two bins is given below. Known data is visualized and multiple possible decisions for the next arm motion are presented.
[Figure: robot arm picking items from two bins; known data and candidate arm motions are visualized.]
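To make the idea of a spatial and temporal world model concrete, below is a minimal Python sketch. The `TrackedObject` and `WorldModel` names, their fields and the stale-object check are our own illustrative assumptions, not an interface from the project.

```python
import time
from dataclasses import dataclass, field

# Sketch of a world model: where things are (spatial) and when they were
# last observed (temporal). All names here are invented for illustration.

@dataclass
class TrackedObject:
    label: str                        # e.g. "bin", "item"
    pose: tuple[float, float, float]  # x, y, z in the robot frame (m)
    observed_at: float                # timestamp of last observation (s)

@dataclass
class WorldModel:
    objects: dict[str, TrackedObject] = field(default_factory=dict)

    def update(self, object_id: str, label: str,
               pose: tuple[float, float, float]) -> None:
        """Insert or refresh an object with the current timestamp."""
        self.objects[object_id] = TrackedObject(label, pose, time.time())

    def stale(self, max_age: float) -> list[str]:
        """Ids of objects not observed within the last max_age seconds."""
        now = time.time()
        return [oid for oid, obj in self.objects.items()
                if now - obj.observed_at > max_age]

world = WorldModel()
world.update("bin_1", "bin", (0.40, -0.20, 0.05))
world.update("item_7", "item", (0.42, -0.18, 0.12))
print(world.stale(max_age=5.0))  # [] right after updating
```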
A typical robot picking application covers the following flow (a code sketch follows the list):
- Detect source carrier position and add to World Model
- Segment and classify items from RGB camera with a Deep Learned model
- Add depth data to the model to determine possible grasp poses
- Select and execute the best grasp pose candidate based on the world model, avoiding collisions
- Determine grasp success based on robot sensors
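The sketch below strings these steps together in Python. Every function is a stub with an invented name and signature; in a real system they would call camera drivers, a trained segmentation model, a grasp planner and the robot controller.

```python
# Hedged sketch of the picking flow above; all functions are stubs.

def detect_source_carrier():
    """Step 1: localize the source bin (e.g. from a fiducial or 3D match)."""
    return {"id": "source_bin", "pose": (0.40, -0.20, 0.05)}

def segment_and_classify(rgb_image):
    """Step 2: deep-learned instance segmentation on the RGB image (stub)."""
    return [{"id": "item_7", "label": "item", "mask": None}]

def grasp_candidates(items, depth_image):
    """Step 3: combine segmentation masks with depth data into grasp poses."""
    return [{"item": it["id"], "pose": (0.42, -0.18, 0.12), "score": 0.9}
            for it in items]

def select_collision_free(candidates, world_model):
    """Step 4: choose the best candidate the world model deems collision-free
    (stub: simply take the highest-scoring one)."""
    return max(candidates, key=lambda c: c["score"])

def execute_and_verify(grasp):
    """Steps 4-5: execute the motion, then check robot sensors
    (e.g. gripper feedback) to decide whether the grasp succeeded."""
    return True  # stub: pretend the pick worked

def pick_cycle(world_model, rgb_image, depth_image):
    carrier = detect_source_carrier()
    world_model[carrier["id"]] = carrier["pose"]  # add carrier to world model
    items = segment_and_classify(rgb_image)
    candidates = grasp_candidates(items, depth_image)
    grasp = select_collision_free(candidates, world_model)
    return execute_and_verify(grasp)

print(pick_cycle(world_model={}, rgb_image=None, depth_image=None))
```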
World Modelling in Robotics has so far focused on control to realize autonomy; the next step for the World Model is safety. For Automotive and Healthcare it is the other way around: the current focus is on safety, whereas the next focus is on autonomy. For this reason, it is very interesting to see how both application areas can learn from each other.
With world modelling, both domains create a model of the physical world and perform computations and operations on it, but each with its own specific focus.
These differences between NextPerception and Robotics mean that, at this moment, it is difficult to transfer concrete results between the domains: focus and need are prioritized differently. However, we expect that in the future these domains will move much closer to each other and that tools and algorithms can be shared, especially as robots operate more among people and need to be safer, and as automotive and healthcare reach the next level of autonomy.
NextPerception at ERF2022.