How robots see
June 16, 2021

When running or out for a walk, a person does not think about their movements; for us they are second nature. Moving through a dense flow of people, we can do several other things at the same time, such as talking on the phone and holding a bag. For humans such actions are familiar and cause no difficulty, but the same is not true for robots.

Let's look at how a robot sees and moves. This requires two components: hardware and software. Only when these two work together can we talk about a robot moving on its own. Let's start with the hardware.

Robot mobility hardware

Needless to say, a robot needs mechanisms to move around: wheels, as on Promobot, or legs, as on a robot dog. Each wheel of a robot like Promobot reports odometry (the distance it has traveled), so the robot knows how far it has moved in each direction. The odometry data is stored and used to place the robot on a 3D map.
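To make the idea concrete, here is a minimal sketch of how wheel odometry is typically computed for a two-wheeled (differential-drive) robot. The wheel radius, wheel base and encoder resolution below are illustrative assumptions, not Promobot's actual parameters.

```python
# Minimal wheel-odometry sketch for a differential-drive robot.
# All constants are assumed values for illustration only.
import math

WHEEL_RADIUS = 0.10    # wheel radius, meters (assumed)
WHEEL_BASE = 0.40      # distance between the two wheels, meters (assumed)
TICKS_PER_REV = 1024   # encoder ticks per wheel revolution (assumed)

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the robot's (x, y, heading) estimate from new encoder ticks."""
    # Convert ticks to the distance each wheel has rolled.
    left_dist = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    right_dist = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV

    # Average forward motion and change of heading.
    forward = (left_dist + right_dist) / 2
    d_theta = (right_dist - left_dist) / WHEEL_BASE

    x += forward * math.cos(theta + d_theta / 2)
    y += forward * math.sin(theta + d_theta / 2)
    theta += d_theta
    return x, y, theta

# Example: both wheels advance 512 ticks, so the robot drove straight ahead.
print(update_pose(0.0, 0.0, 0.0, 512, 512))
```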

It's much more interesting to find out how the robot knows where it can and cannot go. People assume that building a color camera into the robot solves the problem, but unfortunately that is not the case. A robot is not a human, and to it the image from the camera is just a set of pixels.

The robot needs a device that can measure the distance to surrounding objects, and the best tool for this is lidar. It emits light in a spectrum invisible to humans and measures the time it takes for the light to reach an object and return. The farther away the object, the longer the light travels, so the device can work out how far away the object is.
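The arithmetic behind this is simple: the measured time covers the trip to the object and back, so the distance is half the round trip multiplied by the speed of light. A minimal sketch:

```python
# Time-of-flight distance calculation used by a lidar:
# distance = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds):
    """Distance to the object from the time the pulse takes to go there and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after roughly 33 nanoseconds came from about 5 meters away.
print(distance_from_round_trip(33.3e-9))
```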

Lidars are gaining popularity in modern technology: self-driving cars and even smartphones are now being fitted with these sensors.

Machine vision of this kind is costly, so there are cheaper alternatives to lidar: sonars, infrared rangefinders, and ToF cameras. This saves money, but the quality of recognition drops, which can eventually lead to a collision with an object or a person.
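For comparison, an ultrasonic sonar works on the same round-trip principle as lidar, only with sound instead of light; the timing is easier and the hardware cheaper, but the beam is wide and the readings are much coarser. A minimal sketch:

```python
# Sonar distance from echo time, using the speed of sound instead of light.
SPEED_OF_SOUND = 343.0  # meters per second in air at about 20 °C

def sonar_distance(echo_seconds):
    """Distance from the time between emitting a ping and hearing its echo."""
    return SPEED_OF_SOUND * echo_seconds / 2

# An echo arriving after about 29 milliseconds corresponds to roughly 5 meters.
print(sonar_distance(0.0292))
```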

Now let's move on to the second component, the software.

The software side of robot locomotion

Getting data from the lidar is the smallest part of the job; the data still has to be processed, and that is done by an onboard computer. And as we know, that requires software. While robotics is still gaining momentum, each manufacturer has to write its own software, which is one reason quality robots are so expensive.
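To give a flavor of what that processing looks like, the sketch below takes a raw lidar scan and finds the nearest obstacle and its bearing. The scan format, 360 evenly spaced distance readings, one per degree, is an assumption made for illustration.

```python
# Process one lidar scan: find the closest obstacle and the direction it lies in.
# The scan is assumed to be a list of 360 distances, one per degree of rotation.
def nearest_obstacle(ranges):
    """Return (distance, bearing in degrees) of the closest valid return."""
    best_dist = float("inf")
    best_angle = None
    for i, dist in enumerate(ranges):
        if dist <= 0:            # zero or negative readings mean "no return"
            continue
        if dist < best_dist:
            best_dist = dist
            best_angle = i * 360.0 / len(ranges)
    return best_dist, best_angle

# Fake scan: everything 4 m away except an obstacle 0.8 m away at 90 degrees.
scan = [4.0] * 360
scan[90] = 0.8
print(nearest_obstacle(scan))   # -> (0.8, 90.0)
```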

Here's a simple example. The robot needs to reach a wall, so it uses the lidar's reading of the distance to the wall and stops when that distance reaches a minimum. But in real life there are people and moving objects that can travel alongside and across the robot's path. Along the way, the robot must look in all directions, work out how to avoid objects ahead of and behind it, and estimate their speed and size. It has to keep in memory a 3D map of the area, the objects on it and how fast they are moving, and do all the calculations in real time.
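In code, the idealized version of that example (ignoring the moving people and the 3D map) is just a loop that reads the forward distance and stops below a threshold. Here, read_front_distance and set_speed are hypothetical placeholders for whatever sensor and motor interfaces a real robot would provide; everything that makes the real problem hard happens around this loop.

```python
# Minimal "drive to the wall and stop" loop.
# read_front_distance() and set_speed() are hypothetical stand-ins for a real
# robot's sensor and motor API; the constants are assumed values.
import time

STOP_DISTANCE = 0.5   # meters: stop when the wall is closer than this (assumed)
CRUISE_SPEED = 0.3    # meters per second (assumed)

def drive_to_wall(read_front_distance, set_speed):
    while True:
        distance = read_front_distance()   # latest lidar reading straight ahead
        if distance <= STOP_DISTANCE:
            set_speed(0.0)                 # wall reached: stop
            break
        set_speed(CRUISE_SPEED)            # otherwise keep moving forward
        time.sleep(0.05)                   # re-check about 20 times per second
```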

Robots can already be trusted with such tasks, and programmers and engineers keep building ever more capable machines that move on their own without human intervention. The choice of robots keeps expanding and their quality keeps improving, and there is little doubt they will become truly autonomous in the near future.
