Proposal of a method for real-time detection of obstacles using a line laser and a camera

Automated guided vehicles (AGVs) have been widely used in factories and warehouses. Functions such as obstacle detection are indispensable for unmanned transport robots. We have developed a new approach for obstacle detection using a line laser and a camera. In this study, we improved the detection process of the system, made it real-time, implemented it on a robot, and verified the measurement accuracy of the system. As a result of measuring the distance to the obstacle and the size of the obstacle, the measurement error was small, within 20 mm, and it was confirmed that the system could detect the obstacle with good accuracy.


Introduction
In recent years, the number of autonomous mobile robots in Japan has been on the rise, and it is expected to continue to increase in the future (Fig. 1.1).

Figure 1.1 Domestic autonomous mobile robot market spending forecast [1]

Autonomous mobile robots are used in everyday life, for example as drones and cleaning robots. They are also used in factories and warehouses as unmanned transport robots, and there are high expectations for their role in unmanned operations. Most AGVs currently in use follow a predetermined travel route (Fig. 1.2), so the layout of a factory or warehouse cannot be changed easily. Even if cargo is not supposed to be placed on the travel route, an obstacle may still appear, causing the robot to collide with it and stop, which reduces transport efficiency. It is therefore necessary to detect obstacles accurately and avoid them appropriately. In factories and warehouses, humans and robots often work together, and irregular situations can arise from human movement. We are therefore developing a method using a line laser and a camera as a low-cost system that can detect obstacles over a wide area and measure their distance and size. In this study, we improved the detection process of the system, made it real-time, implemented it on a two-wheeled differential-drive robot, and verified the measurement error under various conditions, such as stationary and moving.

Figure 1.2 Example of AGVs in use [2]

Principle
In this chapter, we describe the principles used in our system.

Obstacle detection principle
As shown in Fig. 2.1, a green line laser is projected from the top of the robot toward the floor, and when the laser line hits an obstacle, the obstacle is detected from the resulting change in the input image. Here we explain the principle of obstacle detection based on the line laser (green) extracted from the image. Figures 2.2 and 2.3 show images of the extracted line laser (green) with and without an obstacle. As shown in Fig. 2.2, when there is no obstacle, the extracted laser line is an unbroken straight line. As shown in Fig. 2.3, when there is an obstacle, the region where the obstacle exists shows a characteristic change that follows the shape of the obstacle. This difference is used to detect the presence or absence of an obstacle and to calculate its height, width, and distance.
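The detection step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the channel thresholds for isolating the green laser and the deviation tolerance are assumptions that would need tuning for the actual 520 nm laser and lighting.

```python
import numpy as np

def extract_laser_mask(frame_bgr):
    """Mark pixels where the green channel dominates (assumed thresholds)."""
    b = frame_bgr[..., 0].astype(int)
    g = frame_bgr[..., 1].astype(int)
    r = frame_bgr[..., 2].astype(int)
    return (g > 150) & (g - r > 60) & (g - b > 60)

def laser_row_per_column(mask):
    """Row index (centroid) of the laser line in each image column; -1 where absent."""
    rows = np.full(mask.shape[1], -1, dtype=int)
    ys, xs = np.nonzero(mask)
    for x in np.unique(xs):
        rows[x] = int(round(ys[xs == x].mean()))
    return rows

def obstacle_columns(rows, floor_row, tol=3):
    """Columns where the laser deviates from the unbroken floor line
    indicate an obstacle; tol absorbs pixel noise."""
    return (rows >= 0) & (np.abs(rows - floor_row) > tol)
```

The width of the obstacle then corresponds to the run of deviated columns, and the deviation magnitude relates to its height via the geometry described in the next section.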

Calculating the size and distance of obstacles
In this section, we describe the calculation used to determine the size (height and width) of an obstacle and the distance to it from the image after line-laser extraction. To convert a pixel count into an actual length, we first need the shooting range of the camera. Figure 2.5 shows a diagram of the camera's shooting range, and Fig. 2.6 shows the positional relationship between the camera, line laser, and obstacle in this system. The line laser and camera are placed on the y-axis with respect to the origin, and the line laser is projected in the z-axis direction toward a point at distance Zo. When an obstacle of height Hr or greater exists at distance Zr, the line laser strikes it at point P. On the vertical shooting plane at distance Zo, this is equivalent to a laser line at height Hz0. Therefore, the intersection of two lines is computed: the line (blue) connecting the camera position at height Hc to the laser position Hz0 on the vertical shooting plane at distance Zo, and the line along which the laser is projected. The z-coordinate of this intersection gives the distance Zr to the obstacle, and the y-coordinate gives the height at which the laser strikes it.
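The intersection computation can be written out directly from the geometry above. The function below is a sketch under the stated setup; the parameter names (and the assumption that the laser is aimed at the floor point at distance Zo) follow the description in the text rather than the paper's actual code.

```python
def intersect_rays(h_cam, z0, h_z0, h_laser, z_floor):
    """Obstacle distance and laser-strike height from two rays in the y-z plane.
      h_cam   : camera height Hc above the floor (y at z = 0)
      z0      : distance Zo to the vertical shooting plane
      h_z0    : laser height Hz0 observed on that plane
      h_laser : mounting height of the line laser (y at z = 0)
      z_floor : z where the laser ray meets the floor
    Returns (z_r, h_r): distance Zr and the height of strike point P."""
    m_cam = (h_z0 - h_cam) / z0      # slope of the camera ray (blue line)
    m_laser = -h_laser / z_floor     # slope of the descending laser ray
    z_r = (h_laser - h_cam) / (m_cam - m_laser)  # z of the intersection
    h_r = h_cam + m_cam * z_r                    # y of the intersection
    return z_r, h_r
```

For example, with the camera at 200 mm, the laser at 1000 mm aimed at the floor 1000 mm away, and the laser spot observed at 800 mm on the shooting plane, the intersection lands at (500, 500), i.e. the laser strikes an obstacle 500 mm away at a height of 500 mm.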

System Overview
The system configuration used is shown in Fig. 3.1. The model proposed in this study consists of a line laser linearized by a polarizing lens, a camera, and a computer for image processing and robot control. The line laser is placed at a higher position and the camera at a lower position. The line laser is aimed obliquely at the floor, and the camera captures images parallel to the floor. The line laser is from the MLXL series (wavelength: 520 nm), the camera is the Raspberry Pi Camera V2, and the computer is a Raspberry Pi 4. Figure 3.2 shows the specifications of the mobile robot used in this study. As shown in Fig. 3.2, the Raspberry Pi acts as the I2C master and the Arduino Due as the slave, and operation commands are sent from a wireless keyboard. The Raspberry Pi requests a change in motor speed, the Arduino drives the motors through the motor driver, and the speed is adjusted based on feedback from the encoders. This makes it possible to move the robot at any speed. The Raspberry Pi and Arduino Due both operate at 3.3 V logic and can be connected directly, but the motor driver and encoders operate at 5 V, so a level-shift circuit is used for their communication with the Arduino Due.
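The master-slave speed command described above could look like the following sketch on the Raspberry Pi side. The slave address, command register, and payload format are all assumptions for illustration, not the paper's actual protocol; on the robot, `bus` would be an `smbus2.SMBus(1)` instance.

```python
import struct

I2C_ADDR = 0x08        # assumed address of the Arduino Due slave
CMD_SET_SPEED = 0x01   # assumed 'set speed' command register

def pack_speed_command(left_mm_s, right_mm_s):
    """Pack left/right wheel speeds (mm/s) as little-endian signed
    16-bit integers. This payload layout is an assumption."""
    return struct.pack('<hh', left_mm_s, right_mm_s)

def send_speed(bus, left_mm_s, right_mm_s):
    """Send one speed command from the Pi (master) to the Arduino (slave).
    The Arduino would unpack the 4 bytes and run its own speed feedback loop."""
    payload = pack_speed_command(left_mm_s, right_mm_s)
    bus.write_i2c_block_data(I2C_ADDR, CMD_SET_SPEED, list(payload))
```

Keeping the feedback loop on the Arduino and only the setpoint on the Pi matches the division of labor in Fig. 3.2: the Pi stays free for image processing while the Arduino handles the encoders in real time.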

Real-time obstacle detection accuracy verification (static state)
Even in a static state, the effect of ambient light sources such as fluorescent lamps on each pixel value changes over time. Therefore, identical measurement results are not always obtained even when the robot and the obstacle are in the same positions. In this section, we examine the scatter of the data when both the robot and the obstacle are stationary, and the measurement accuracy of the detection system itself.

Experimental Methods
The setup of this experiment is shown in Fig. 4.1. In this experiment, an obstacle is brought closer to a stationary robot in 50 [mm] steps from a point 800 [mm] away, and measurements are taken for 10 seconds at each position (100 data points at a frame rate of 10 fps). We compare the scatter of the data and the error from the true value.
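The two quantities compared at each position can be computed as below; a minimal sketch of the evaluation, assuming "scatter" means the max-min spread of the 100 samples and "error" the worst deviation from the true value, as the discussion in this chapter suggests.

```python
def summarize_position(samples, true_value):
    """Per-position summary over the 100 samples taken at one distance:
    maximum absolute error from the true value, and the scatter
    (difference between the maximum and minimum measured values)."""
    max_err = max(abs(s - true_value) for s in samples)
    scatter = max(samples) - min(samples)
    return max_err, scatter
```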

Experimental results
The results obtained in this experiment are shown below.

Discussions
The results show that the distance to and height of the obstacle can be detected with an error of at most 18 [mm] from the theoretical value, which means that practical obstacle detection is possible. In the width measurement, the error is 150 [mm] at a distance of 800 [mm]. This is thought to be because the laser line on the obstacle and the laser line on the floor appear at almost the same height in the image, so the region was not recognized as an obstacle and the initial value was output. For the other data points, detection is considered practical, as with distance and height. The difference between the maximum and minimum measured values is about 10 [mm], so the influence of ambient light is considered relatively small. The error is larger at closer distances; this is because the laser illuminates the obstacle more strongly at close range, making the laser line thicker in the image.

Real-time obstacle detection accuracy verification (moving state)
Autonomous mobile robots must detect obstacles correctly while moving in order to travel safely in factories and warehouses. In this section, the robot approaches an obstacle at a constant speed, and the measurement accuracy of the data detected in real time is verified.

Experimental Methods
The setup of this experiment is shown in Fig. 4.5. In this experiment, the robot approaches a stationary obstacle from 800 [mm] away at a constant speed (150 [mm/s]), and measurements are taken in real time. We compare the measured values over time against the reference true values. To check whether the measurement results depend on the shape of the obstacle, we prepared two obstacles, one rectangular and one isosceles-triangular.

Experimental results
The results obtained in this experiment are shown below.

Discussions
In this experiment, we verified the measurement accuracy while the robot was moving. The results show that in the region where the laser line on the obstacle and the laser line on the floor appear at almost the same height in the image, the obstacle is not detected, and the initial values of distance, height, and width are output. Outside this region, detection is possible within an error of 20 [mm] regardless of the shape of the obstacle. Based on the above, we believe that practical obstacle detection is possible. When the three parameters (distance, height, and width) are plotted in a single graph, the distance and height errors vary in the same way. This is because the coordinates of the intersection of the two lines are used to calculate both distance and height. As for the width, the error grows as the robot approaches the obstacle, as in the stationary-state measurement.

Conclusions
In this study, we aimed to enable the robot to run freely while avoiding obstacles. We took an approach different from conventional obstacle detection methods, using a line laser and a camera, to detect obstacles in real time while the robot is stationary and while it is moving. Regarding detection accuracy, the experimental results show that distance, height, and width can each be detected within 20 [mm] in real time, except in the 40 [mm] region near the laser line on the floor. Therefore, practical obstacle detection can be said to be possible.
In this study, we used the Raspberry Pi Camera V2, but its shooting range is limited. In the future, using a camera with a wider angle of view and lower distortion will enable detection of obstacles over a wider area. We also plan to add lasers with different irradiation angles in order to shrink the undetectable region that the current system cannot compensate for.