Development of a Virtual Disaster Evacuation Drill System Using Augmented Reality Technology

In this study, we propose a disaster evacuation drill system that uses augmented reality (AR), as one way to diversify evacuation drills, a typical form of disaster prevention education in schools. The system lets participants experience practical, realistic training by superimposing CG images of disasters, such as fire and rubble, onto the places of daily life. AR markers are placed in a building in advance; when a real scene containing a marker is captured with a camera, the marker is detected in the camera image and a 3D model of a disaster is displayed virtually on the marker. Furthermore, the distance between the marker and the camera is calculated, and if the camera comes too close, a warning message is displayed.


Background
In recent years, the importance of disaster prevention education has been reaffirmed by the occurrence of large-scale disasters such as earthquakes and heavy rainfall. When a disaster strikes, the issues are how safely we can act to protect our lives and how quickly we can recover from the damage. In particular, disaster prevention education for children, who are especially vulnerable, is an urgent issue, and improvements to disaster prevention education in schools are required.
Evacuation drills are one typical form of disaster prevention education. Drills conducted at elementary and junior high schools generally follow a scenario that fixes in advance the start time, the type of fire or disaster, the evacuation routes, and the scale of the disaster, and it is customary for participants to act according to that scenario. However, repeating such a method can hollow out the substance of the drill. As a result, problems arise such as declining engagement with the training and participants failing to understand its purpose and necessity. In addition, drills held in an environment different from the place where participants spend their daily lives are difficult to experience as real.
To solve these problems, we propose a system that applies augmented reality (AR) to evacuation drills, as one way to diversify them. With this system, participants can experience realistic, practical training in which CG images of disasters such as fire and smoke are virtually superimposed on their everyday surroundings. The purpose of this research is to develop a marker-based system for indoor evacuation training.
As a prior study, the Center of Education and Research for Disaster Management (CERD) has developed an application for outdoor disaster prevention training. In that research, a markerless application visualizes and displays information on the screen of a tablet terminal using the location information of the training participants. (1)

What is Augmented Reality?
Augmented reality (AR) is a technology that extends real-world information by superimposing information from a virtual world onto the perceptual information we normally receive from the real environment. While virtual reality (VR) replaces reality with an artificially created one, augmented reality adds information to, or removes information from, reality. One application area is education: AR makes things that are hard to imagine in 2D, such as the buildings, figures, and structures of living things shown in school textbooks, easier to understand by letting students view them three-dimensionally through an app. AR is also used in many familiar settings, such as corporate advertising and the medical field.

System overview
An overview of this system is shown in Figure 1. In this system, AR markers are installed at various places in a building; when a marker is viewed through a camera, it is replaced by a 3D model, such as a fire or falling furniture, displayed superimposed on the real scenery. An example of an AR marker is shown in Figure 2. Markers with different functions are placed along a predetermined route. Participants take part in the drill while carrying a camera for reading the markers, such as the camera of a smartphone or a head-mounted display (HMD). They move through the real building and can virtually see, through the camera, the situations that could occur during a disaster. Figure 1 shows how a virtual fire appears when a marker placed on the real floor is viewed through the camera.
Participants are trained to take appropriate action in response to such situations, for example loudly announcing the outbreak of a fire, lowering their posture to avoid inhaling smoke, and covering their mouth with a handkerchief. Compared with conventional evacuation drills, the training method of this system is more realistic and lets participants experience something close to an actual disaster. It also has the merit of letting them visually confirm disaster situations that cannot be imagined in everyday life.
Participants are given an explanation of the training method in advance and are tasked with moving as quickly and safely as possible from inside the building, where the markers are placed, to the evacuation site. If a participant takes an inappropriate action, such as approaching a disaster occurrence area, a warning message is displayed to encourage appropriate evacuation behavior. The training analyzes the following factors: what remarks the participants made, which passages they used to evacuate, how long they took, and the number of warnings issued. The participant's evacuation behavior is then evaluated based on these analysis results, and the evaluation is fed back to the participant.
In this system, we adopted a marker-based method to introduce augmented reality into evacuation drills. A markerless method requires information about the training space, such as the size and layout of the area where each 3D model is displayed and exactly where it is to be placed. A marker-based method, by contrast, has the advantage that markers can be placed at arbitrary locations and 3D models displayed there, regardless of the interior or floor plan. This makes it possible to deploy the system in various places simply by preparing markers and a camera to read them.

What is ARToolKit?
ARToolKit (2) is a software library for the C/C++ language used to develop AR applications. When a scene containing a marker is captured with a camera connected to a PC, the marker can be detected in real time from the camera's input image, and its position and orientation can be measured. By using a graphics library such as OpenGL (3), a three-dimensional CG object can be projected onto the coordinate system based on the detected marker. In addition, since the transformation matrix from the marker coordinate system to the camera coordinate system can be calculated, the display position of the object can be adjusted.

Flow of program using ARToolKit
The basic operation of ARToolKit is as follows.
① Acquisition of the camera image
② Marker detection and pattern recognition
③ Determination of the degree of marker match
④ Measurement of the marker's position and posture
⑤ Acquisition of the coordinate transformation matrix
⑥ Composite display of the real image and 3D CG
Even when the same three-dimensional space is photographed from the same place, different captured images are obtained depending on characteristics of the camera, such as its focal length and angle of view. It is therefore necessary to measure in advance the characteristics of the camera and lens actually used; this is called camera calibration. In this experiment, we performed camera calibration by the method provided by ARToolKit. Specifically, a flat plate on which dozens of points are printed at equal intervals is photographed with the camera that will actually be used, and the actual distance between two points and the positions of the points on the captured image are input. Figure 3 shows the execution of a sample program that draws a cube on a marker read by the camera.

Marker detection method / calculation of position and attitude
The mechanism for detecting AR markers is as follows. First, the image captured by the camera is converted into a binary black-and-white image, as shown in Fig. 4, by a binarization process. Next, the image is scanned for each connected region of white or black, and labeling assigns the same number to connected parts. An end point is found in each closed region and taken as the first vertex (vertex 1). Starting from there, points are found by following the outline of the closed region until the contour returns to vertex 1. Among the contour points obtained in this way, the point furthest from vertex 1 is taken as vertex 2. To search for the remaining vertices, the contour is divided into two parts, "vertex 1 → 2" and "vertex 2 → 1", and in each part the point furthest from the line segment connecting vertices 1 and 2 is found. Repeating this recursively finds the remaining vertices. A region with four vertices found in this way is judged to be a rectangle. Inside the AR marker rectangle there is an area whose design the user can decide freely; this is what differentiates each marker. Pattern matching is performed to identify each marker: a simplified image of the design inside the detected rectangle is compared against data files, called pattern files, created in advance for each marker, and the matching error is calculated to identify which marker was read.
When an AR marker is detected, the coordinate transformation matrix between the marker and the camera is determined by finding the marker's rotation matrix and translation vector from the detected marker information. With the marker lying on a plane, the marker coordinate system takes the X axis positive to the right, the Y axis positive upward, and the Z axis positive in the direction perpendicular to the marker. The camera coordinate system takes the X axis positive to the right, the Y axis positive downward, and the Z axis positive in the direction from the camera toward the marker. Figure 5 shows the relationship between the camera coordinate system and the marker coordinate system.

Fig. 5. Camera coordinate system and marker coordinate system
Assuming that the camera coordinate system is $[X_c\ Y_c\ Z_c\ 1]^T$ and the marker coordinate system is $[X_m\ Y_m\ Z_m\ 1]^T$, the coordinate conversion between the marker and the camera is defined as follows.

$$
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
= T_{cm}
\begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix},
\qquad
T_{cm} =
\begin{bmatrix}
R_{3\times 3} & T_{3\times 1} \\
0\ \ 0\ \ 0 & 1
\end{bmatrix}
$$

Here $T_{cm}$ is the coordinate transformation matrix that transforms the marker coordinate system into the camera coordinate system, and it can be obtained using ARToolKit's arGetTransMat. It consists of a 3×3 rotation matrix $R$ representing the pose of the marker in the camera coordinate system and a 3×1 translation vector $T$ representing the position of the marker origin. (4)

Experimental method
The experiment was performed using two different markers, as shown in Fig. 6. Each marker displays a different type of 3D model, and the distance from the marker to the camera is shown at the top left of the screen. When the camera is sufficiently far from the marker, the distance is displayed in green. When the experimenter holding the camera comes too close to a marker (i.e., when a training participant is too close to a disaster occurrence location), the distance text changes to red and a warning message is displayed at the center of the screen. In this experiment we assumed two disaster patterns: a falling bookshelf and a fire. The 3D models displayed after marker detection were created with software called Metasequoia.

Experimental result
First, using marker 1, we simulated a situation in which a bookshelf fell over. Figure 7 shows the displayed 3D model of the fallen bookshelf. The distance between the camera and the marker was measured from the coordinates obtained by detecting the marker, and we confirmed that it could be displayed on the screen.
(a) Before displaying the 3D model (b) After displaying the 3D model Fig. 7. Simulation of a bookshelf fall
Next, we simulated a fire using marker 1 and marker 2. Figure 8 shows 3D models of two differently colored flames displayed using the two markers. As shown in Fig. 8 (a), the two different markers could be identified and a different 3D model displayed on each. In addition, the distance to the camera was obtained from the coordinates of each detected marker and could be displayed separately for each.
From Fig. 8 (b), we also confirmed that when the camera comes too close to a marker, the distance text changes from green to red and a warning message is displayed in the middle of the screen.

Conclusions
In this study, we used ARToolKit with multiple markers to reproduce disaster situations such as fire and falling furniture. In the experiment, multiple markers were read by a camera, the coordinate transformation matrix between each marker and the camera was acquired from the detected markers, and 3D models could be displayed by determining the position and orientation of the markers. Moreover, the distance between each marker and the camera could be measured from the obtained coordinates and displayed on the screen in real time. We also confirmed that the color of the distance text and the warning display change depending on the distance to the marker.
The main issues for future work are the following three.
① Examine how to minimize the error in measuring the distance between the marker and the camera.
② In addition to 3D models and warning messages, reproduce the sounds of a disaster to increase realism.
③ Improve marker detection accuracy.