Artículo Científico / Scientific Paper
pISSN: 1390-650X / eISSN: 1390-860X

Marco A. Luna1, Julio F. Moya1, Wilbert G. Aguilar2,3,4,*, Vanessa Abad5
Abstract
This article proposes the design and implementation of a low-cost, vision-based navigation mobile robot that tracks pedestrians in real time using an onboard IP camera. The purpose of this prototype is navigation based on people tracking while keeping a safe distance by means of PID and on-off controllers. For the implementation, we evaluate two pedestrian detection algorithms, a HOG cascade classifier and an LBP cascade classifier, both offline and onboard the robot. In addition, we implement a communication system between the robot and the ground station. The evaluation metrics for the pedestrian detection proposals were precision and sensitivity, obtaining better results with HOG. Finally, we evaluate the communication system by computing the delay of the controller response; the results show that the system works properly at a transmission rate of 115200 baud.

Resumen
Este artículo propone el diseño e implementación de un robot móvil con navegación basada en visión, de bajo costo, que sigue la trayectoria de peatones en tiempo real usando una cámara IP a bordo. El propósito de este prototipo es la navegación basada en el seguimiento de personas conservando una distancia segura a través de controladores PID y on-off. Para la puesta en marcha se evalúan dos algoritmos de detección de peatones: cascada de clasificadores HOG y cascada de clasificadores LBP, tanto fuera de línea como a bordo del robot. Adicionalmente, se implementó un sistema de comunicación entre el robot y una estación de tierra. Las métricas de evaluación para las propuestas de detección de personas fueron la precisión y la sensibilidad, obteniendo mejores resultados con HOG. Al final, se evaluó el sistema de comunicación, calculando el retraso de la respuesta del controlador. Los resultados mostraron que el sistema trabaja adecuadamente para una tasa de transmisión de 115200 baudios.
Keywords: AdaBoost, HOG, LBP, Pedestrian Detection, Urban Navigation.
Palabras clave: AdaBoost, detección de peatones, HOG, LBP, navegación urbana.
1 Departamento de Eléctrica y Electrónica DEEE, Universidad de las Fuerzas Armadas ESPE, Sangolquí - Ecuador.
2 Departamento de Seguridad y Defensa DESD, Universidad de las Fuerzas Armadas ESPE, Sangolquí - Ecuador.
3 Centro de Investigación Científica y Tecnológica del Ejército CICTE, Universidad de las Fuerzas Armadas ESPE, Sangolquí - Ecuador.
4 Grup de Recerca en Enginyeria del Coneixement GREC, Universitat Politècnica de Catalunya UPC, Barcelona - Spain.
5 Departament de Genètica, Universitat de Barcelona UB, Barcelona - Spain.
* Corresponding author.
where:
l = longitudinal separation between the wheels,
d = lateral separation between the wheels.
For traction and steering control, the system uses pulse-width modulation (PWM) in a chopper configuration [34]. Additionally, a smart device (smartphone) camera is used for navigation. The robot has two motors, one for forward/reverse displacement and another that controls the steering direction. An H-bridge circuit is implemented for motor control. The smartphone camera is used as an IP camera for image capture.
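The paper does not detail the drive electronics beyond the H-bridge and the PWM chopper. As a minimal illustrative sketch, assuming a Raspberry Pi-class controller with the RPi.GPIO library and hypothetical pin assignments (none of which are specified by the authors), the traction motor could be driven as follows; the steering motor would be handled in the same way.

```python
# Minimal sketch: PWM drive of the traction motor through an H-bridge.
# Assumes a Raspberry Pi with the RPi.GPIO library; pin numbers and the
# chopper frequency are hypothetical, not taken from the paper.
import RPi.GPIO as GPIO

IN1, IN2, EN = 17, 27, 18           # H-bridge direction inputs and PWM enable (assumed pins)

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, EN], GPIO.OUT)
pwm = GPIO.PWM(EN, 1000)            # 1 kHz chopper frequency (assumed)
pwm.start(0)

def drive(speed):
    """speed in [-100, 100]: sign selects forward/reverse, magnitude sets the duty cycle."""
    GPIO.output(IN1, speed >= 0)
    GPIO.output(IN2, speed < 0)
    pwm.ChangeDutyCycle(min(abs(speed), 100))
```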
4.2. Pedestrian Detection
The feature extraction algorithms previously described are applied on the mobile robot to obtain an autonomous navigation system based on pedestrian tracking. We use AdaBoost as the machine learning method; it is based on the notion of creating a highly accurate prediction rule by combining many relatively weak and inaccurate rules [33]. Fig. 2 shows images of the HOG algorithm performance on the mobile robot.
Figure 2. Pedestrian detection with the HOG algorithm. (a) Side detection of a person. (b) Back detection of a person. (c) Frontal detection of a person.
The performance of the LBP algorithm on images captured from the mobile robot is presented in Fig. 3.
Figure 3. Pedestrian detection with the LBP algorithm. (a) Detection of a false positive. (b) Detection of two people. (c) Frontal detection of a person.
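For reference, a minimal sketch of how such an AdaBoost-trained cascade can be run with OpenCV's CascadeClassifier is shown below. The cascade file name and the camera URL are placeholders, not the authors' actual trained cascade or device.

```python
# Minimal sketch: running a trained cascade classifier on the IP camera stream.
# The cascade XML file and the stream URL are placeholders.
import cv2

cascade = cv2.CascadeClassifier("pedestrian_lbp_cascade.xml")   # or a HOG-feature cascade
cap = cv2.VideoCapture("http://192.168.0.10:8080/video")        # smartphone IP camera (assumed URL)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns one bounding box (x, y, w, h) per detected pedestrian
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3, minSize=(48, 96))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```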
4.3. Controller
The mobile robot uses two main inputs for the controller (needed for 2D navigation [35]): the horizontal position and the distance, both obtained from the bounding box of the pedestrian detection. The horizontal position depends on the x coordinate of the bounding box centroid, and the distance depends on the width of the bounding box. We use a PI controller for the distance, which reduces overshoot in order to obtain a smooth response. For the horizontal position we use an on-off controller with hysteresis, i.e. with a band of maximum and minimum values in the output, since the variation range of the actuators is small and high precision is not required (Fig. 4).
Figure 4. Robot navigation control scheme.
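A minimal sketch of the two loops is given below. The bounding-box width stands in for distance and the centroid x-coordinate for horizontal position; the gains, set-point, and hysteresis thresholds are illustrative assumptions, not the values used on the robot.

```python
# Minimal sketch of the two control loops: PI on distance (bounding-box width)
# and on-off with hysteresis on horizontal position (centroid x coordinate).
# All numeric values are assumed for illustration.

KP, KI = 0.8, 0.05            # assumed PI gains for the distance loop
TARGET_WIDTH = 120            # bounding-box width (px) at the desired safe distance (assumed)
TURN_ON, TURN_OFF = 60, 20    # hysteresis thresholds around the image centre, in pixels (assumed)

integral = 0.0

def distance_control(box_width, dt):
    """PI controller: returns a forward/reverse speed command in [-100, 100]."""
    global integral
    error = TARGET_WIDTH - box_width        # positive error -> pedestrian too far -> move forward
    integral += error * dt
    u = KP * error + KI * integral
    return max(-100.0, min(100.0, u))

def steering_control(box_cx, frame_width, state):
    """On-off steering with hysteresis; state is 'left', 'right' or 'straight'."""
    offset = box_cx - frame_width / 2
    if offset > TURN_ON:
        return "right"
    if offset < -TURN_ON:
        return "left"
    if abs(offset) < TURN_OFF:
        return "straight"
    return state                            # inside the hysteresis band: keep the last command
```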
4.4. Communication
Wi-Fi communication is implemented to transmit data from the IP camera to the computer, since Wi-Fi provides enough bandwidth to send image data. On the other side, Bluetooth communication is used to send control commands from the computer to the robot. The transmission speed obtained from experimentation is 115200 baud (see Section 5).
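A minimal sketch of the Bluetooth control link on the ground-station side is shown below (the Wi-Fi video link is read with cv2.VideoCapture as in the detection sketch above). The serial device name and the one-byte command codes are assumptions; the paper does not specify the command protocol.

```python
# Minimal sketch: sending control commands to the robot over the Bluetooth serial link.
# The serial device name and the one-byte command codes are assumed.
import serial

link = serial.Serial("/dev/rfcomm0", baudrate=115200, timeout=0.1)   # Bluetooth serial port

COMMANDS = {"forward": b"F", "reverse": b"B", "left": b"L", "right": b"R", "stop": b"S"}

def send(cmd):
    """Send a one-byte control command to the robot."""
    link.write(COMMANDS[cmd])
    link.flush()

send("forward")   # example: in practice the command comes from the controller outputs
```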
5. Experimentation and Results
5.1. Pedestrian Detection
The performance of the pedestrian detection algorithms was compared using online and offline videos. We use recall and precision as evaluation metrics; the formulas used are presented in equations (7) and (8), respectively:

Recall = TP / (TP + FN)    (7)

Precision = TP / (TP + FP)    (8)

where:
TP: true positives.
FN: false negatives.
FP: false positives.
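A short helper for computing these metrics from the per-group detection counts might look as follows; the counts in the example call are made up and are not the paper's results.

```python
# Recall and precision from detection counts, as in equations (7) and (8).
def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

# Example with made-up counts for one 30-frame group (not the paper's data):
print(recall(tp=25, fn=5), precision(tp=25, fp=3))
```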
We performed two experiments, offline and online. For the offline test, we used four different security camera videos obtained from the Internet. We captured 1000 frames, separated into groups of 30 frames. For each group, we determined the true positives, true negatives, false positives and false negatives using the HOG and LBP algorithms. The image processing was performed on a 2.4 GHz processor with 4 GB of RAM. For the online test, we recorded 7 minutes of video, i.e. 12600 frames, and followed the same procedure as in the offline test. The results are presented in Table 1 and Table 2.
Table 1 presents the recall of the detectors tested offline and online.
Table 1. Detection recall of the previously trained algorithms.
Table 2. Detection precision of the previously trained algorithms.
The HOG precision is better than that of LBP because LBP produces more false positives in both cases (offline and online). The LBP algorithm compares image textures and easily confuses other objects with persons. HOG is a descriptor based on object shape, focused principally on pedestrians [24]; thus it performs better, but at a higher computational cost. Based on these results, we chose HOG, since the computational cost can be reduced by using a processor with better specifications.
5.2. Controller response and communication
During the implementation of the robot, we found that the response time of the controller varied with the Bluetooth transmission speed; the results of this test are presented in Table 3.
Table 3. Delay in the controller’s response to different transmission speeds.
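The paper does not detail how the delay was measured. One possible procedure, assuming the robot firmware echoes each received command byte and using a hypothetical serial device name, is sketched below.

```python
# Minimal sketch: estimating the round-trip command delay at several baud rates.
# Assumes the robot firmware echoes every received command byte; device name is assumed.
import time
import serial

def mean_delay(baudrate, trials=50):
    link = serial.Serial("/dev/rfcomm0", baudrate=baudrate, timeout=1.0)
    total = 0.0
    for _ in range(trials):
        t0 = time.perf_counter()
        link.write(b"S")          # send a command byte
        link.read(1)              # wait for the echo from the robot
        total += time.perf_counter() - t0
    link.close()
    return total / trials

for rate in (9600, 38400, 57600, 115200):
    print(rate, mean_delay(rate))
```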
[17] L. Zhang, B. Wu, and R. Nevatia, “Pedestrian detection in infrared images based on local shape features,” 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 0-7, 2007.
[18] D. M. Gavrila, “Pedestrian detection from a moving vehicle,” European Conference on Computer Vision (ECCV), pp. 37-49, 2000.
[19] M. Enzweiler and D. M. Gavrila, “Monocular pedestrian detection: Survey and experiments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179-2195, 2009.
[20] O. H. Jafari, D. Mitzel, and B. Leibe, “Real-time RGB-D based people detection and tracking for mobile robots and head-worn cameras,” IEEE International Conference on Robotics and Automation (ICRA), pp. 5636-5643, 2014.
[21] M. Kobilarov, G. Sukhatme, J. Hyams, and P. Batavia, “People tracking and following with mobile robot using an omnidirectional camera and a laser,” IEEE International Conference on Robotics and Automation (ICRA), pp. 557-562, 2006.
[22] R. Benenson, M. Omran, J. Hosang, and B. Schiele, “Ten years of pedestrian detection, what have we learned?,” Computer Vision - ECCV 2014 Workshops, pp. 613-627, 2014.
[23] P. Viola and M. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
[24] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 1, pp. 886-893, 2005.
[25] Q. Zhu, S. Avidan, M. C. Yeh, and K. T. Cheng, “Fast human detection using a cascade of histograms of oriented gradients,” IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1491-1498, 2006.