Abstract:
This project investigates general robot behavior, control, and multi-sensor fusion techniques using low-cost infra-red (IR) sensors, sonar (ultrasonic) sensors, optical encoders, and a general-purpose web camera, specifically the CMUcam CMOS camera. The literature survey shows that current high-end research applies very expensive and sophisticated sensory devices, such as stereo-vision sensors, laser scanners, and high-resolution CCD cameras, along with embedded high-speed digital signal processor (DSP) systems. Owing to the technical and financial constraints of this research, and to keep the study within its intended depth, such complex and highly expensive components were excluded and are not discussed further. This project is particularly based on sensor fusion with image-processing techniques. The objective, as well as the motivation, is to build a low-cost, resource-efficient, reliable sensory system for a robot. Relying on only one sensor, especially a time-of-flight sensor (sonar), is likely to cause problems: sonar sensors are limited in resolution, range, and the size of the objects they can detect; a sonar reading may not correspond to the actual distance of the object; and sonar suffers from cross-talk, foreshortening, and specular reflection. Sensors can be complementary or redundant, so an appropriate selection of sensors is necessary when building the sensor suite for a mobile robot. For example, infra-red (IR) sensors provide less accurate range measurements than ultrasonic sensors, but they can take a large number of measurements in a short time and can easily be mounted on a scanner to provide a panoramic view, while sonar sensors are excellent for mobile robots, especially when navigating a room filled with obstacles. In many cases, multiple sensor sources are better than a single sensor reading.
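As a minimal sketch of why multiple sensor sources can beat a single reading (the function name and noise figures are illustrative assumptions, not taken from the thesis), inverse-variance weighting fuses a sonar and an IR range estimate so that the less noisy sensor dominates:

```python
def fuse_ranges(sonar_cm, sonar_var, ir_cm, ir_var):
    """Inverse-variance fusion of two independent range estimates.

    Each sensor is weighted by the inverse of its noise variance, so
    the more certain sensor dominates; the fused variance is never
    larger than either input variance.
    """
    w_sonar = 1.0 / sonar_var
    w_ir = 1.0 / ir_var
    fused = (w_sonar * sonar_cm + w_ir * ir_cm) / (w_sonar + w_ir)
    fused_var = 1.0 / (w_sonar + w_ir)
    return fused, fused_var

# Sonar is assumed more accurate (lower variance); IR is noisier but fast.
fused, var = fuse_ranges(sonar_cm=100.0, sonar_var=4.0, ir_cm=110.0, ir_var=16.0)
print(fused, var)  # fused estimate lies between the readings, nearer the sonar
```

The same rule extends to any number of redundant sensors, which is one way redundancy reduces uncertainty rather than merely duplicating data.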
This led to the development of a sensor-system architecture based on sensor-fusion techniques. Using more than one sensor makes the sensory system more reliable and robust. Typically, fusion architectures are categorized into two levels: low-level fusion and high-level fusion. In this project, the architecture is developed around action-oriented sensor fusion, on the premise that multiple sensor readings can be fused to give rise to particular behaviors of the mobile robot. Filtering techniques are also employed to reduce uncertainty in the line-segment representation and in data/image fusion. One issue in designing a vision system for a robot is the need to capture and store an entire image before image analysis can begin; to overcome such issues, the system was designed to extract visual information from the environment in real time using an affordable, off-the-shelf CMUcam CMOS color camera and an embedded controller. The final implementation and results were obtained using SimRobot simulations, and a real low-cost mobile platform was developed to verify the trialed simulations and implemented behaviors. It is discussed that in complex situations, such as emergency behavioral decision-making, the robot deviates significantly from the expected behavior; real-time vision and image processing proved demanding on the low-cost camera used, and the uncertainty and noise in the sensor inputs produced unexpected fusion results.
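A hedged sketch of the action-oriented idea described above (the behavior names, thresholds, and the minimum-of-readings rule are hypothetical illustrations, not the thesis's actual rules): fused readings from several sensors are mapped directly to a behavior, rather than first being integrated into a full world model:

```python
def select_behavior(sonar_cm, ir_cm, camera_sees_target):
    """Action-oriented fusion: map fused sensor evidence straight to a
    behavior instead of building a world model first.

    Taking the minimum of the two range readings is a conservative
    fusion rule: either sensor reporting a nearby obstacle is enough
    to trigger avoidance.
    """
    nearest = min(sonar_cm, ir_cm)   # conservative fused range estimate
    if nearest < 20.0:               # hypothetical emergency threshold
        return "emergency_stop"
    if nearest < 50.0:               # hypothetical avoidance threshold
        return "avoid_obstacle"
    if camera_sees_target:           # vision cue from the camera stage
        return "approach_target"
    return "wander"

# IR alone reports a very close obstacle, so the robot stops even
# though the sonar reading looks safe.
print(select_behavior(sonar_cm=100.0, ir_cm=15.0, camera_sees_target=True))
```

Fixed-priority rules like these (safety first, goal-seeking last) are one common way such behavior-based systems arbitrate between conflicting sensor evidence.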
Citation:
Pallegedara, A. (2006). Sensor fusion model for low cost mobile robot platform [Master's thesis, University of Moratuwa]. Institutional Repository University of Moratuwa. http://dl.lib.mrt.ac.lk/handle/123/10304