Abstract
Cleaning is the most common household task we perform in our everyday life. Tidying up the room of kids aged 2 to 5 and vacuum cleaning the house are challenging and tiring tasks that most people face in their day-to-day lives. This report presents our final year project, an intelligent floor cleaning robot that can perform object arrangement and tidy-up tasks in a kids' room and efficient cleaning tasks in a living room. To identify small objects, a Convolutional Neural Network (CNN) based image classification model is implemented. In the kids' room, objects are categorized as Toys, Waste and Other, and the identified objects are pushed to pre-assigned regions. In the living room, small objects are pushed into the area that has already been cleaned. User commands, Lidar sensor data and camera data are used in decision making. Using the Robot Operating System (ROS) and its GMapping package, the cleaning environment is mapped, and Lidar-based SLAM algorithms are used for navigation and localization. The entire simulation is carried out in the Webots robotic simulation software. The proposed method achieves good classification accuracy and a good Cohen's kappa value, showing a promising improvement in cleaning effectiveness. The S-pattern navigation approach we used is also validated and compared with a random navigation approach.
Acknowledgement
The successful completion of the final year undergraduate project would not have been an easy task without the kind support of many parties. So, we would like to express our sincere gratitude to all those who provided guidance and support to achieve the goal of the project. The most valuable party behind the success of this project is our supervisor, Prof. A. G. B. P. Jayasekara. Without his encouragement and dedication, we would not have achieved successful completion. So, we would like to convey our heartiest gratitude, especially to our final year project supervisor, Prof. A. G. B. P. Jayasekara, for his guidance and for following up with us throughout these two semesters. Next, we would like to express our sincere appreciation to the former Head of the Department of Electrical Engineering, Prof. Sisil Kumarawadu, and the Head of the Department of Electrical Engineering, Prof. K.T.M.U. Hemapala, for arranging the university facilities even under the Covid-19 pandemic as well as the country's economic crisis. Also, we would like to convey our gratitude to the Project Coordinator, Dr W.D. Prasad, for coordinating the project schedule with the rest of the academic work and for arranging the required permissions to use the laboratories and the university facilities; this was a great help in implementing the project as a group. So, we give our sincere thanks to all those who dedicated their time and concern to us. Also, we would like to express our gratitude to the rest of the academic staff of the Department of Electrical Engineering for providing guidance, feedback, and suggestions during the project and progress reviews. The support given by the non-academic staff of the Department of Electrical Engineering is also highly appreciated. In particular, the staff of the Robotics Laboratory was a great help. Finally, we would like to thank all our colleagues for their support during the project, as well as our parents, families, friends and others who supported us in various ways to complete the project successfully.
Introduction
Cleaning is the most common household task performed in our day-to-day life. Since the pandemic forced people to stay at home most of the time, the burden of cleaning increased. Even with people staying at home, the work-from-home culture and taking care of kids make cleaning and tidying-up tasks more difficult and tiring. Because of this, cleaning robots have become more important than ever.
We have identified that homes with kids aged 2 to 5 and homes that raise pets indoors face the problem of the floor getting messy and cluttered frequently. When a cleaning robot starts cleaning, it encounters many small obstacles such as toys, waste and other lightweight objects. Most cleaning robots avoid these objects and clean only the remaining area, so large portions of the floor are left uncleaned. Others drive over some of the small objects and get tangled with them, and the user has to come and rescue the robot. This results in poor cleaning performance.
Objectives
The objectives of this project are to improve the cleaning performance and clean efficiently in the living room, and to tidy up the playroom of kids in the 2 to 5 age group. For that, we have introduced a novel design shape for our cleaning robot that supports pushing small objects.
Primary Objectives
- Develop an algorithm to map the environment effectively and classify small objects in the kids' room as Toys, Waste or Other, and in the living room as Movable or Not Movable.
- Plan the cleaning paths and navigate by considering the above intelligent algorithm.
- Modelling and simulation of the cleaning robot to tidy up the kids' playroom and clean efficiently in the living room.
Objective 1: Develop an algorithm to map the environment effectively and classify small objects in the kids' room as Toys, Waste or Other, and in the living room as Movable or Not Movable
Some studies use image classification and machine vision technologies to identify objects in the environment.
Study [1] presents a debris detection and classification scheme for an autonomous floor cleaning robot using a deep Convolutional Neural Network (CNN) and Support Vector Machine (SVM) cascaded system. It classifies solid and liquid spill debris on the floor from the captured images. As limitations, it identifies that cleaning devices cannot avoid hard-to-clean liquid debris regions and cannot predetermine the amount of cleaning effort required for small-sized liquid debris. To address these shortcomings, it combines a CNN with an SVM model.
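A minimal sketch of such a cascaded pipeline is given below. It assumes a pretrained MobileNetV2 backbone as the feature extractor and scikit-learn's SVC as the SVM stage; the backbone choice and the helper names (`extract_features`, `train_cascade`) are our own illustration, not the exact architecture used in [1].

```python
# Hypothetical CNN -> SVM cascade: a pretrained CNN extracts image features,
# and an SVM classifies them into debris classes (e.g. solid vs. liquid spill).
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from PIL import Image

# Pretrained backbone used purely as a feature extractor (classifier head removed).
backbone = models.mobilenet_v2(weights="DEFAULT")
backbone.classifier = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_paths):
    """Run each image through the CNN and return a list of feature vectors."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(x).squeeze(0).numpy())
    return feats

def train_cascade(image_paths, labels):
    """Train the SVM stage on CNN features (labels: 0 = solid, 1 = liquid spill)."""
    svm = SVC(kernel="rbf", C=10.0)
    svm.fit(extract_features(image_paths), labels)
    return svm
```

Keeping the CNN as a fixed feature extractor means only the lightweight SVM stage needs retraining when new debris categories are added.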
Study [2] has implemented a clean-up robot composed of a mobile base and a manipulator. An object recognition algorithm is introduced to identify objects on a dining table and clear them using the manipulator. An active stereo camera system mounted on top of the robot identifies the objects and estimates their positions. A Convolutional Neural Network trained with transfer learning, which avoids the need for a huge dataset, is combined with a sliding-window method that extracts the object area. This method feeds the incoming images to the constructed neural network and identifies objects learned from the collected image data.
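The sliding-window idea can be sketched as follows. Here `classify_patch` stands in for the transfer-learned CNN classifier, and the window size, stride and confidence threshold are illustrative assumptions rather than values from [2].

```python
# Sliding-window object localisation sketch: the image is scanned with
# fixed-size windows and each crop is scored by a transfer-learned classifier.

def sliding_windows(image, win=96, stride=48):
    """Yield (x, y, crop) for every window position on an HxWx3 image array."""
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield x, y, image[y:y + win, x:x + win]

def detect_objects(image, classify_patch, threshold=0.8, win=96):
    """classify_patch(crop) -> (label, confidence); keep confident detections."""
    detections = []
    for x, y, crop in sliding_windows(image, win=win):
        label, conf = classify_patch(crop)
        if label != "background" and conf >= threshold:
            detections.append({"label": label, "conf": conf, "box": (x, y, win, win)})
    return detections
```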
As seen in approach [3], some studies focus on minimizing the memory size and implementation cost of AI models. In this approach, indoor objects are distinguished between cleanable litter and non-cleanable hazardous objects. The method addresses a shortcoming of existing AI-based cleaning robots, which require far more memory space than a typical robot vacuum can provide, and focuses on a good balance between classification accuracy and memory usage. The proposed method implements a lightweight and efficient CNN model called SqueezeNet. Since it is mainly used in embedded environments, it involves several model compression methods: the number of parameters used for one convolutional operation is reduced, so SqueezeNet reduces the number of trainable parameters and the computational effort of the entire network. Cleanable litter includes rice, sunflower seed shells, soybeans, cat food and dog food; non-cleanable litter includes keychains, shoes, socks and power cords.
!((a) convolution without quantization; (b) convolution with post-training weight quantization; (c) convolution with quantization-aware weight training)[/images/Convolution_with.png]
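A hedged sketch of this direction is shown below: a SqueezeNet backbone with its head replaced for a small set of litter classes, plus a toy post-training weight quantization step in the spirit of the figure above. The eight-class head and the per-tensor int8 quantization helper are our own simplifications, not the exact configuration of [3].

```python
# Lightweight classifier sketch: SqueezeNet with a small litter-class head,
# and a toy post-training weight quantization (float32 -> int8) of one layer.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 8  # assumed total of cleanable + non-cleanable litter classes

model = models.squeezenet1_1(weights="DEFAULT")
# SqueezeNet's classifier head is a 1x1 convolution; swap it for our classes.
model.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)

print("trainable parameters:",
      sum(p.numel() for p in model.parameters() if p.requires_grad))

def quantize_weights_int8(w: torch.Tensor):
    """Symmetric per-tensor post-training quantization of a weight tensor."""
    scale = w.abs().max() / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale  # inference approximates the original weights as q * scale

q_weights, scale = quantize_weights_int8(model.classifier[1].weight.data)
```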
There are also studies that build a novel benchmark dataset for object detection for sweeping robots in the domestic environment [4]. They have reviewed the development of deep-learning-based object detection in computer vision and proposed a large-scale, publicly available benchmark dataset for object detection for sweeping robots in domestic scenarios. They have also taken into consideration that the targets detected by a sweeping robot are unique, given the varying camera angles, different indoor scenarios, and object category issues. This dataset was made to validate and develop data-driven object detection methods and lays the foundation for optimizing target detection models for sweeping robots. They evaluated several state-of-the-art deep-learning-based object detection methods on the proposed dataset and then provided directions for future studies of target recognition for sweeping robots. To verify the feasibility of the model, they transplanted the trained AI model to an embedded system, which can provide a meaningful benchmark for object detection applications of sweeping robots in real environments and promote the development of lightweight CNNs.
Objective 2: Plan the cleaning paths and navigate by considering the above intelligent algorithm.
Path planning and navigation are challenging tasks for mobile robots. Navigation is the robot's ability to determine its own position in its frame of reference, while path planning is finding a path from the current position to the target position. The study by Pandey [5] states that mobile robot navigation algorithms are of three types: deterministic algorithms, nondeterministic (stochastic) algorithms and evolutionary algorithms. The task of navigation is also of two types: global navigation and local navigation. In global navigation, prior knowledge of the environment must be available, whereas in local navigation the robot can decide or control its motion and orientation autonomously.
!(Mobile robot navigation algorithms)[/images/mobile_robot_navigation_algorithms.png]
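As a concrete example of a deterministic global navigation strategy, and of the S-pattern coverage used in this project, the following minimal sketch generates a boustrophedon waypoint sequence over a rectangular free area. The room dimensions and tool width are illustrative only.

```python
# Minimal S-pattern (boustrophedon) coverage sketch: generate an ordered list
# of waypoints that sweep a rectangular area row by row, reversing direction
# on alternate rows so the robot covers the floor in an "S" shape.

def s_pattern_waypoints(width_m, length_m, tool_width_m=0.3):
    """Return [(x, y), ...] waypoints covering a width_m x length_m rectangle."""
    waypoints = []
    y = tool_width_m / 2.0
    row = 0
    while y < length_m:
        xs = [tool_width_m / 2.0, width_m - tool_width_m / 2.0]
        if row % 2 == 1:          # reverse every other row
            xs.reverse()
        waypoints.append((xs[0], y))
        waypoints.append((xs[1], y))
        y += tool_width_m         # shift by one tool width per row
        row += 1
    return waypoints

# Example: a 4 m x 3 m living room with a 30 cm cleaning width.
path = s_pattern_waypoints(4.0, 3.0)
```

In contrast, a random navigation approach would pick headings stochastically and rely on long run times to achieve coverage, which is why the two strategies are compared in this project.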
Objective 3: Modelling and simulation of the cleaning robot to tidy up the kids' playroom and clean efficiently in the living room.
Studies have been performed in the direction of human-robot interaction. In study [8], a breakfast table setting scenario is introduced where a robot acquires information from human demonstrations to arrange objects in a meaningful way. The objective of their study is for robots to obtain and combine the necessary amount of information from different sources in a meaningful way, without being remotely controlled or teleoperated. Using a dataset of experiences recorded from humans in virtual reality (VR) and its corresponding queries, the robot can extract and reason about object arrangements for a breakfast table setting. The intentions of humans are reflected in the location and orientation of objects. The main extracted object properties are location and orientation, dimensions, the time the object touched the table, and the category it belongs to. The spatial relationships are then found, and the objects are moved using robot manipulators.
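A simplified illustration of how such pairwise spatial relations could be derived from recorded object poses is given below; the relation names and the distance threshold are our own assumptions, not those used in [8].

```python
# Toy illustration of deriving pairwise spatial relations from object poses
# (positions in metres on the table plane plus object footprint sizes).

def spatial_relations(objects, near_thresh=0.15):
    """objects: {name: {"pos": (x, y), "size": (w, d)}} -> list of relations."""
    relations = []
    names = list(objects)
    for i, a in enumerate(names):
        ax, ay = objects[a]["pos"]
        for b in names[i + 1:]:
            bx, by = objects[b]["pos"]
            dist = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
            if dist < near_thresh:
                relations.append((a, "near", b))
            if ax < bx:
                relations.append((a, "left-of", b))
    return relations

table = {
    "plate": {"pos": (0.00, 0.00), "size": (0.25, 0.25)},
    "mug":   {"pos": (0.20, 0.05), "size": (0.08, 0.08)},
    "spoon": {"pos": (0.14, 0.00), "size": (0.03, 0.15)},
}
print(spatial_relations(table))
```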
In study [9], a service robot is introduced that performs automated task planning using object arrangement optimization. To construct the layout, they use positive examples and pre-extract hierarchical, spatial, and pairwise relationships between objects in order to understand the user's preference for arranging objects. Since the problem addressed is extremely broad, they assume the robot can recognize and grasp objects, so a grasping pose can be assigned to each object. The study therefore focuses on generating goal states, task sequences, and robot trajectories to arrange objects automatically. The layout computation first constructs a target layout of the objects contained in the environment. In the learning phase, the work extracts various object relationships, such as spatial, hierarchical, and pairwise relationships, from positive examples for a given environment. Using the extracted relationships and considering the robot's reachability to objects, the work optimizes object configurations into arranged configurations. Then, using the target layout, feasible robot actions are planned to arrange the objects. They also propose a priority layer that utilizes the extracted relationships between objects and enables efficient object arrangement.
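A highly simplified sketch of the layout-construction idea follows: each object is greedily assigned to the reachable candidate slot that best matches its preferred distance to an anchor object. The scoring function and the reachability test are illustrative stand-ins for the optimization described in [9].

```python
# Greedy target-layout sketch: assign each object to the reachable slot that
# best matches its preferred pairwise distance to an anchor object.

def reachable(slot, base=(0.0, 0.0), max_reach=0.8):
    """Crude reachability test: slot must lie within the arm's reach radius."""
    return ((slot[0] - base[0]) ** 2 + (slot[1] - base[1]) ** 2) ** 0.5 <= max_reach

def build_layout(objects, slots, anchor_pos, preferred_dist):
    """objects: names; slots: candidate (x, y); preferred_dist: {name: metres}."""
    layout, free_slots = {}, list(slots)
    for name in objects:
        best, best_err = None, float("inf")
        for slot in free_slots:
            if not reachable(slot):
                continue
            d = ((slot[0] - anchor_pos[0]) ** 2 + (slot[1] - anchor_pos[1]) ** 2) ** 0.5
            err = abs(d - preferred_dist[name])
            if err < best_err:
                best, best_err = slot, err
        if best is not None:
            layout[name] = best
            free_slots.remove(best)
    return layout

slots = [(0.2, 0.1), (0.4, 0.0), (0.6, 0.3), (0.3, 0.5)]
print(build_layout(["mug", "spoon"], slots, anchor_pos=(0.2, 0.1),
                   preferred_dist={"mug": 0.25, "spoon": 0.15}))
```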