The Intelligent Systems Research Group (ISRG) designs intelligent systems and deploys them in real-world applications. The target areas of ISRG are robotics, drones, control systems, teleoperation, visual servoing, and AI.
Ongoing Projects
Hornet
Hornet is a VTOL (vertical take-off and landing) winged drone project funded by the National Research Council, and it is nearing successful completion. Hornet uses four vertical thrusters in a quadrotor configuration together with a horizontal pusher propeller: the vertical thrusters are used to take off and land vertically, while the pusher propeller drives winged cruise flight during the mission. As a result, Hornet needs no runway and achieves long endurance through winged flight.
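As a rough illustration of how such a dual-propulsion layout can be managed, the sketch below shows a minimal hover-to-cruise mode switch. The mode names, airspeed thresholds and transition logic are assumptions made for illustration only, not details of the actual Hornet autopilot.

```python
from enum import Enum, auto

class FlightMode(Enum):
    HOVER = auto()    # four vertical thrusters active (take-off / landing)
    CRUISE = auto()   # pusher propeller and wings carry the aircraft

# Hypothetical thresholds: wings are assumed to generate enough lift above this airspeed.
TRANSITION_AIRSPEED = 14.0   # m/s, assumed margin above stall
HOVER_AIRSPEED = 8.0         # m/s, below this fall back to hover

def select_mode(current: FlightMode, airspeed: float, cruise_requested: bool) -> FlightMode:
    """Decide whether the VTOL should be hovering or in winged cruise."""
    if current is FlightMode.HOVER and cruise_requested and airspeed >= TRANSITION_AIRSPEED:
        return FlightMode.CRUISE
    if current is FlightMode.CRUISE and airspeed < HOVER_AIRSPEED:
        return FlightMode.HOVER   # wings can no longer sustain flight; re-engage vertical thrusters
    return current
```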
Quad²
Quad² is an assembly of four quadrotors arranged in a quadrotor configuration. This novel design provides redundancy and robustness against actuator failures while reducing electromagnetic interference (EMI) from the electronic speed controllers (ESCs) and brushless DC (BLDC) motors on the onboard electronic sensor system. Quad² is funded by the Senate Research Committee.
Drone Based Agriculture
In this project, drones are used for aerial monitoring of the greenness of paddy fields over the season, so that the farmer can be advised on the best course of action to improve the yield.
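As a rough illustration of season-long greenness monitoring, the sketch below computes an excess-green (ExG) index from an RGB aerial frame. ExG is a standard vegetation index, but its use here, and the assumption that the imagery is plain RGB, are illustrative choices rather than details of the actual project.

```python
import numpy as np

def excess_green(rgb: np.ndarray) -> float:
    """Mean excess-green index (ExG = 2g - r - b) of an RGB image, used as a simple greenness score.

    `rgb` is an H x W x 3 array with values in [0, 255]. Higher scores suggest a denser,
    greener canopy; tracking the score over the season gives a crude growth curve.
    """
    img = rgb.astype(np.float64)
    total = img.sum(axis=2) + 1e-6                       # avoid division by zero on black pixels
    r, g, b = (img[..., i] / total for i in range(3))    # chromatic coordinates
    exg = 2.0 * g - r - b
    return float(exg.mean())
```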
Vision-based Traffic Monitoring
In this project, the captured images are processed in real time and road occupancy is determined using a trained neural network. Recent experiments have produced results accurate enough to apply the method to traffic control, where the present static signal timing can be dynamically adjusted using vision-based traffic information.
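A minimal sketch of this idea is shown below, assuming a trained network exported as a TorchScript file that regresses road occupancy from a camera frame and a simple linear mapping from occupancy to green time. The model file, its architecture, and the timing bounds are hypothetical, not the project's actual pipeline.

```python
import torch
import torchvision.transforms as T
from PIL import Image

# Hypothetical: a trained network that regresses road occupancy (0..1) from a camera frame.
model = torch.jit.load("occupancy_net.pt")   # assumed TorchScript export of the trained model
model.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def occupancy_from_frame(path: str) -> float:
    """Run the trained network on one camera frame and return the estimated road occupancy."""
    frame = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return float(model(frame).squeeze())

def green_time(occupancy: float, t_min: float = 10.0, t_max: float = 60.0) -> float:
    """Map occupancy linearly to a green-light duration (seconds), replacing a static timing."""
    return t_min + (t_max - t_min) * max(0.0, min(1.0, occupancy))
```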
Solar-powered Autonomous Surface Vessel
A project for aquatic surveillance and monitoring has been initiated. This work is expected to address several key issues, including the increasing number of illegal fishing vessels approaching Sri Lankan territory and the rise in illegal activities on local water bodies, such as sand mining, garbage dumping and toxic material disposal.
Disaster Response
Several disaster response robotic platforms have been developed in collaboration with the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia.
Two semi-autonomous mobile robot platforms for disaster response mapping, localization and victim search have been completed as final year projects. One platform has a leg-wheel hybrid locomotion mechanism for tackling challenging terrain; its locomotion is reconfigured autonomously and dynamically once the terrain ahead has been identified. The second platform is a quadruped robot that can adapt to a variety of challenging terrains; it features an improved direct-drive leg design that adapts to three main terrain variants by changing the leg structure.
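The reconfiguration idea can be summarised as a mapping from a terrain class, produced by the onboard terrain identification, to a locomotion configuration. The class names and configurations below are illustrative assumptions, not the platforms' actual terrain taxonomy.

```python
# Illustrative terrain classes and the locomotion configuration chosen for each.
# The real platforms identify the forward terrain and reconfigure accordingly;
# these labels and mappings are assumptions made for this sketch.
LOCOMOTION_CONFIG = {
    "flat":    "wheeled",   # leg-wheel hybrid rolls on wheels for efficiency
    "rubble":  "legged",    # switch to a legged gait to step over debris
    "incline": "hybrid",    # legs assist wheels on steep or loose slopes
}

def reconfigure(terrain_class: str) -> str:
    """Pick a locomotion configuration once the forward terrain has been classified."""
    return LOCOMOTION_CONFIG.get(terrain_class, "legged")   # conservative default
```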
Our latest project in this domain deals with a collaborative ground-aerial multi-robot system for disaster response missions. The ground robot is a hexapod legged robot, whereas the aerial robot is a quadcopter with a custom hardware add-on. The teleoperated quadcopter surveys the area to map it and identify targets, and transmits a visual feed to a ground control station. The autonomous hexapod then inspects and interacts with the area, and can reach a victim to deliver a suitable medipack while providing a teleconference link with the ground control station.
Self-Driving Car
We have partnered with Creative Software (Pvt) Ltd. to embark on a research project focusing on driverless car technology. The main objectives of the project are to strengthen local R&D capacity in Intelligent Systems, Computer Vision and Machine Learning, and to forge even closer ties between industry and academia.
In this self-driving car project, we focus on three main components, namely state estimation & localization, perception, and motion planning, which must work hand in hand to achieve the desired autonomy. In state estimation & localization, the vehicle needs to be localized on a given map. Perception deals with detecting and identifying road signs as well as tracking dynamic objects such as other vehicles and pedestrians. Motion planning enables the vehicle to move according to a mission plan while adhering to traffic rules and avoiding obstacles.
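The interaction between these three components can be pictured as a simple per-cycle pipeline. The class and method names below are a hypothetical sketch of that structure, not the project's actual software architecture.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float        # position in an earth-fixed map frame (m)
    y: float
    yaw: float      # heading (rad)
    speed: float    # forward speed (m/s)

def drive_cycle(localizer, perception, planner, controller, sensors) -> None:
    """One control cycle: estimate the state, perceive the surroundings, plan, then actuate."""
    state: VehicleState = localizer.update(sensors.imu(), sensors.gnss(), sensors.camera())
    signs, objects = perception.process(sensors.camera(), sensors.lidar())  # road signs + tracked dynamic objects
    trajectory = planner.plan(state, signs, objects)                        # obeys traffic rules, avoids obstacles
    controller.follow(trajectory, state)                                    # issue steering / throttle commands
```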
In this multi-project proposal, we have the following overall targets:
- A state estimation mechanism for self-driving that provides uninterrupted estimates of the vehicle's position, velocity and orientation with respect to an earth-fixed coordinate frame.
- Detection and identification of regulatory elements such as traffic signs, speed limits and traffic lights without a prior HD map.
- Detection, tracking and trajectory prediction of dynamic objects such as vehicles, pedestrians and cyclists based on multi-sensor fusion.
- The car should handle the following maneuvers (a simple baseline for the adaptive cruise control case is sketched after this list): lane keeping and keeping a safe distance from the vehicle in front (adaptive cruise control); overtaking; auto parking (parallel and perpendicular); giving way to pedestrians/cyclists on, or entering, a crossing; entering a road while giving way to vehicles; and self-driving in a dynamic-obstacle-free environment such as a racing track.
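For the adaptive cruise control target, a common baseline is a constant time-gap policy. The sketch below shows that idea with illustrative gains and gap parameters; it is not the controller the project will necessarily use.

```python
def acc_acceleration(ego_speed: float, gap: float, lead_speed: float,
                     time_gap: float = 1.8, min_gap: float = 5.0,
                     k_gap: float = 0.4, k_speed: float = 0.8) -> float:
    """Constant time-gap adaptive cruise control (illustrative gains).

    ego_speed and lead_speed are in m/s; gap is the measured distance to the lead vehicle in m.
    Returns a commanded longitudinal acceleration (m/s^2).
    """
    desired_gap = min_gap + time_gap * ego_speed          # keep a speed-dependent safe distance
    gap_error = gap - desired_gap
    speed_error = lead_speed - ego_speed
    return k_gap * gap_error + k_speed * speed_error      # accelerate if the gap opens, brake if it closes
```

In practice the commanded acceleration would be saturated to comfort and braking limits before being passed to the low-level controller.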
Our Research Members
- Dr. Peshala Jayasekara, Senior Lecturer
- Dr. Jayathu Samarawickrama, Senior Lecturer
- Prof. Rohan Munasinghe, Professor
- Dr. Ranga Rodrigo, Senior Lecturer