Humans are social beings, so communication with one another happens everywhere in the world, in many languages and through many devices. What about the mute community? Without the help of an interpreter who knows sign language, or a device to do the translation, they cannot present their ideas to people who can speak. A considerable number of people worldwide reportedly suffer from speech disorders such as muteness, apraxia (childhood or acquired) and aphasia. These may occur due to brain damage, a stroke, a tumor or any other illness affecting the brain, vocal cords, mouth or tongue. Since most of us do not understand sign language, they need a means of translation to hold a normal conversation with us.

Our team came up with a design idea for a device that offers real-time sign language translation, supporting efficient communication between the mute and the rest of the community that does not understand sign language. We participated in InnovateFPGA 2019 to bring this idea to life and take our first steps in introducing it to the world. InnovateFPGA is a global FPGA design contest where teams from around the world compete to invent the future of Artificial Intelligence with Terasic and Intel. The competition is open to everyone, including students, professors, makers and industry.

We feature-engineered Electromyography (EMG) signals and signals from an Inertial Measurement Unit (IMU), obtained from a MYO armband (a wearable armband consisting of EMG pods and an IMU), and trained a neural network to classify five sign language gestures. Once the model was built, we implemented this neural network on a DE10-Nano Field Programmable Gate Array (FPGA) board, sent to us by Terasic, to complete our project for the competition.
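To illustrate the kind of pipeline described above, here is a minimal sketch of extracting common time-domain EMG features (mean absolute value and root mean square per channel) and training a small classifier over five gesture classes. This is not our actual model or data: the synthetic signals, the amplitude profiles, and the simple softmax classifier are placeholder assumptions standing in for real MYO armband recordings and the trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def emg_features(window):
    """Per-channel time-domain EMG features: mean absolute value (MAV)
    and root mean square (RMS). window: (samples, channels) raw EMG."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])

# Synthetic stand-in data: 5 gesture classes, 8 EMG channels (the MYO
# armband has 8 EMG pods), 200-sample windows. Real data would come
# from the armband's Bluetooth stream.
n_classes, n_channels, n_windows = 5, 8, 300
X, y = [], []
for label in range(n_classes):
    # give each class a distinct (assumed) per-channel amplitude profile
    amp = 0.5 + np.roll(np.linspace(0.2, 2.0, n_channels), label)
    for _ in range(n_windows):
        window = rng.normal(0.0, amp, size=(200, n_channels))
        X.append(emg_features(window))
        y.append(label)
X = np.array(X); y = np.array(y)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize features

# Tiny softmax classifier trained with batch gradient descent,
# standing in for the neural network used in the project.
W = np.zeros((X.shape[1], n_classes)); b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y]
lr = 0.1
for _ in range(1000):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
    grad = (p - onehot) / len(X)               # cross-entropy gradient
    W -= lr * (X.T @ grad)
    b -= lr * grad.sum(axis=0)

acc = float(np.mean((X @ W + b).argmax(axis=1) == y))
```

In practice, features like these would be computed over sliding windows of the live EMG/IMU stream, and the trained weights would be deployed to the FPGA for inference.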
We interfaced the FPGA board with an Arduino over a Bluetooth connection, since the final translation output was to be delivered as speech through a speaker. As second-year undergraduates this was rather challenging, but we secured the Iron Award at the Asia Pacific and Japan regional final of the InnovateFPGA competition, out of 30 teams. It was a great experience to represent our country and bring recognition to our university and nation. We are extremely thankful to the academic and non-academic staff of our department for their advice and immense support; we would not have come this far in the competition without them. After this successful phase, we hope to further improve our recognition algorithm and output full sentences using Natural Language Processing. We believe our findings will greatly benefit this community in the future.
Project Members: Ramith Hettiarachchi (ENTC), Kithmini Herath (ENTC), Hasindu Kariyawasam (ENTC)