Emotions are powerful. They add color to our lives and influence what we do. Yet despite the advanced hardware and software that surround us, these devices are oblivious to how we feel and cannot respond naturally. The technology we interact with every day could leverage this emotional layer to improve the user experience, since any human conversation is better understood when the underlying emotions are recognized. Our interactions with current IoT devices such as Google Assistant or Apple's Siri are emotion-agnostic: the devices perform specific tasks based on the user's instructions but do not perceive how the user feels.
However, imagine a voice assistant that can suggest a song based on the emotion it detects. On a more serious note, the same technology could help somebody suffering from depression by giving their loved ones daily updates on their mood and how they are feeling. That is the potential of emotion-aware technology. This project's idea is to bridge sentiment and computing, so that a smart device's responses become more meaningful because it senses and analyzes the user's emotions before performing its tasks.
This is an interesting IoT project idea for enthusiasts who want to combine technologies like artificial intelligence, IoT, and cloud computing. The project's components include a Wi-Fi connection, a Raspberry Pi (RPi) with a microphone and camera module, and several services provided by AWS. The RPi is configured to work as a voice assistant using Python, capturing the user's voice and image while the user interacts with it. Python audio-processing libraries and OpenCV are used to detect facial expressions, making the system more efficient.
Two neural network models are built using supervised learning and deployed on AWS in the background; various publicly available datasets can be used to train these deep learning models. The models predict the user's emotion and report it back to the Raspberry Pi, which responds appropriately based on how the user is feeling. A smart-notification option is triggered whenever the voice assistant senses extreme feelings such as anger or sadness, alerting a loved one. This notification system can be developed with a programming tool like Node-RED or a service like AWS SNS.
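The smart-notification trigger can be sketched as a small decision function around an SNS publish call. Everything here is illustrative: the topic ARN, the emotion labels, and the confidence threshold are hypothetical placeholders, and the real project would wire this to the deployed models' output.

```python
# Sketch of the smart-notification step using AWS SNS via boto3.
# The topic ARN, labels, and threshold are assumed values for illustration.
EXTREME_EMOTIONS = {"anger", "sadness"}  # hypothetical model output labels
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:emotion-alerts"  # placeholder

def notify_if_extreme(emotion, confidence, sns_client=None, threshold=0.8):
    """Publish an SNS alert when an extreme emotion is predicted confidently.

    Returns True if a notification was sent, False otherwise.
    """
    if emotion not in EXTREME_EMOTIONS or confidence < threshold:
        return False
    if sns_client is None:
        import boto3  # deferred so the logic is testable without AWS
        sns_client = boto3.client("sns")
    sns_client.publish(
        TopicArn=TOPIC_ARN,
        Subject="Emotion alert",
        Message=f"Detected {emotion} (confidence {confidence:.0%}).",
    )
    return True
```

Gating on both the label and a confidence threshold keeps the assistant from spamming a loved one over a momentary misclassification.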
Figure 1: Project Flow
Incorporating sensors can further extend the system's functionality to capture and monitor human vitals such as heart rate, pulse, and skin temperature. This physiological data provides more real-time input to the prediction models and thus enables more precise emotion sensing. The project's applications are vast, spanning industries such as healthcare, national security, education, gaming, and multimedia. In 2018, Spotify, the music streaming company, filed a patent describing a voice assistant that can read your feelings and recommend songs; Amazon and Google have likewise filed patents on emotion-sensing technology.
As for the future, consent and privacy play an important role in this type of technology. Although it might seem intimidating to build devices that sense your emotions, it is important that people actively consent to and use emotion-aware technology: it has many applications and can transform the way we live by making interaction with smart devices more fluid.