NEON: An Artificial Human
In this era, conversational AI is in full swing. Many successful businesses have invested in speech-based assistants and chatbots to automate communication and create personalized customer experiences at scale. Messaging and speech-based platforms are rapidly displacing traditional web and mobile apps to become the new medium for interactive conversations. The real power of conversational AI lies in its ability to carry out highly personalized interactions. But facilitating stronger interaction and greater engagement is still quite a difficult task for speech-based agents and chatbots like Google Assistant, Siri, etc.
To drive better engagement and improve the quality of conversation, Samsung decided to develop an artificial human. Sounds like something you would only hear in sci-fi movies, right? Well, not anymore. Samsung is working towards the development of artificial humans with its much-talked-about Project NEON. The company put an end to the mystery and took center stage at CES 2020 in Las Vegas to explain what NEON exactly is.
What is NEON?
NEONs are computationally generated virtual beings: they are digital, they live and exist in the virtual realm, and they have no physical embodiment, yet they can be like us. They have their own expressions, smiles, personalities, and so on. The company made it clear that NEON is not an upgrade of a voice-based assistant like Bixby that you might have been waiting for. It won't answer your calls, play a song, or tell you what the weather is like outside. Instead, the company described NEON as "a computationally created virtual being", essentially a virtual human that, after training, can display intelligence and emotions. It even looks and behaves much like a real human, as you can see in the image.
Technologies used for NEON:
NEONs are based on a platform called Core R3, which stands for Reality, Real-time, and Responsiveness. Reality means creating a 100 percent lifelike human being that exists in the virtual world. Real-time means that a computer, or an algorithm in the backend, generates every single frame of what is happening at every single instant. We often see very realistic characters in movies or in games, but those are created by hundreds of digital artists and can take a couple of years to render; NEONs, by contrast, are rendered in real-time. The last R stands for Responsiveness, meaning these beings are designed to respond to humans and interact with them, creating movements and facial expressions in real-time.
The other proprietary technology used is called Spectra. It is a new technology platform responsible for NEON's intelligence, learning, memory, and emotions, and it is what makes a NEON feel immersive. Together, Core R3 and Spectra combine to let NEONs rapidly adapt to human behavior.
What is their use?
NEON has many potential applications. One day, a NEON might serve as a financial advisor, a bank teller, a check-in assistant at the airport, or the person taking your order at a coffee shop. Domain-specific organizations, such as banks or other financial institutions, could personalize a NEON with the knowledge of their field so it can serve customers intelligently.
The developers still have to integrate the Spectra system into NEON, so it remains fairly rigid for now. They mentioned it will be an open platform for all, and that they still have plenty of work ahead to meet everyone's expectations. NEON is still in its early stages, but it is already quite impressive.