Mobius Labs unveils breakthrough multimodal AI video search
At IBC 2023 on booth 5.G04, Mobius Labs, a developer of next-generation AI-powered metadata technology, will unveil its latest Multimodal AI technology that represents a breakthrough in how organisations can “chat” with their content libraries in the same way the world has learned to chat with large language model (LLM) systems such as OpenAI’s ChatGPT and Google’s Bard. The difference with Mobius Labs is that this system has been designed specifically for the Media & Entertainment industry, and it is efficient enough to be hosted locally, in the cloud, or both.
When humans look at a piece of video, they use their vision, hearing and language capabilities to understand the content. Mobius Labs has trained foundation models based on computer vision, audio recognition and LLMs to interpret media in the same way.
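To illustrate the idea of combining what the AI sees, hears, and reads, the sketch below scores a clip against a search query by blending per-modality similarities. This is a minimal, purely illustrative example, not Mobius Labs' actual models or API; the embeddings, weights, and function names are all invented for the illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def multimodal_score(query_vec, clip_vecs, weights):
    # clip_vecs holds one embedding per modality for a single clip,
    # e.g. {"vision": [...], "audio": [...], "text": [...]}.
    # The relevance score is a weighted sum of per-modality
    # similarities to the query embedding.
    return sum(weights[m] * cosine(query_vec, v) for m, v in clip_vecs.items())

# Toy example: one natural-language query embedding vs. one clip.
query = [0.9, 0.1, 0.3]
clip = {
    "vision": [0.8, 0.2, 0.3],
    "audio":  [0.1, 0.9, 0.4],
    "text":   [0.9, 0.1, 0.2],
}
weights = {"vision": 0.5, "audio": 0.2, "text": 0.3}
score = multimodal_score(query, clip, weights)
```

In a real system the vectors would come from trained vision, audio, and language encoders, and ranking clips by this kind of blended score is what lets a single query draw on all three modalities at once.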
“Imagine having a private conversation with your content library about what is happening in a scene or episode using natural language prompts,” said Appu Shaji, CEO and Chief Scientist at Mobius Labs. “Multimodal AI technology lets us combine what the AI sees, hears, and reads to create a more nuanced understanding of what is happening within the content. Once AI can summarise and understand what the content is, things like search and recommendation become infinitely more powerful.”
As an extension of Mobius Labs’ Visual DNA, the company’s AI-based metadata tagging solution, this new technology breaks new ground in how content can be described and indexed without any human involvement. In the past few years, AI solutions have begun to address search and recommendation challenges, but they required extensive development, customisation, and engineering effort. With the new multimodal solutions, the technology works ‘out of the box’ across a wide range of use cases.
Furthermore, to ensure that customers maintain full ownership of their data, these solutions offer a headless SDK that adheres to the principle of ‘bringing the code to the data, rather than bringing the data to the code’. This approach not only reduces expensive network traffic but also builds in privacy by design.
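The ‘code to the data’ pattern can be sketched as follows. This is an assumed illustration of the principle, not the Mobius Labs SDK itself: `index_locally` and `tagger` are hypothetical names, and the point is simply that inference runs where the media lives, so only compact metadata ever leaves the machine.

```python
def index_locally(media_paths, tagger):
    # The model (tagger) runs on the machine that holds the media;
    # raw video/audio bytes never cross the network.
    index = {}
    for path in media_paths:
        tags = tagger(path)   # local inference on the full media file
        index[path] = tags    # only a small text payload is kept
    return index              # safe to sync to a central catalogue

# Toy usage with a stand-in tagger that would normally wrap a model.
def demo_tagger(path):
    return ["indoor", "dialogue"]

catalogue = index_locally(["/media/ep01.mp4", "/media/ep02.mp4"], demo_tagger)
```

Because only the lightweight index is shared, the heavy data stays put, which is where both the privacy benefit and the network-cost saving come from.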
“Our R&D team are true pioneers as they bring the power of multimodal AI to the M&E industry,” said Jeremy Deaner, COO, Mobius Labs. As data volumes continue to grow exponentially, a key design tenet for the team is keeping the code efficient enough to drive the marginal cost of running Mobius Labs’ AI solution to near zero. “We have some fundamental R&D within the company which is able to make our model smaller and in some cases as much as 20 times more efficient than the competition,” said Shaji.
By enabling this new level of data awareness, Mobius Labs empowers media companies to go beyond content search and offer their customers a new level of value. Curated recommendations can be made at scale, tailored to individual tastes and preferences, leading to significantly higher customer engagement, all without a large data science team to build and iterate the algorithms. Some of the media industry’s greatest successes, such as Netflix’s and TikTok’s recommendation algorithms, show the value of this capability, and Mobius Labs’ solution can be easily integrated into existing systems to provide comparable recommendation power.
Visitors to IBC Booth 5.G04 can get a demo of this breakthrough technology and see first-hand what it’s like to chat privately with your content library.
For more information visit: https://www.mobiuslabs.com