Meta releases AI model to enhance Metaverse experience
Meta said on Thursday it was releasing an artificial intelligence model called Meta Motivo, which can control the movements of a human-like digital agent, with the potential to enhance the Metaverse experience.
The company has been pouring tens of billions of dollars into its investments in AI, augmented reality and other Metaverse technologies, driving its capital expenditure forecast for 2024 to a record high of between $37 billion and $40 billion.
Meta has also been releasing many of its AI models for free use by developers, believing that an open approach could benefit its business by fostering the creation of better tools for its services.
“We believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences,” the company said in a statement.
Meta Motivo addresses body-control problems commonly seen in digital avatars, enabling them to perform movements in a more realistic, human-like manner, the company said.
Meta said it was also introducing a new training approach for language modeling called the Large Concept Model (LCM), which aims to “decouple reasoning from language representation.”
“The LCM is a significant departure from a typical LLM. Rather than predicting the next token, the LCM is trained to predict the next concept or high-level idea, represented by a full sentence in a multimodal and multilingual embedding space,” the company said.
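The distinction can be illustrated with a toy sketch: where a language model scores the next token, a concept-level model would score candidate sentence embeddings against the embedding of the story so far. Everything below is a hypothetical stand-in, not Meta's architecture; the hand-written 2-D vectors play the role of a real multilingual sentence encoder, and the "model" is just a nearest-neighbor lookup.

```python
# Hypothetical sketch of next-concept (sentence-level) prediction.
# The embeddings and the "model" are toy stand-ins; a real LCM would use
# a learned encoder and a trained predictor over its embedding space.
import numpy as np

# Toy sentence embeddings (a real system would get these from an encoder).
concepts = {
    "The sky darkened.": np.array([0.9, 0.1]),
    "Rain began to fall.": np.array([0.8, 0.3]),
    "People opened umbrellas.": np.array([0.7, 0.5]),
    "The stock market rose.": np.array([0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def predict_next_concept(history_sents):
    """Toy 'model': average the history embeddings to form a target
    vector, then return the unseen sentence whose embedding is closest
    to it by cosine similarity."""
    target = np.mean([concepts[s] for s in history_sents], axis=0)
    candidates = [s for s in concepts if s not in history_sents]
    return max(candidates, key=lambda s: cosine(concepts[s], target))

history = ["The sky darkened.", "Rain began to fall."]
print(predict_next_concept(history))  # -> "People opened umbrellas."
```

The point of operating at this granularity, as the quote suggests, is that the prediction target is a whole idea rather than a surface token, so the same concept sequence could in principle be rendered in any language or modality the embedding space covers.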
Other AI tools released by Meta include Video Seal, which embeds a hidden watermark in videos that is invisible to the naked eye but remains traceable.