Meta Releases New AI Research Models

Meta Platforms Inc.'s Fundamental AI Research (FAIR) team has released a set of new, state-of-the-art artificial intelligence research models intended to support a range of creative applications.
The five FAIR releases include a mixed-modal image-and-text model, a text-to-music generation model, a multi-token prediction model and an audio watermarking technique for detecting AI-generated speech.
Meta released key components of its Chameleon models, which can process and generate both text and images, under a research-only license.
Just as humans can process words and images simultaneously, Chameleon can take in and deliver both images and text at the same time. While most large language models produce unimodal output, for example turning a text prompt into an image, Chameleon can accept any combination of text and images as input and return any combination of text and images as output. Possible uses include generating creative captions for images or combining text prompts and images to create an entirely new scene.
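For illustration, here is a minimal, hypothetical Python sketch (not Chameleon's actual interface) of the early-fusion idea behind such mixed-modal models: text tokens and discrete image tokens are flattened into one interleaved sequence, so any ordering of modalities can be consumed or produced. The ImagePatch type, marker token values and character-level "tokenizer" below are stand-ins invented for the example.

```python
# Conceptual sketch only: an early-fusion model represents text and images
# as one interleaved token stream, so inputs and outputs can mix modalities.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class ImagePatch:
    token_ids: List[int]  # discrete codes from a hypothetical image tokenizer

def build_sequence(segments: List[Union[str, ImagePatch]]) -> List[int]:
    """Flatten a mixed text/image prompt into a single token sequence."""
    BOI, EOI = 50_001, 50_002  # hypothetical begin/end-of-image marker tokens
    seq: List[int] = []
    for seg in segments:
        if isinstance(seg, str):
            seq.extend(ord(c) for c in seg)  # stand-in for a real text tokenizer
        else:
            seq.extend([BOI, *seg.token_ids, EOI])
    return seq

# A prompt that mixes a caption request with an image, in either order.
prompt = ["Describe this photo:", ImagePatch(token_ids=[7, 42, 99])]
print(build_sequence(prompt))
```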
Meta is also releasing pretrained models for code completion built on multi-token prediction under a non-commercial, research-only license. Rather than predicting one word at a time, a multi-token prediction model forecasts several future words at once, which Meta says can help with tasks such as generating creative text, brainstorming ideas and answering questions.
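To make the idea concrete, the toy PyTorch sketch below shows one common way multi-token prediction can be wired up: a shared trunk produces a hidden state and several output heads each predict a different future token. This is an illustrative assumption about the general technique, not Meta's released model or training code.

```python
# Toy multi-token prediction model: one shared trunk, n_future output heads,
# where head i predicts the token i+1 positions ahead of each input position.
import torch
import torch.nn as nn

class MultiTokenPredictor(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, n_future=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(n_future)
        )

    def forward(self, token_ids):
        # (batch, seq, d_model); causal masking omitted for brevity
        h = self.trunk(self.embed(token_ids))
        # logits[i] scores the token i+1 steps ahead of each position
        return [head(h) for head in self.heads]

model = MultiTokenPredictor()
logits = model(torch.randint(0, 32000, (2, 16)))  # batch of 2, length 16
print(len(logits), logits[0].shape)               # 4 heads, each (2, 16, 32000)
```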
Another release is JASCO, a new and improved text-to-music generation model that can accept additional inputs, such as chords or beats, to give users more control over the generated music.
Meta is also releasing AudioSeal, which the tech giant claims is the first audio watermarking technique designed specifically for the localized detection of AI-generated speech, meaning it can pinpoint AI-generated segments within a longer audio clip.
Released under a commercial license, AudioSeal is “one of several lines of research we have shared to help prevent the misuse of generative AI tools,” Meta said in a press release.
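For developers who want to experiment, a rough usage sketch with the open-source audioseal Python package might look like the following; the model names and call signatures shown are assumptions based on the project's public repository and may differ in the version you install.

```python
# Sketch of embedding and detecting an AudioSeal watermark (pip install audioseal).
# Model identifiers and exact signatures are assumptions and may vary by release.
import torch
from audioseal import AudioSeal

wav = torch.randn(1, 1, 16000)  # placeholder: one second of mono audio at 16 kHz

# Embed an imperceptible watermark into the waveform.
generator = AudioSeal.load_generator("audioseal_wm_16bits")
watermark = generator.get_watermark(wav, sample_rate=16000)
watermarked = wav + watermark

# Check a clip for the watermark; detection works locally in time, which is
# what lets the method flag AI-generated portions of a longer recording.
detector = AudioSeal.load_detector("audioseal_detector_16bits")
result, message = detector.detect_watermark(watermarked, sample_rate=16000)
print(f"watermark probability: {result:.2f}")
```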
The company has also developed automatic indicators to evaluate potential geographical disparities in text-to-image generation models.