Social media behemoth Meta is shaking things up again.
This time, they're introducing two new artificial intelligence (AI) models designed to revolutionise content editing and generation.
The first model, Emu Video, is a nifty little tool that can generate video clips based on text and image inputs. Imagine being able to create a video walkthrough of a property simply by typing a description and uploading a few photos. Mind-blowing, right?
The second model, Emu Edit, is all about image manipulation. This tool promises precision editing, allowing users to remove or add backgrounds, perform colour and geometry transformations, and carry out local and global image editing.
Both models are still in the research stage, but Meta is optimistic about their potential. They see these tools as game-changers for creators, artists, and animators - and we can definitely see why.
The Emu Video model uses a 'factorised' approach to video generation. This splits the process into two steps: first generating an image from a text prompt, and then generating video conditioned on both the text and that generated image. This factorised approach lets the model produce high-quality videos at 16 frames per second.
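To make the two-step idea concrete, here's a minimal Python sketch of what a factorised pipeline looks like. The function names (generate_image, generate_video) and parameters are hypothetical placeholders for illustration, not Meta's actual API.

```python
# Conceptual sketch of a 'factorised' text-to-video pipeline.
# All names here are illustrative assumptions, not Meta's released code.

from typing import List

Image = object  # placeholder type, for illustration only


def generate_image(prompt: str) -> Image:
    """Step 1: produce a single still image conditioned on the text prompt."""
    ...  # a text-to-image model would run here


def generate_video(prompt: str, first_frame: Image, fps: int = 16) -> List[Image]:
    """Step 2: produce video frames conditioned on BOTH the text prompt
    and the still image from step 1, so the clip stays faithful to it."""
    ...  # an image-and-text-to-video model would run here


def factorised_generation(prompt: str) -> List[Image]:
    still = generate_image(prompt)        # text -> image
    return generate_video(prompt, still)  # text + image -> video
```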
The Emu Edit model, on the other hand, was trained on a dataset of 10 million synthesised examples. Each example included an input image, a description of the editing task, and the targeted output image. The model aims to alter only the pixels relevant to the edit request, so the final image stays as close to the original vision as possible.
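For illustration, here's roughly what one of those synthesised training examples might look like as a data record. The field names are our assumptions based on the description above, not Meta's published schema.

```python
# Sketch of a single training example for an instruction-based image editor.
# Field names are illustrative assumptions, not the actual dataset format.

from dataclasses import dataclass


@dataclass
class EditExample:
    input_image: bytes    # the original image
    instruction: str      # e.g. "remove the background"
    task_label: str       # which editing task this example demonstrates
    target_image: bytes   # the desired output after the edit


example = EditExample(
    input_image=b"...",   # raw image bytes (placeholder)
    instruction="make the sky look like a sunset",
    task_label="global colour transformation",
    target_image=b"...",
)
```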
These new tools are part of Meta's ongoing efforts to harness the power of AI. And while regulators are keeping a close eye on these developments, we can't help but be excited about the potential these tools have to transform the way we create and share content. So, watch this space - the future of content creation is here, and it's powered by AI!
Made with TRUST_AI - see the Charter: https://www.modelprop.co.uk/trust-ai