In Partnership with Nvidia, Meta Debuts Latest Llama AI Model to Rival OpenAI
Earlier today, Meta announced the latest version of its Llama artificial intelligence model, dubbed Llama 3.1. The new release comes in three sizes, the largest of which is Meta's biggest and most capable AI model to date. Like previous versions of Llama, the newest model remains open source, meaning developers can download and use it for free.
The new large language model, or LLM, underscores the social network's massive investment in AI as it works to keep pace with highflying startups OpenAI and Anthropic as well as tech giants like Google and Amazon.
The announcement also highlights Meta's growing partnership with Nvidia, a key supplier that provides the Facebook parent with the graphics processing units, or GPUs, used to train its AI models, including the latest version of Llama.
While companies like OpenAI aim to make money selling access to their proprietary LLMs or offering services to help clients use the technology, Meta has no plans to debut its own competing enterprise business, a Meta spokesperson said during a media briefing.
Instead, similar to when Meta released Llama 2 last summer, the company is partnering with a handful of tech companies that will offer their customers access to Llama 3.1 via their respective cloud computing platforms, as well as sell security and management tools that work with the new software. Some of Meta’s 25 Llama-related corporate partners include Amazon Web Services, Google Cloud, Microsoft Azure, Databricks and Dell.
Although Meta CEO Mark Zuckerberg has told analysts during previous corporate earnings calls that the company generates some revenue from its corporate Llama partnerships, a Meta spokesperson said that any financial benefit is merely incremental. Instead, Meta believes that by investing in Llama and related AI technologies and making them available for free via open source, it can attract high-quality talent in a competitive market and lower its overall computing infrastructure costs, among other benefits.
Meta's launch of Llama 3.1 comes just ahead of a conference on advanced computer graphics at which Zuckerberg and Nvidia CEO Jensen Huang are scheduled to speak together.
The social networking giant is one of Nvidia's biggest customers that doesn't run its own business-facing cloud, and Meta needs the latest chips to train the AI models it uses internally for ad targeting and other products. For example, Meta said the biggest version of the Llama 3.1 model announced on Tuesday was trained on 16,000 of Nvidia's H100 graphics processors. For more on this, read the full CNBC report.