In my last post, I mentioned that lots of people would be watching NVIDIA’s Q2 results closely. The reason is that NVIDIA is currently leading the hardware charge handling much of the vast computing load required by the Large Language Models (LLMs) powering today’s AI frenzy. This article sheds light on why NVIDIA is in that dominant position for now. But competitors aren’t standing still.
On the topic of LLMs, one key question that will most likely take years to play out is whether closed LLMs (like ChatGPT or Bard) will retain their early lead or whether open LLMs like Meta’s Llama 2 will surpass them. The open vs. closed debate will likely rage on, since it’s such a complex topic. Case in point: the allegedly leaked internal “We Have No Moat…” document argued that open-source LLMs will eventually win, while Google DeepMind’s own CEO disagrees with that premise.
Besides the open or closed nature of LLMs, the other key aspect that businesses are considering is fine-tuning LLMs, for a couple of reasons: 1) they want to narrow the focus of an AI tool (bigger is not always better), or 2) they want to train LLMs on their own internal datasets (think of efficient internal customer support tools as an example).
There are elements of the open vs. closed race when it comes to fine-tuning as well. That’s why OpenAI recently unveiled fine-tuning for GPT-3.5 Turbo, and why IBM recently announced that it would make a Llama 2-powered version of Watsonx available to select clients and partners.
While we’re still in the early days of the AI hype cycle, it’s clear that businesses’ appetite for implementing AI in a multitude of ways shows no sign of slowing.