Keeping Up with GenAI: What You Need to Know About the Rise of Generative AI
EDGE100 Report, 2023
Generative AI is reshaping how we create content – and so much more
Few technological developments in recent years have generated such excitement, consternation and implementation frenzy as generative artificial intelligence (GenAI). To put it simply, GenAI comprises deep learning models that can generate text, images, video, code, and other synthetic data based on training data.
ChatGPT, OpenAI’s platform based on large language models (LLMs), captured the public imagination owing to its uncanny ability to answer questions, create catchy songs and tell jokes that actually land. ChatGPT ignited a flurry of GenAI development and adoption, with a host of competing and complementary tools cropping up and corporations scrambling to find viable use cases to give them an edge. It has also generated its fair share of controversy, facing criticism over copyright, bias, cybersecurity, economic and employment effects, and existential risk, among other concerns. Nevertheless, there’s no denying the impact it has had on the GenAI landscape, which is set to continue growing exponentially.
It’s an idea with a surprisingly long history. The notion of automated machines capable of producing text or sound stretches back as far as the automatons described in the writings of ancient Greece, but was really popularized in the mid-20th century by science fiction. The field took a decisive turn in 1950, when Alan Turing published his seminal paper Computing Machinery and Intelligence, which considered the fundamental question: “Can machines think?” (A deep philosophical rabbit hole; in practice, machine “learning” refers to identifying statistically probable outputs based on inputs.) Six years later, researchers from a range of fields converged at the Dartmouth Summer Research Project on Artificial Intelligence, and AI entered the realm of academic study, igniting decades of rapid development as computers became ever more sophisticated and powerful. The first chatbot, ELIZA, was developed in the mid-60s, paving the way for countless later applications, including Clippy, the virtual assistant many fondly – or less fondly – remember from Microsoft Office in the late 90s.
The GenAI arms race
GenAI is about far more than just generating content from prompts; its ability to analyze and draw inferences from vast datasets has extraordinary implications for science, medicine, finance, economics, law, sociology and more. A growing number of startups have been accelerating the pace of GenAI development across foundation models, hardware, and end-user applications.
Foundation models
These are AI systems trained on vast sets of unlabeled data that can be applied across a range of applications, as opposed to models trained on task-specific data for a single purpose. This sort of self-supervised learning is important not only because it speeds up the development process for new GenAI applications, but also because it reduces the considerable carbon footprint of training a new AI model from scratch. In 2023, companies developing foundation models collectively raised over $21 billion in funding, although nearly 85% of this was concentrated among big players such as OpenAI and Anthropic.
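To make the reuse point concrete, here is a minimal, illustrative sketch in Python of how a developer might apply one pretrained foundation model to several different tasks through prompting alone, rather than training a separate model per task. It assumes the open-source Hugging Face transformers library and the small open GPT-2 checkpoint purely for illustration; it is a generic pattern, not a depiction of any particular vendor’s stack.

```python
# Illustrative only: one pretrained, self-supervised model reused for
# several downstream tasks by changing the prompt, not the weights.
# Assumes the `transformers` library (and a backend such as PyTorch)
# is installed; the small open `gpt2` checkpoint stands in for a
# full-scale foundation model, so its answers will be crude – the
# point is the reuse pattern, not output quality.
from transformers import pipeline

# Load a single pretrained model once.
generator = pipeline("text-generation", model="gpt2")

tasks = {
    "summarize": "Summarize in one sentence: Generative AI creates text, images and code from prompts.",
    "classify": "Label the sentiment of 'I love this product' as positive or negative. Sentiment:",
    "draft": "Write a short tagline for an AI note-taking app:",
}

# The same weights serve every task; only the instructions differ.
for name, prompt in tasks.items():
    result = generator(prompt, max_new_tokens=30, do_sample=False)
    print(f"--- {name} ---\n{result[0]['generated_text']}\n")
```

The heavy lifting – self-supervised training on web-scale data – has already been done upstream, which is exactly why reusing a foundation model is cheaper and greener than training from scratch.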
OpenAI’s GPT-3 and GPT-4 are notable examples of foundation models, but the company isn’t stopping there. After revolutionizing the notion of a chatbot, OpenAI has set the impressively rendered cat among the equally realistic digital pigeons with its announcement of Sora, a text-to-video generation model with a much-vaunted demo reel doing the rounds on social media.
Stability AI, the startup behind the popular text-to-image generator Stable Diffusion, has announced Stable Video 3D, which builds upon its existing video offering by creating multi-view 3D representations of objects, unlocking potential applications across e-commerce, gaming and creative work.
Not to be left behind, Google has introduced GenAI video capabilities to its office suite Workspace, along with Google Vlogger, a multi-modal tool for creating digital avatars from images.
Elsewhere, Anthropic has launched Claude 3, which now includes a tool to create AI-driven solutions, Mistral AI launched its multilingual text-generation model Mistral Large earlier this year, and Inflection AI released the latest version of its conversational AI tool, Inflection 2.5.
GenAI hardware
Hardware provides the infrastructure on which these AI models run, with companies devoting increasing research, development and funding to dedicated AI chips. Dedicated GenAI hardware developers such as d-Matrix and Lightmatter account for more than 40% of all GenAI funding in the infrastructure space.
NVIDIA continues its dominance with the launch of the Blackwell GPU, which can run trillion-parameter LLMs in real time at a fraction of the cost and energy consumption of its predecessor. NVIDIA GPUs also power Meta’s new GPU cluster infrastructure, which will enable the tech giant to train larger, more complex AI models.
On-device AI processing has seen significant growth, driven by incumbents like NVIDIA, which launched its lightweight RTX 500 and 1000 GPUs to power a new generation of AI PCs, and by startups like DEEPX, with its ultra-low-power AI chips, and Expedera, whose Origin neural processing units run LLMs on edge devices.
In the same vein, Qualcomm’s Snapdragon 8S Gen 3 chipset is bringing device-side AI capabilities to mid-tier Android devices.
End-user applications
Increased competition in the GenAI space has driven continuous improvements to existing end-user applications. Sound is hot right now, with OpenAI’s Read Aloud function and Character AI’s Character Voice feature giving vocal capabilities to chatbots. Pika Labs, meanwhile, has added GenAI sound effects to its video maker, removing the need for users to create sounds separately.
There’s also a shift towards more precise solutions, with Google updating its Gemini chatbot for more customized replies, and OpenAI launching the GPT Store, which aggregates a range of custom chatbots developed by OpenAI’s partners and the broader developer community.
GenAI players to watch
The GenAI market comprises a number of large established players that have attracted the lion’s share of funding thus far, along with a proliferation of startups looking to make their mark.
Big players
On the hardware front, NVIDIA’s dominance is illustrated by its 92% share of the data center GPU market in 2023. This helped its quarterly revenue mushroom 272% over the course of the year, from $4.3 billion in Q1 to a projected $16 billion in Q4.
Foundation models and platforms are dominated by OpenAI and Microsoft, which together account for more than two-thirds of the market. OpenAI is on track to make $2 billion in annual revenue just seven years after being founded, thanks no doubt to its partnership with Microsoft, whose investments in AI have yielded record revenues of $62 billion in Q2 2024. Amazon Web Services’ differentiated vendor approach has seen it clinch 8% of the market, while Google finds itself playing catch-up with 7%.
Startups
GenAI startups are competing fiercely for their piece of the pie, keeping an eye on the activities of the incumbents and finding ways to differentiate themselves.
SPEEDA Edge analysis found that the top GenAI startups are split fairly evenly across foundation models, GenAI applications, and enabling infrastructure. However, the top three – Mistral AI, Aleph Alpha, and Cohere – are all in the business of foundation models.
Founded in 2023, Mistral AI has, with the help of a team of researchers from Google DeepMind and Meta, quickly established a reputation for building industry-leading LLMs, including Mistral 7B and Mixtral 8x7B. Its Mistral Large model has been hailed for its multilingual, reasoning, math, and coding capabilities, while its Le Chat chatbot offers a conversational entry point for interacting with the company’s various models. Investment from the likes of NVIDIA, Microsoft, and Salesforce has swelled its total funding to $544 million since June 2023, which it is putting into the development and deployment of new products. Recent partnerships with Google Cloud, Amazon, Microsoft, and IBM should help propel it to further success.
Riding the sustainability wave is Aleph Alpha, which has the distinction of running its data centers on renewable energy, meaning inference jobs executed through its API incur no CO2 emissions. The company is also dedicated to maintaining data sovereignty within Europe, making its LLMs well suited to data-sensitive and highly regulated workloads. Aleph Alpha’s Luminous series of transformer LLMs supports multiple input types and handles everything from classification to creative writing. The company has partnered with Graphcore to advance the research, development, and deployment of advanced multimodal AI models, and with Innovation Park Artificial Intelligence to strengthen AI in Europe. With $500 million in Series B funding raised in Q4 2023, on top of $143 million in prior funding, the future is looking good for Aleph Alpha.
Cohere is targeting enterprise use cases with its meticulously crafted LLMs, specialized for retrieval-augmented generation (RAG) on proprietary data, which are outperforming established counterparts in this space. Cohere's Command LLMs are accessible through the Azure AI Model Catalog, thanks to a partnership with Microsoft. Moreover, Cohere has partnered with consulting firms like Accenture and McKinsey to offer customized GenAI solutions to businesses. With big hitters like NVIDIA, SAP, and Oracle throwing their weight behind it, Cohere has raised over $430 million and is reportedly in talks to raise another $500 million–$1 billion, which will bolster AI development, team expansion, and sales efforts.
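For readers unfamiliar with the RAG pattern mentioned above, the sketch below shows the basic idea in Python: retrieve the most relevant snippets from a private corpus and prepend them to the prompt sent to a text-generation model. This is a deliberately simplified, generic illustration, not Cohere’s actual API; the embed() and generate() functions are stand-ins you would replace with a real embedding model and LLM.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" so the example runs without external services.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A stand-in for an organization's proprietary documents.
corpus = [
    "Invoices are payable within 30 days of receipt.",
    "Support tickets are triaged within four business hours.",
    "Annual leave requests require two weeks' notice.",
]

def retrieve(question, k=2):
    # Rank documents by similarity to the question and keep the top k.
    q = embed(question)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def generate(prompt):
    # Placeholder for a call to a hosted LLM.
    return "[model answers using the retrieved context]\n" + prompt

question = "How quickly are support tickets handled?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```

Production systems swap the toy pieces for vector databases and hosted embedding and generation endpoints, but the retrieve-then-generate flow is the same – and it is why RAG lets a general-purpose model answer questions grounded in proprietary data without retraining.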
Looking ahead
Despite a slowdown in VC funding across almost all industries tracked by SPEEDA Edge, the success of ChatGPT spurred a major surge in funding for GenAI, which accounted for almost a quarter of all VC funds raised. That totaled $22.7 billion in 2023, a meteoric rise from the $2.4 billion raised in 2022. Foundation models led the charge, with 2022’s $1.2 billion soaring to $20.7 billion in 2023. Believe the money; the GenAI train is only speeding up. These are some of the key areas we’re watching in the near future.
Small language models (SLMs) offer a solution to the limitations of large language models (LLMs). Designed with fewer parameters and requiring less training data, SLMs are faster, more cost-effective, and easier to deploy, particularly on smaller or less powerful devices, and they can be fine-tuned for greater adaptability and customization in specific applications. Small is attracting big business, with Microsoft reportedly developing smaller and more economical GenAI models, and Google introducing the Gemma model, which can run on PCs. Although GenAI has hitherto seen greater adoption among large corporations, small and medium-sized enterprises are expected to drive market demand for SLMs.
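As a rough, back-of-the-envelope illustration of why smaller models are easier to deploy, the sketch below estimates the memory needed just to hold model weights at different numeric precisions. The parameter counts are illustrative round numbers, not vendor specifications.

```python
# Approximate memory footprint of model weights alone (ignoring activations,
# caches and runtime overhead): parameters x bytes per parameter.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

# Illustrative sizes only, not any vendor's actual parameter counts.
models = {
    "small language model (~2B params)": 2e9,
    "mid-size LLM (~70B params)": 70e9,
    "very large LLM (~1T params)": 1e12,
}

for name, params in models.items():
    footprints = ", ".join(
        f"{precision}: {params * nbytes / 1e9:,.0f} GB"
        for precision, nbytes in BYTES_PER_PARAM.items()
    )
    print(f"{name} -> {footprints}")
```

Even at aggressive 4-bit quantization, a trillion-parameter model needs hundreds of gigabytes just for its weights, while a model in the low billions of parameters fits comfortably in the memory of a laptop or phone – which is the deployment gap SLMs are designed to close.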
GenAI is expanding its reach by integrating with hardware devices. Developers are exploring ways to embed GenAI directly into hardware, with the potential to enhance digital content creation and user experiences. OpenAI has launched a ChatGPT app for Apple’s Vision Pro headset and Adobe has brought its text-to-image tool to the device, while Brilliant Labs has introduced lightweight AR glasses featuring the AI assistant Noa, and Google’s Gemini assistant may be coming to its earbuds.
Encouragingly, after initially lagging behind their proprietary counterparts, open-source models have greatly improved their quality and performance. Meta’s open-source artificial general intelligence initiative and xAI’s release of the Grok AI model’s base code on GitHub are promising signs of greater collaboration and transparency to come, offering developers the potential to create features, applications, and design interfaces faster than ever.
Keeping pace with GenAI
Considering the breakneck pace of development, it can be difficult for organizations to keep track of GenAI, let alone know which startups to invest in or which solutions to adopt. That’s where market intelligence comes in. SPEEDA Edge maps corporate strategies to their partnerships, acquisitions, and investments in emerging industries, visualizing and comparing these activities to guide strategic priorities. To do so, we aggregate a vast quantity of industry, technology and company information. What sets us apart in an age of data overload, however, is that all data entered into our platform is verified, making this the most reliable source of information for companies that need to understand their competitors, identify present and future use cases, and make accurate, informed decisions in a highly competitive environment.
SPEEDA Edge’s team of analysts is also available to assist enterprises in corralling and drawing insights from data, reminding us that, even in the age of thinking machines, the human connection can make all the difference.
Conclusion
Few could have predicted the impact that ChatGPT would have, not just on the technological landscape, but on the fabric of society. Within just a couple of years, GenAI has become intertwined with countless facets of our lives, with startups racing to develop the next killer solution, and businesses falling over themselves to figure out how to make this technology work for them.
SPEEDA Edge helps companies understand technological impacts like this and how they affect the competitive landscape. To understand the potential impact of our services on your business, contact us for a personalized demo.
For more information about the tech trends shaping strategic priorities, head over to our blog and subscribe to our newsletter.