
The Future Of AI—And Everything Else—Is Hybrid

Qualcomm recently released a white paper titled “The Future of AI is Hybrid.” In it, the company makes a clear case that for AI to develop to its maximum capabilities, it needs to be processed both in the cloud and at the edge. Processing at the edge would help with cost, energy use, reliability, latency, and privacy: all the things that make scaling and growing a technology difficult. And Qualcomm is right: for AI to reach its full potential, it needs more than one partner and more than one solution. But the greater lesson here is that the same is true for all technology moving forward.

Why hybrid technology? Why now?

When we hear the term “hybrid,” many of us think of hybrid cars, which run on both gasoline and electricity. The tech industry eventually borrowed the term for things like hybrid cloud, where companies process their data across some mix of public cloud, private cloud, and on-premises data centers. The goal of these hybrid models in technology was the same as it was with hybrid cars: to reduce energy consumption, cut costs, and improve performance.

Hybrid cars grew in popularity because they let drivers enjoy the best qualities of both gas and electric vehicles. The gas engine allows a hybrid to refuel quickly and travel longer distances between fill-ups. The electric side helps cut emissions and save money. A similar logic applies to AI. AI needs somewhere powerful and stable for model training and inference, which require enormous amounts of compute for complex workloads. That’s where the cloud comes in. At the same time, AI also needs to happen fast. For it to be useful, it needs to process closer to where the action actually happens: at the edge, on a mobile device.

Edge AI runs closer to where data is created, so it doesn’t need to move data “off-site” to a cloud or data center. Because of that, it can work faster, make decisions faster, and use less power. This matters not just for phones but for cars, cameras, health devices, and security devices that are taking on more and more advanced decision-making. After all, none of us can count on 24/7 connectivity to the internet or the cloud, as much as we’d like to.

If AI is hybrid, what does that look like?

Generative AI requires an incredibly high amount of compute. It consumes far more data and resources, and now serves more curious and demanding users, than any technology we’ve seen before. Processing all of that data at the speed users demand, in the “real-time” or “near real-time” windows they want and need, would be impossible in the cloud alone. It would also be incredibly expensive.

In its paper, Qualcomm agrees: large language models take months to train and require complex server hardware capable of processing vast amounts of data quickly. At the same time, the company says mobile devices can handle running some of the smaller language models locally. With mobile devices taking care of the smaller, easier processing at the edge, the cloud is freed up to manage the bigger, heavier work. The partnership saves time and energy and gives end users a more seamless experience. It’s a more powerful, efficient way to distribute generative AI workloads, and we’ll likely see this model continue to grow as mobile devices become capable of taking on ever more demanding work.
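To make that division of labor concrete, here is a minimal sketch, in Python, of how a hybrid system might route a request between a small on-device model and a large cloud model. Everything in it (the function names, the token budget, the connectivity check) is a hypothetical illustration of the concept, not code from Qualcomm’s paper.

```python
# Hypothetical sketch of hybrid AI routing: small jobs run on-device,
# heavy ones fall back to the cloud. All names and thresholds here are
# illustrative assumptions, not Qualcomm's actual stack.

from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    max_new_tokens: int

ON_DEVICE_TOKEN_BUDGET = 256   # assumed capacity of a small local model
NETWORK_AVAILABLE = True       # in practice, probed at runtime

def run_on_device(req: Request) -> str:
    # A small language model running locally: low latency, no data leaves the device.
    return f"[edge] answered: {req.prompt[:30]}..."

def run_in_cloud(req: Request) -> str:
    # A large model behind an API: more capability, but higher latency and cost.
    return f"[cloud] answered: {req.prompt[:30]}..."

def route(req: Request) -> str:
    # Prefer the edge when the job fits its budget or the network is down;
    # send bigger jobs to the cloud only when it is reachable.
    if req.max_new_tokens <= ON_DEVICE_TOKEN_BUDGET or not NETWORK_AVAILABLE:
        return run_on_device(req)
    return run_in_cloud(req)

print(route(Request("What's on my calendar today?", max_new_tokens=64)))
print(route(Request("Draft a 2,000-word market analysis.", max_new_tokens=3000)))
```

The design choice is the point: prefer the edge whenever the job fits its budget or the network is unreachable, and pay the cloud’s latency and cost only when the extra capability is worth it.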

For its part, Qualcomm is already using this approach, creating a unified AI stack that can be deployed across both small devices and the cloud to help scale AI to the maximum level. And while Qualcomm is showing leadership in this area, I expect to see and hear more from companies across the AI stack about how more compute and processing will be performed at the edge, maximizing the value of AI while better managing the cost and resources AI needs to fully scale.

The future of hybrid AI—and everything else

Qualcomm’s statement that the future of AI is hybrid is true. At the same time, the future of AI is also still unknown. Generative AI is growing faster than any technology we’ve ever seen. It’s learning and changing and igniting new ideas daily. Today, hybrid AI is what we as humans see as the solution for advancing AI at scale. We would be short-sighted to think this is the only way.

As noted in the paper, we are just at the beginning of seeing what new use cases will emerge for generative AI. As generative AI becomes more democratized, we will likely see an increased focus on processing at the edge, where users are. After all, most people don’t own huge cloud infrastructure for processing data. They need generative AI to work quickly and easily where they are. And for the most part, more specialized generative AI apps will be able to work that way because they will require much less data to learn and generate. The marketplace seems to agree: studies project that the edge AI hardware market will expand from more than 900 million in 2021 to more than 2 billion in 2026.

The bigger picture here is something I’ve been writing about for quite a while now: as technology becomes more complex and even overwhelming to grasp, we are seeing more and more “hybrids” pop up. That doesn’t just mean two technologies, edge and cloud, working together. It means two or more companies, OpenAI and Microsoft, for instance, coming together to share their strengths and create something even stronger. Google pairing Brain and DeepMind is another example. In a world where technology is moving so incredibly fast, we are past the era when any single company can do it all. Sure, companies can buy other companies to help them do it all, and we are seeing that too. But the age of the single-engine technology company is over. From here on in, we’re all going hybrid.
