Could new regulations kill the cloud’s AI buzz?

The boom in artificial intelligence (AI) helped propel Google Cloud to its first-ever operating profit and has been cited by Microsoft and others as an expected boon for cloud growth overall in the coming years. But with governments across the globe stepping up their scrutiny of the technology, it’s worth asking whether new rules and regulations could burst the expected AI bubble. The short answer? Not likely. In fact, some analysts suggested new requirements could actually spur demand for new cloud products for AI workloads.

According to Gartner data, global public cloud end-user spending is set to increase from an estimated $597.3 billion in 2023 to $1.02 trillion by 2026. That figure includes application services, business process services, management and security services, and infrastructure as a service. While AI is a factor in the expected growth across those categories, Gartner VP of Cloud Services and Technologies Sid Nag told Silverlinings the forecast doesn’t include potential uplift from generative AI services like ChatGPT.

“The amount of additional growth that could potentially come out of generative AI is obviously gravy on top of the $1 trillion,” he said. But, he noted, Gartner hasn’t yet quantified how much gravy there will be, nor what percentage of it might be impacted by hurdles related to regulation, licensing, sovereignty, privacy and other issues that spring up.

Many-tentacled beast

At this point, it certainly seems that at least something will spring up. In the U.S., the Federal Trade Commission, Department of Justice, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission and both chambers of Congress have wary eyes trained on AI. Across the pond, the European Union has proposed a regulatory framework for AI that could soon take effect, and the U.K. is also working to flesh out its own position on AI regulation. And last month, China proposed measures to govern generative AI systems operating there.

But there are different kinds of limitations – or guardrails, shall we say – that may be applied to AI.

The word regulations is commonly used as an imprecise catchall term, and for that reason it’s one New Street Research’s Blair Levin prefers not to use. Levin explained there are “regulations” and then there are Regulations. The difference: Regulations are actual regulatory requirements passed by government entities that directly affect AI, while “regulations” are rules set by different parts of the value chain or legal system that may impact the way AI evolves. The latter might include rules around privacy, fraud, licensing or criminal liability, or guidelines set by ratings agencies.

He added that you also have to factor in the influence specific purchasing entities wield. Governments are likely to be huge buyers of AI services and will probably demand specific features that could guide the way AI develops. The same could be said of demands from healthcare or educational entities, he said.

Basically, the idea of AI regulation (for lack of a better word) is a many-tentacled beast.

What happens next?

Across the board, Nag, Levin and analysts from IDC and S&P Global Market Intelligence told Silverlinings that efforts to rein in AI – regardless of the source – are unlikely to stifle investment in the technology. In fact, Nag said the varying needs of different countries and verticals could drive development of new cloud products for AI workloads.

S&P’s Eric Hanselman put it this way: “While regulations could impact how the use of AI is implemented, it will be difficult to blunt either investment or the growth in the amount of cloud resources that are consumed in pursuit of it.”

“Investment in AI may be enhanced by regulatory efforts, as there is support not only for model development but also for evaluation and compliance tools and assessments of AI applications,” he continued. “Certifying the provenance of training data to address intellectual property and data sovereignty concerns will be an opportunity for data providers, cloud operators and enterprises.”

Ritu Jyoti, Group VP of IDC’s Worldwide Artificial Intelligence and Automation Research Practice, added that while they may come with a cost burden for compliance, “regulations have the potential to enhance wider trust and confidence in AI systems and propel further AI spend.”

IDC Research Director Lara Gerden concluded: “What I would worry about more would be the equivalent of cybersecurity breaches. To solve for those, we need regulation and open, transparent collaboration across industry, academia and government.”