AI takes center stage at Google Cloud Next

GOOGLE CLOUD NEXT, SAN FRANCISCO – If you’ve never been to Google Cloud Next before, you could have been forgiven for wondering whether you were in the wrong venue as you sat in the Moscone Center Tuesday morning. That’s because while there were token mentions of updates to Google’s cloud infrastructure, a partnership with NVIDIA and the launch of a new cross-cloud product, the vast majority of the hour-and-a-half-long keynote was focused squarely on artificial intelligence (AI).

Alphabet and Google CEO Sundar Pichai touted the rise of AI as “one of the most profound shifts we will see in our lifetimes…as a company we’ve been preparing for this moment for some time.”

To be fair, Google’s focus on AI was somewhat expected given that it pulled a similar move at its I/O conference earlier this year. And it’s far from the only one chasing the AI dream. For good reason, too. ABI Research predicted generative AI will add more than $450 billion in value to the enterprise market by 2030.


But AI has to be trained and run on something. And that’s where the cloud part comes in.

As Mark Lohmeyer, Google Cloud’s VP and GM of compute and machine learning (ML) infrastructure, put it during a prebriefing call with press, “These workloads really require a purpose-built hardware, as well as an integrated and optimized software stack, working in conjunction with that hardware that can support sort of the entirely new level of computational demands that we're seeing."

Lohmeyer continued, "This is really, I would say, a once-in-a-generation inflection point in computing.”

Infrastructure updates

During the keynote, Google Cloud CEO Thomas Kurian talked up five key infrastructure announcements. Among these were a new Cross Cloud Network platform and updates to Google Distributed Cloud. Kurian didn’t delve into these onstage, but Lohmeyer provided more color on the call. The idea behind the former, he said, is to provide secure, high-performance network connectivity between public and private cloud assets.

“It's really focused on helping to ease the operational requirements in running in these multi-cloud environments and enable them to focus on running their business and supporting their applications as opposed to just operating the network,” he explained. “It can provide us with the underlying network substrate that would support the ability to move any workload between those environments.”

Meanwhile, updates to its Distributed Cloud product will bring its Vertex AI integrations to the edge, Lohmeyer said. That means customers will be able to run AI and data workloads anywhere – including on premises in their own data centers “while still bringing all the benefits of the cloud to bear in those more distributed environments.”

More Google Cloud to love

But we mentioned five infrastructure announcements, not two. The other three are:

  • The launch of the new Cloud TPU v5e tensor processing unit in preview, which Google claimed offers 2x better training performance per dollar and 2.5x better inference performance per dollar compared to its predecessor.
  • The debut of GKE Enterprise, currently in preview, which supports multi-cluster horizontal scaling to deliver productivity gains and slash deployment times.
  • The forthcoming general availability of its A3 virtual machines, which are powered by NVIDIA’s H100 GPUs and will arrive next month to help organizations improve their AI training performance. Lohmeyer explained the A3 supercomputers are “purpose built” to train, tune and serve “incredibly demanding and scalable generative AI workloads.”

Silverlinings is on site all week, and we’ve got plenty more coverage to come. That includes more detail on the AI front and how companies are putting the technology to work. Stay tuned!

