While Nvidia dominates the market for AI model training and deployment with over 90% market share, Google has been designing and deploying its own AI chips, called Tensor Processing Units, or TPUs, since 2016.
On Tuesday, Google said that it had built a system of more than 4,000 TPUs, joined with custom components, designed to run and train AI models.
The system has been running since 2020 and was used to train Google's PaLM model, which competes with OpenAI's GPT model, over 50 days.
Google's TPU-based supercomputer, called TPU v4, is "1.2x–1.7x faster and uses 1.3x–1.9x less power than the Nvidia A100," the Google researchers wrote.
Google also said that Midjourney, an AI image generator, was trained on its TPU chips.