Search results for: "Artificialanalysis.ai"


Liang told Business Insider he expects 90% of AI computing workloads will be in inference in the not-too-distant future. That's why several startups are charging aggressively into the inference market, emphasizing where they might outperform the goliath in the space. Speed is an important factor when multiple AI models talk to each other, and waiting for an answer can dampen the magic of generative AI. The number of tokens per second that can be consumed (when a prompt goes in) and generated (when a response comes out) is a common metric for AI computing speed.

Cerebras's AI chip is roughly the size of a dinner plate.
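As a rough illustration of the tokens-per-second metric described above, here is a minimal Python sketch (not from the article; the function name and all figures are illustrative assumptions, not benchmarks) that computes throughput separately for consuming a prompt and generating a response:

def tokens_per_second(num_tokens: int, elapsed_seconds: float) -> float:
    # Throughput for one phase of an inference request.
    return num_tokens / elapsed_seconds

# Hypothetical single request; these numbers are made up for illustration.
prompt_tokens = 1_024      # tokens consumed when the prompt goes in
output_tokens = 256        # tokens generated in the response
prefill_seconds = 0.12     # time spent ingesting the prompt
decode_seconds = 1.60      # time spent streaming the response

print(f"prompt throughput:   {tokens_per_second(prompt_tokens, prefill_seconds):.0f} tokens/s")
print(f"response throughput: {tokens_per_second(output_tokens, decode_seconds):.0f} tokens/s")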