
Search results for: "Nvidia H100"

10 mentions found
The AI cloud-computing organization, called Voltage Park, has 24,000 Nvidia H100 chips, Voltage Park CEO Eric Park told Reuters in an interview. The operation plans to offer long- and short-term, low-cost AI computing to help alleviate the shortage of AI chips. Voltage Park plans to set up clusters of the Nvidia AI chips in Texas, Virginia and Washington. After the release of OpenAI's ChatGPT last year, demand for Nvidia's advanced AI silicon soared as businesses scrambled for chips to power their AI ambitions. Intel, Advanced Micro Devices (AMD.O) and a host of startups sell competing AI chips that are also in heavy demand.
The U.S. Department of Commerce announced Tuesday that it plans to prevent the sale of more advanced artificial intelligence chips to China in the coming weeks. Those earlier restrictions banned the sale of the Nvidia H100, which is the processor of choice for AI firms in the U.S. such as OpenAI. The new rules will ban those chips as well, senior administration officials said in a briefing with reporters. Other rules will likely hamper the sale and export to China of semiconductor manufacturing equipment from companies such as Applied Materials, Lam Research and KLA. Companies that want to export AI chips to China or other embargoed regions will have to notify the U.S. government.
Tesla needs as many Nvidia chips as it can get, based on CEO Elon Musk's recent comments on the company's quarterly earnings call. Tesla isn't the first to work on custom AI chips focused specifically on the company's bread and butter. And that's coming from the head of one of the most cutting-edge AI companies on the planet. Nvidia chips can be utilized by clients for many applications, making it easier for them to accelerate growth. The fresh Nvidia-Tesla conversation adds another layer to Oracle's report Monday, when viewed through the lens of Nvidia's AI leadership.
Collectively, these six long-term holdings are expected to boost profits by 87% in the second half, up from 27% in the first half, according to estimates from FactSet. Nvidia 1st half earnings growth: 103%; 2nd half expected earnings growth: 371%. Investor excitement in Nvidia, already high, was lifted even further by the company's recently reported fiscal second quarter, which blew expectations out of the park. Amazon 1st half earnings growth: 266%; 2nd half expected earnings growth: 297%. The company's second quarter North America segment posted an operating margin of 3.9%, impressing investors like us with a 620-basis-point improvement over the last five quarters. That's why earnings growth for the second half of 2023 looks lighter than the recently reported fiscal fourth quarter of 2023. Apple 1st half earnings growth: 2%; 2nd half expected earnings growth: 11%. The iPhone maker had a "muted start" to the second half as consumers wait for the September release of the iPhone 15, said analysts at Loop Capital.
"During the quarter, major cloud service providers announced massive NVIDIA H100 AI infrastructures. Leading enterprise IT system and software providers announced partnerships to bring NVIDIA AI to every industry." Data-center revenue in the quarter — comprising 76% of total revenue — reached a new record, more than doubling from a year ago and on a sequential basis. However, revenue was up sequentially on improved demand for Nvidia's enterprise workstations and the ramp of Nvidia RTX products. Nvidia said it bought back $3.28 billion worth of stock during its second quarter.
Chips as 'true differentiation': In the long run, Dekate said, Amazon's custom silicon could give it an edge in generative AI. Microsoft has yet to announce the Athena AI chip it's been working on, reportedly in partnership with AMD. "So you train the machine learning models and then you run inference against those trained models," Wood said. Amazon's custom chips, from left to right, Inferentia, Trainium and Graviton are shown at Amazon's Seattle headquarters on July 13, 2023. An Amazon employee works on custom AI chips, in a jacket branded with AWS' chip Inferentia, at the AWS chip lab in Austin, Texas, on July 25, 2023.
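Wood's point about training versus inference is the core workflow these chips target. A minimal sketch (not from the article, and far simpler than real GPU workloads): "train" a one-parameter linear model by gradient descent, then run inference against the trained model on new inputs.

```python
# Illustrative only: the train-then-infer split Wood describes,
# using a toy one-weight model y = w * x fit by gradient descent.

def train(data, lr=0.01, steps=1000):
    """Training: fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        # Mean gradient of (w*x - y)^2 with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w  # the "trained model" is just this weight

def infer(w, x):
    """Inference: apply the trained model to a new input."""
    return w * x

model = train([(1, 2), (2, 4), (3, 6)])  # data follows y = 2x
print(round(infer(model, 10), 2))        # → 20.0
```

Training is the expensive, compute-hungry phase (many passes over data), while inference is a cheap forward application of the learned parameters, which is why AWS splits the work across separate Trainium and Inferentia chips.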
Aug 3 (Reuters) - Specialized cloud provider CoreWeave has raised $2.3 billion in a debt facility led by Magnetar Capital and Blackstone (BX.N) and collateralized by Nvidia chips, with the funds to be used to expand to meet rising AI workloads, the company said on Thursday. Other lenders in the facility include Coatue and DigitalBridge (DBRG.N), as well as BlackRock, PIMCO, and Carlyle (CG.O). Nvidia-backed CoreWeave has seen a boost from the generative AI boom thanks to its purpose-built cloud infrastructure at scale. It has partnerships with AI startups and cloud providers, which it also competes with, to build clusters to power AI workloads. CoreWeave also raised $421 million in equity this year led by Magnetar Capital at a valuation of over $2 billion.
Demand for AI chips in data centers spurred Nvidia to guide for $11 billion in sales during the current quarter, blowing away analyst estimates of $7.15 billion. "The flashpoint was generative AI," Huang said in an interview with CNBC. Nvidia currently dominates the market for AI GPUs. "The data center of the past, which was largely CPUs for file retrieval, is going to be, in the future, generative data," Huang said. That's one reason why Nvidia's data center business grew 14% during the first calendar quarter versus flat growth for AMD's data center unit and a decline of 39% in Intel's AI and data center business unit.
Google has designed its own custom chip called the Tensor Processing Unit, or TPU. The Google TPU is now in its fourth generation. Google said its supercomputers make it easy to reconfigure connections between chips on the fly, helping avoid problems and tweak for performance gains. "Circuit switching makes it easy to route around failed components," Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote in a blog post about the system. Google said that startup Midjourney used the system to train its model, which generates fresh images after being fed a few words of text. Google said it did not compare its fourth-generation chip to Nvidia's current flagship H100 because the H100 came to market after Google's chip and is made with newer technology.