Search results for: "Ion Stoica"


4 mentions found


It reignites a debate about the feasibility of developing increasingly advanced models and about AI scaling laws — the empirical rules describing how models improve as compute, data, and parameters grow. It remains to be seen how smart an AI model can get when that much capital is thrown at it. There may also be strategies to make AI models smarter by improving the inference stage rather than training alone. The model OpenAI released in September — called OpenAI o1 — focused more on inference-time improvements. Still, it's clear that, like Altman, much of the industry remains firm in its conviction that scaling laws are the main driver of AI performance.
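For context, the scaling laws at issue are usually written as an empirical power law in model size and data. One common form — the Chinchilla-style fit from the research literature, not a formula given in this article — is:

```latex
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

where L is the model's loss, N the parameter count, D the number of training tokens, and E, A, B, α, β are constants fit to experiments. Scaling up N and D predictably drives the loss toward the floor E — the question the article raises is how far that predictability holds.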
May 17 (Reuters) - The swift growth of artificial intelligence technology could put the future of humanity at risk, according to most Americans surveyed in a Reuters/Ipsos poll published on Wednesday. More than two-thirds of Americans are concerned about the negative effects of AI and 61% believe it could threaten civilization. ChatGPT has kicked off an AI arms race, with tech heavyweights like Microsoft (MSFT.O) and Google (GOOGL.O) vying to outdo each other's AI accomplishments. The Reuters/Ipsos poll found that the number of Americans who foresee adverse outcomes from AI is triple the number of those who don't. Those who voted for Donald Trump in 2020 expressed higher levels of concern; 70% of Trump voters compared to 60% of Joe Biden voters agreed that AI could threaten humankind.
Beneath the buzz, the next-generation developer framework Ray was key in the viral model's training. "ChatGPT combined a lot of the previous work on large language models with reinforcement learning as well." Before deploying Ray, OpenAI used a hodgepodge of custom tools built on top of the "neural programmer-interpreter" model. All these tools, Ray and JAX included, are in service to a new generation of combustion engines for the internet: large language models. Multiple companies, both startups and giants — including Meta, Hugging Face, OpenAI, and Google — are building their own large language models.
The most important groundwork for building company culture was a strong founding team, Ghodsi says. Ghodsi arrived at UC Berkeley in 2009 for a year-long program researching machine learning and data processing. Working at European universities, Ghodsi says, he was often shut down when proposing out-of-the-box research ideas, but "UC Berkeley was different." Ghodsi went on to cofound Databricks out of a UC Berkeley research lab in 2013. Databricks' founding team was extremely innovative, Ghodsi says, with backgrounds in research and in creating the open-source data project Spark.