#openai
https://youtube.com/watch?v=YHbNwygLlzs
Former #OpenAI #researcher and #whistleblower found #dead at age 26
A 26-year-old former OpenAI researcher, Suchir Balaji, was found dead in his San Francisco apartment in recent weeks, CNBC has confirmed.
Balaji left OpenAI earlier this year and raised concerns publicly that the company had allegedly violated U.S. copyright law while developing its popular ChatGPT chatbot. "The manner of death has been determined to be suicide," David Serrano Sewell, executive director of San Francisco's Office of the Chief Medical Examiner, told CNBC in an email on Friday. He said Balaji's next of kin have been notified.
The San Francisco Police Department said in an email that on the afternoon of Nov. 26, officers were called to an apartment on Buchanan Street to conduct a "wellbeing check." They found a deceased adult male, and discovered "no evidence of foul play" in their initial investigation, the department said.
News of Balaji's death was first reported by the San Jose Mercury News. A family member contacted by the paper requested privacy. In October, The New York Times published a story about Balaji's concerns.
"If you believe what I believe, you have to just leave the company," Balaji told the paper. He reportedly believed that ChatGPT and other chatbots like it would destroy the commercial viability of people and organizations who created the digital data and content now widely used to train #AI #systems. A spokesperson for OpenAI confirmed Balaji's death.
"We are devastated to learn of this incredibly sad #news today and our hearts go out to Suchir's loved ones during this difficult time," the spokesperson said in an email. OpenAI is currently involved in legal disputes with a number of publishers, authors and artists over alleged use of copyrighted material for AI training data. A lawsuit filed by news outlets last December seeks to hold OpenAI and principal backer Microsoft accountable for billions of dollars in damages.
"We actually don't need to train on their data," OpenAI CEO Sam Altman said at an event organized by Bloomberg in Davos earlier this year. "I think this is something that people don't understand. Any one particular training source, it doesn't move the needle for us that much."
Today, we’re adding #ChatGPT Pro, a $200 monthly plan... 🤑
Source: https://openai.com/index/introducing-chatgpt-pro/
So AI is only for rich people who can afford it. The digital divide continues to widen.
#openai #ai #technology #business #economy #future #knowledge #finance #politics #news #information #money
"Copying is not theft," said #OpenAI 😐
Do we really want to criminalize a large part of the population while tech companies do the same? In the end, is the only difference that #PirateBay was not a multimillion-dollar corporation?
#sony #court #piracy #internet #economy #news #usa #copy #Problem #society #crime
DeepSeek, the Chinese large language model company, claims to have built a model that performs similarly to OpenAI's o1-preview on a number of benchmarks.
It makes you wonder how the Chinese figured out, ahead of all OpenAI's US competitors, how OpenAI's "o1" model is built. Do the Chinese have spies inside OpenAI? OpenAI, despite its name, has revealed little about how "o1" is built.
Impressive results of DeepSeek-R1-Lite-Preview across benchmarks!
You press something wrong and suddenly it's all gone 😱
Source: https://www.wired.com/story/new-york-times-openai-erased-potential-lawsuit-evidence/
#ai #technology #news #openai #Copyright #nyt #court #law #fail #problem
OpenAI o1 isn't as good as an experienced professional programmer, but... "the set of tasks that o1 can do is impressive, and it's becoming more and more difficult to find easily demonstrated examples of things it can't do."
"There's a ton of things it can't do. But a lot of them are so complicated they don't really fit in a video."
"There are a small number of specific kinds of entry level developer jobs it could actually do as well, or maybe even better, than new hires."
Carl of "Internet of Bugs" recounts how he spent the last 3 weeks experimenting with the o1 model to try to find its shortcomings. /
"I've been saying for months now that AI couldn't do the work of a programmer, and that's been true, and to a large extent it still is. But in one common case, that's less true than it used to be, if it's still true at all."
"I've worked with a bunch of new hires that were fresh out with CS degrees from major colleges. Generally these new hires come out of school unfamiliar with the specific frameworks used on active projects. They have to be closely supervised for a while before they can work on their own. They have to be given self-contained pieces of code so they don't screw up something else and create regressions. A lot of them have never actually built anything that wasn't in response to a homework assignment.
"This o1 thing is more productive than most, if not all, of those fresh CS graduates I've worked with.
"Now, after a few months, the new grads get the hang of things, and from then on, for the most part, they become productive enough that I'd rather have them on a project than o1."
"When I have a choice, I never hire anyone who only has an academic and theoretical understanding of programming and has never actually built anything that faces a customer, even if they only built it for themselves. But in the tech industry, many companies specifically create entry-level positions for new grads."
"In my opinion, those positions where people can get hired with no practical experience, those positions were stupid to have before and they're completely irrelevant now. But as long as those kinds of positions still exist, and now that o1 exists, I can no longer honestly say that there aren't any jobs that an AI could do better than a human, at least as far as programming goes."
"o1 Still has a lot of limitations."
Some of the limitations he cited were writing tests and writing a SQL RDBMS in Zig.
ChatGPT-O1 Changes Programming as a Profession. I really hated saying that. - Internet of Bugs
#solidstatelife #ai #genai #llms #codingai #openai #technologicalunemployment
ChatGPT topped 3 billion user visits in September 2024.
- google.com - 82.0B
- youtube.com - 28.0B
- facebook.com - 12.3B
- instagram.com - 5.7B
- whatsapp.com - 4.5B
- x.com - 4.3B
- wikipedia.org - 3.8B
- yahoo.com - 3.4B
- reddit.com - 3.4B
- yahoo.co.jp - 3.2B
- chatgpt.com - 3.1B
- yandex.ru - 2.7B
- amazon.com - 2.6B
- baidu.com - 2.4B
- tiktok.com - 2.1B
None of the other language models (Gemini, Claude, Meta, X.AI, Perplexity, etc.) register on this global ranking -- ChatGPT crushes them all. Which is interesting to me, as I use 8 LLMs most of the time (and will probably try more soon) and ChatGPT doesn't consistently stand out as better than the others. But ChatGPT seems to have leapt far ahead of the others in terms of brand recognition with users.
"The New York Times on Thursday published a look at the 'fraying' relationship between OpenAI and its investor, partner, and, increasingly, rival, Microsoft."
"Most fascinating perhaps is a reported clause in OpenAI's contract with Microsoft that cuts off Microsoft's access to OpenAI's tech if the latter develops so-called artificial general intelligence (AGI), meaning an AI system capable of rivaling human thinking."
The surprising way OpenAI could get out of its pact with Microsoft
Here’s the deal: AI giants get to grab all your data unless you say they can’t. Fancy that? No, neither do I.
The Guardian
Data is vital to AI systems, so firms want the right to take it and ministers may let them. We must wake up to the danger.
The OpenAI logo on a laptop and ChatGPT on a smartphone. Photograph: Jakub Porzycki/NurPhoto/REX/Shutterstock.
Imagine someone drives up to a pub in a top-of-the-range sports car – a £1.5m Koenigsegg Regera, to pick one at random – parks up and saunters out of the vehicle. They come into the pub you’re drinking in and begin walking around its patrons, slipping their hand into your pocket in full view, smiling at you as they take out your wallet and empty it of its cash and cards.
The not-so-subtle pickpocket stops if you shout and ask what the hell they’re doing. “Sorry for the inconvenience,” the pickpocket says. “It’s an opt-out regime, mate.”
Sounds absurd. Yet it seems to be the approach the government is pursuing in order to placate AI companies. A consultation is soon to open, the Financial Times reports, that will allow AI companies to scrape content from individuals and organisations unless they explicitly opt out of their data being used. (...)
Tags: #ai #artificial_intelligence #ChatGPT #Claude #Gemini #Grok #OpenAI #data #opt-out #meta #facebook #instagram #google #alphabet #copyright #big_tech #lobby