#nytimes

psychmesu@diaspora.glasswings.com

https://newsie.social/@ZhiZhu/112026279928099587 ZhiZhu@newsie.social - @protecttruth

The #NYTimes once published an article saying that #Hitler wasn’t really that bad. He was just using #antisemitism as a way to attract followers & keep them excited about his #political campaign.

The NYTimes more recently published an article saying that #Trump isn’t really that bad. He is just using threats of #violence & #authoritarianism as a way to attract followers & keep them excited about his political campaign.

#Politics #Journalism #Media #Press #News

birne@diaspora.psyco.fr

How the Media Got the Hospital Explosion Wrong

The Palestinian health authorities claimed that Israel was responsible for the death of some 500 civilians. Because the details were extremely murky, it was impossible to tell who had caused the explosion or how many people had died. And yet some of the most reputable names in news media sent push alerts that broadcast Hamas’s claims far and wide.

These push alerts would have led reasonable readers to conclude that Hamas’s statements must basically be true. They talked about “Israeli” air strikes and uncritically reported that many hundreds had died.

News of the supposed Israeli strike quickly had huge real-world consequences. The king of Jordan canceled a planned meeting with President Joe Biden. Mass protests broke out in cities across the Middle East, some culminating in attacks on foreign embassies. In Berlin, two unknown assailants threw Molotov cocktails at a synagogue.

A live video transmission from Al Jazeera appeared to show that a projectile rose from inside Gaza before changing course and exploding in the vicinity of the hospital; the Israel Defense Forces have claimed that this was one of several rockets fired from Palestinian territory. Subsequent analysis by the Associated Press has substantially corroborated this. In addition, pictures of the site taken by Reuters showed a small crater that, according to independent analysts using open-source intelligence, is inconsistent with the effect of munitions typically used by Israel. It came to look doubtful that the missile had directly hit the hospital; as a BBC team investigating the blast reported, “Images of the ground after the blast do not show significant damage to surrounding hospital buildings.”

The cause of the tragedy, it appears, is the opposite of what news outlets around the world first reported.

Such a glaring example of major outlets messing up on a very consequential event helps explain why trust in traditional news media has been falling fast. As recently as 2003, eight out of 10 British respondents said that they “trust BBC journalists to tell the truth.” By 2020, the share of respondents who said that they trust the BBC had fallen to fewer than one in two. Americans have been mistrustful of media for longer, but here, too, the share of respondents who say that they trust mass media to report “the news fully, accurately, and fairly” has fallen to a near-record low.

#Atlantic #BBC #NYTimes #media #journalism

hernanlg@diasp.org

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
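
To make that one-sentence definition concrete, here is a minimal illustrative sketch (in Python with NumPy; the XOR task and network sizes are illustrative choices, not anything from the article): a tiny network that learns a simple skill purely by analyzing four data points, trained with backpropagation, the method Dr. Hinton helped popularize.

```python
# A toy "neural network": a mathematical system that learns by analyzing data.
# Illustrative only -- a tiny 2-4-1 network trained on XOR, not Hinton's work.
import numpy as np

rng = np.random.default_rng(0)

# Four training examples: inputs and their XOR targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for one hidden layer of 4 units.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: turn data into predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): nudge every weight to reduce the
    # prediction error -- this is the "learning by analyzing data" step.
    grad_p = p - y                      # cross-entropy gradient at the output
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_p
    b2 -= 0.5 * grad_p.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(p, 2))  # predictions should approach [0, 1, 1, 0]
```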

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but also to run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race among Google, Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

#ChatGPT #NYTimes #Opinion #AI #danger #society #FakeNews #News #Google

faab64@diasp.org

"In tomorrow’s world, we should not worry if some roads to peace go through Beijing, New Delhi or Brasília. So long as all roads to war do not go through Washington." Trita Parsi in brilliant #nytimes op-ed.

There was a time when all roads to peace went through Washington. From the 1978 Camp David Accords between Israel and Egypt brokered by President Jimmy Carter to the 1993 Oslo Accords signed on the White House lawn to Senator George Mitchell’s Good Friday Agreement that ended the fighting in Northern Ireland in 1998, America was the indispensable nation for peacemaking. To Paul Nitze, a longtime diplomat and Washington insider, “making evident its qualifications as an honest broker” was central to America’s influence after the end of the Cold War.

But over the years, as America’s foreign policy became more militarized and as sustaining the so-called rules-based order increasingly meant that the United States put itself above all rules, America appears to have given up on the virtues of honest peacemaking.

We deliberately chose a different path. America increasingly prides itself on not being an impartial mediator. We abhor neutrality. We strive to take sides in order to be “on the right side of history” since we view statecraft as a cosmic battle between good and evil rather than the pragmatic management of conflict where peace inevitably comes at the expense of some justice.

This has perhaps been most evident in the Israeli-Palestinian conflict but is now increasingly defining America’s general posture. In 2000, when Madeleine Albright defended the Clinton administration’s refusal to veto a U.N. Security Council Resolution condemning the excessive use of force against Palestinians, she cited the need for the United States to be seen as an “honest broker.” But since then, the United States has vetoed 12 Security Council resolutions expressing criticisms of Israel — so much for neutrality.

We started to follow a different playbook. Today, our leaders mediate to help “our” side in a conflict advance its position rather than to establish a lasting peace. We do it to demonstrate the value of allying with the United States. While this trend is more than two decades long, it has reached full maturity now with great-power competition with China becoming the organizing principle of U.S. foreign policy. This rivalry is, in the words of Colin Kahl, the under secretary of defense for policy, “not a competition of countries. It is a competition of coalitions.” Following Dr. Kahl’s logic, we keep our coalition partners close by offering them — in addition to military might — our services as a “partial broker” to tilt the scales of diplomacy in their favor.

It’s what you do when you see the world through the prism of a Marvel movie: Peace is born not out of compromise but out of total victory.

But just as America has changed, so has the world. Elsewhere, Marvel movie logic is seen for what it is: fairy tales in which the simplicity of good versus evil leaves no space for compromise or coexistence. Few have the luxury of pretending to live in such fantasy worlds.

So while America may have lost interest in peacemaking, the world has not. As the Ukraine crisis has shown, America has been immensely effective in mobilizing the West but hopelessly clueless in inspiring the global south. While the Western nations wanted the United States to rally them to defend Ukraine, the global south was looking for leadership to bring peace to Ukraine — of which the United States has offered little to none.

But America has not only moved beyond peacemaking. It is also increasingly dismissive of other powers’ efforts to mediate. Though the White House officially welcomed the Saudi-Iranian normalization deal, it could not conceal its irritation at China’s newly won role as a broker in the Middle East. And Beijing’s earlier offer to mediate between Ukraine and Russia was quickly dismissed by Washington as a distraction, even though President Volodymyr Zelensky of Ukraine welcomed it on the condition that Russian troops would withdraw from Ukrainian territory. As Mark Hannah of the Eurasia Group Foundation recently pointed out, there is an inherent hypocrisy “in touting Ukraine’s agency when it prosecutes war, but not when it pursues peace.”

Still, Xi Jinping of China seems undeterred. He traveled to Moscow this week and also plans to speak directly to Mr. Zelensky in what appears to be the preparation for an active mediation attempt to bring the war to an end.

Mr. Xi succeeded in bringing Iran and Saudi Arabia together precisely because he was on neither side. With stubborn discipline, Beijing maintained a neutral position on the two countries’ squabbles and didn’t moralize their conflict or bother with whose side history would take. Nor did China bribe Iran and Saudi Arabia with security guarantees, arms deals or military bases, as all too often is our habit.

Whether Mr. Xi’s formula will work to end Russia’s war on Ukraine remains to be seen. But just as a more stable Middle East where the Saudis and Iranians aren’t at each other’s throats benefits the United States, so too will any effort to get Russia and Ukraine to the negotiating table.

In a multipolar world, shared responsibility for security can be a virtue that reduces the burden on Americans without increasing threats to U.S. interests. It is not security that we would give up, but the illusion that we are — and have to be — in control of developments far away. For too long, Americans have been told that if we do not dominate, the world will descend into chaos. In reality, as the Chinese mediation has shown, other powers are likely to step up to shoulder the burden of security and peacemaking.

The greatest threat to our own security and reputation is if we stand in the way of a world where others have a stake in peace, if we become a nation that doesn’t just put diplomacy last but also dismisses those who seek to put diplomacy first.

In tomorrow’s world, we should not worry if some roads to peace go through Beijing, New Delhi or Brasília. So long as all roads to war do not go through Washington.

Trita Parsi is the author of “Losing an Enemy: Obama, Iran and the Triumph of Diplomacy” and the executive vice president of the Quincy Institute.
#Iran #SaudiArabia #China #US #Politics #Peace #War #Russia #Ukraine #Yemen #Syria #Israel
Paywall link:
https://www.nytimes.com/2023/03/22/opinion/international-world/us-china-russia-ukraine.html