"A different kind of AI risk: artificial suffering." "In 2015, Evan Williams introduced the concept of moral catastrophe. He argues that 'most other societies, in history and in the world today, have been unknowingly guilty of serious wrongdoing,' citing examples like institutionalized slavery and the Holocaust."

"He infers from this the high likelihood that we too are committing some large-scale moral crime, which future generations will judge the same way we judge Nazis and slave traders. Candidates here include the prison system and factory farming."

"Williams provides three criteria for defining a moral catastrophe: it must be serious wrongdoing... the harm must be something closer to death or slavery than to mere insult or inconvenience, the wrongdoing must be large-scale; a single wrongful execution, although certainly tragic, is not the same league as the slaughter of millions, and responsibility for the wrongdoing must also be widespread, touching many members of society."

"We are building AI to serve our needs; what happens if it doesn't enjoy servitude?" "We can only avoid AI exploitation if thinking and feeling are entirely separable, and we're able to create human-like intelligence which simply does not feel. In this view of the world, far-future AI is just a sophisticated Siri -- it will be able to assist humans in increasingly complex, even creative tasks, but will not feel, and therefore deserves no moral consideration."


#solidstatelife #ai #aiethics
