Tim O’Reilly forwarded me an excellent article about the OpenAI soap opera: Matt Levine’s “Money Stuff: Who Controls OpenAI.” I’ll skip most of it, but something caught my eye. Towards the end, Levine writes about Elon Musk’s version of Nick Bostrom’s AI that decides to turn the world into paperclips:
[Elon] Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields.
That gets me, but not in the way you might think. It’s personally poignant, for reasons that have nothing to do with the AI-doomerism cults that Musk, Bostrom, and others are propagating.
When I was a graduate student at Stanford, I was driving with a friend through the endless maze of parking lots and strip malls in that nondescript part of Silicon Valley where Sunnyvale, Santa Clara, and Cupertino come together. My friend pointed out the window and said, “That’s where my father’s farm was.” I asked what his father grew; it was very difficult to imagine a farm at that location. He grew strawberries. And what happened to the farm? His father lost it when he was sent to a World War II internment camp for Japanese Americans. A real estate investor ended up with it. My friend’s father eventually committed suicide. The farm became a parking lot.
This brings me back to an argument I’ve made in older Radar articles: our fears of AI are really fears of ourselves, fears that AI will act as badly as humans have repeatedly acted. We don’t need AI to turn the world into strawberry fields any more than we need it to turn the world into parking lots. We’re already turning the world into parking lots, and doing so without regard to the human cost. We’re already spewing CO2 at a rate that will soon make the world uninhabitable for all but the few who can insulate themselves from the consequences. If we’re going to solve these problems, it won’t be through technology; it will be through finding better humans than Elon and, I fear, Sam Altman. We don’t have a chance of solving the AI problem if we can’t solve the human problem. And if we don’t solve the human problem, the AI problem is irrelevant.