From: Past Imperfect (the newest version of Babylon Dreams) p. 265.
Does it bother you, Miranda, not being real?
In this case “real” means to be born and then to make your way, to fight in order to continue existing and to procreate.
No, Gunter, it does not bother me because I see no advantage in being “real.”
Selected quotes from:
John Havens, “You Should Be Afraid of Artificial Intelligence,” Mashable op-ed, Aug. 3, 2013
Link to article: http://mashable.com/2013/08/03/artificial-intelligence-fear/
“When is the best time to discuss the ethical uses of these technologies? NOW.”
“How can we ensure that humans will be able to control AI once it achieves human-level intelligence?”
“Not to shock you with my mad math skills, but 2023 is 10 years away. Forget that robots are stealing our jobs, will be taking care of us when we’re older, and will be asking us to turn and cough in the medical arena.
It’s not that robots are evil, per se. (Although Ken Jennings, the Jeopardy champion who lost to IBM’s Watson, might feel differently.) It’s more that machines and robots are currently, and for the moment predominantly, programmed by humans, who always bring their biases.
Who is deciding when a target should be engaged? Come to think of it, who’s deciding who is a target? Do we really want to surrender control of weaponized AI to machines, in the wake of situations like the cultural morass of the Trayvon Martin shooting? How would Florida’s Stand Your Ground law operate if controlled by weaponized AI police enforcement hooked into a city’s smart grid?
Short answer: choose Disneyland.”
“Nuclear fission was announced to the world at Hiroshima.” James Barrat is the author of Our Final Invention: Artificial Intelligence and the End of the Human Era, which gives a thorough account of the chief players in the larger AI space, along with an arresting sense of where machine learning is headed: toward a world we can’t yet define.
For our interview, he cited the Manhattan Project and the development of nuclear fission as a precedent for how we should consider the present state of AI research:
We need to develop a science for understanding advanced Artificial Intelligence before we develop it further. It’s just common sense. Nuclear fission is now used as a reliable energy source. In the 1930s the focus of that research was initially on energy production, but its outcomes led directly to Hiroshima. We’re at a similar turning point in history, especially regarding weaponized machine learning. But with AI we can’t survive a fully realized human-level intelligence that arrives as abruptly as Hiroshima.
Barrat also pointed out the difficulty that anthropomorphism poses for AI. It’s easy to imbue machines with human values, but by definition they’re silicon rather than carbon.”
“Intelligent machines won’t love you any more than your toaster does,” he says. “As for enhancing human intelligence, a percentage of our population is also psychopathic. Giving people a device that enhances intelligence may not be a terrific idea.”
It loves me; it loves me not.
“Options for AI”
The nature of FAB, as I’m proposing it, is to move beyond the dichotomy of thinking about AI in only two ways and to elevate the work of unique thinkers in the space. Use our Fears about potential scenarios to help create Awareness of positive possibilities that will Bias us to action regarding AI, versus succumbing to complacency or tacit acceptance of inevitable overlord rule.
In that regard, I appreciated when James Barrat told me about the work of Steve Omohundro, who holds degrees in physics and mathematics from Stanford and a Ph.D. in physics from U.C. Berkeley, and is president of Self-Aware Systems, a think tank he created to “bring positive human values to new intelligent technologies.”
Steve Mann, pioneer in the field of wearable computing, has a theory of Humanistic Intelligence (HI) that also adds a unique layer to the discussion surrounding Artificial Intelligence. The theory came from his Ph.D. work at MIT, where Marvin Minsky (whom many call the father of AI) was on his thesis committee.
Mann explains in the opening of his thesis, “Rather than trying to emulate human intelligence, HI recognizes that the human brain is perhaps the best neural network of its kind, and that there are many new signal processing applications, within the domain of personal technologies, that can make use of this excellent but often overlooked processor.” By leveraging tools like Google’s Glass or other intelligent wearable camera systems, we can enhance our lives as aided by technology, versus having our consciousness supplanted by it. He described his theory for our interview:
HI is intelligence that arises by having the human being in the feedback loop of the computational process. AI is not immediately a reality, whereas HI is here and now and viable. HI is a revolution in communications, not mere computation. It’s really a matter of people caring about people, not machines caring about people.
Where Ray Kurzweil describes the Singularity as the moment in time when machines gain true sentience, Mann calls the full fruition of Humanistic Intelligence the Sensularity. It’s an appealing concept: technology that assists humanity toward greater innovation can feature compassion, rather than computation, as its primary goal.
I love technology, but I want to be part of the revolution that dares to stand up and say, “I like being human! I want humans to retain autonomy over machines!”
My hope is that, following the example of Genesis Angels, the $100 million fund created to spur acceleration in AI and robotics startups, someone will step up and weigh the ramifications of AI before unleashing it full-blown onto humanity.