Words of wonder noida · 6/1/2023

Chatbots like ChatGPT raise important new questions about how artificial intelligence will shape our lives, and about how our psychological vulnerabilities shape our interactions with emerging technologies.

It's easy to understand where fears about machine sentience come from. Popular culture has primed people to think about dystopias in which artificial intelligence discards the shackles of human control and takes on a life of its own, as the cyborgs powered by artificial intelligence did in "Terminator 2." Entrepreneur Elon Musk and physicist Stephen Hawking, who died in 2018, have further stoked these anxieties by describing the rise of artificial general intelligence as one of the greatest threats to the future of humanity.

But these worries are - at least as far as large language models are concerned - groundless. ChatGPT and similar technologies are sophisticated sentence completion applications - nothing more, nothing less. Their uncanny responses are a function of how predictable humans are if one has enough data about the ways in which we communicate.

Though Roose was shaken by his exchange with Sydney, he knew that the conversation was not the result of an emerging synthetic mind. Sydney's responses reflect the toxicity of its training data - essentially large swaths of the internet - not evidence of the first stirrings, à la Frankenstein, of a digital monster.

The new chatbots may well pass the Turing test, named for the British mathematician Alan Turing, who once suggested that a machine might be said to "think" if a human could not tell its responses from those of another human. But that is not evidence of sentience; it's just evidence that the Turing test isn't as useful as once assumed.

However, I believe that the question of machine sentience is a red herring. Even if chatbots become more than fancy autocomplete machines - and they are far from it - it will take scientists a while to figure out if they have become conscious. For now, philosophers can't even agree about how to explain human consciousness.

To me, the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are. The real issue, in other words, is the ease with which people anthropomorphize, or project human features onto, our technologies - not the machines' actual personhood.

It is easy to imagine other Bing users asking Sydney for guidance on important life decisions and maybe even developing emotional attachments to it. More people could start thinking about bots as friends or even romantic partners, much in the same way Theodore Twombly fell in love with Samantha, the AI virtual assistant in Spike Jonze's film "Her."

People, after all, are predisposed to anthropomorphize, or ascribe human qualities to nonhumans. We name our boats and big storms; some of us talk to our pets, telling ourselves that our emotional lives mimic their own.

In Japan, where robots are regularly used for elder care, seniors become attached to the machines, sometimes viewing them as their own children. And these robots, mind you, are difficult to confuse with humans: they neither look nor talk like people.

Consider how much greater the tendency and temptation to anthropomorphize is going to get with the introduction of systems that do look and sound human. Large language models like ChatGPT are already being used to power humanoid robots, such as the Ameca robots being developed by Engineered Arts in the U.K. That possibility is just around the corner.
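The "sentence completion" point can be made concrete with a deliberately tiny sketch. The toy bigram model below is an illustrative example of my own, not how ChatGPT actually works (modern chatbots use neural networks trained on vastly more data), but it shows the underlying idea: given enough examples of how people write, predicting a plausible next word is just counting. The corpus string is a made-up stand-in for training text.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for every word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def complete(follows, word, length=2):
    """Greedily extend a prompt with the most frequently observed next word."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # this word was never followed by anything; stop completing
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# A toy "training corpus" (hypothetical example text)
corpus = "people say hello and people say goodbye and people say hello again"
model = train_bigrams(corpus)
print(complete(model, "people", 2))  # -> people say hello
```

A model with billions of parameters trained on trillions of words does something far more flexible than this lookup table, but the family resemblance - predict the next token from patterns in past text - is real, and it is all the "uncanny" responses require.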