"Google said it is working on a super-fast "quantum" computer chip as part of a vision to one day have machines think like humans."
So, how is it that humans think?
Wouldn’t Google need to have a really good grasp of human thought in order to mimic it? Not only how humans think, but what human thought is? More than just looking at the processes, it means dealing with the question: “what is it that humans are doing when they are thinking different types of thoughts?”
For example, how is wisdom different from empiricism?
I don't mean, what parts of the brain handle wisdom and which parts handle empiricism.
I mean what is wisdom and what is empiricism.
Does Google have a department of researchers who have delved into the mysteries of “epistemé” or a “Division of Epistemology” perhaps?
*WOWEE!!! It's a human soul... Nope, never mind. It's just a bunch of servers in a datacenter.*
eWeek has a little more detail on this story and cites a blog post from Hartmut Neven, director of engineering for the Google Research group, who writes the following:
“If we want to cure diseases, we need better models of how they develop. If we want to create effective environmental policies, we need better models of what's happening to our climate. And if we want to build a more useful search engine, we need to better understand spoken questions and what's on the Web so you get the best answer."
Hmmm. So we’ve just got to do this or we won’t be able to cure diseases…or find really cute pictures of kittens on the internet.
But really, the eWeek story is a bit less sensational, while the Yahoo! piece reads like a headline written to get clicks. All we can garner from any of this is that Google is building systems with the capacity to churn more data. I can’t see how this is anything new compared to what computers already do. It’s just bigger, faster capacity and more simultaneous operations.
All a computer can do is execute a bunch of commands…essentially a pattern of “if-then” executions. This is not how humans think.
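To make that point concrete, here is a minimal sketch in Python. The rules and inputs are entirely invented for illustration; the point is that what looks like a machine "answering" a question is just conditional branching over data:

```python
def respond(question: str) -> str:
    # The machine isn't "understanding" anything; it matches patterns
    # and executes whichever branch happens to fire first.
    if "cure" in question:
        return "Consulting disease models..."
    elif "climate" in question:
        return "Consulting climate models..."
    elif "kitten" in question:
        return "Fetching cute kitten pictures..."
    else:
        return "No matching rule."

print(respond("Where can I find kitten pictures?"))
```

Adding more data and faster hardware gives you more branches evaluated more quickly, but it is still the same kind of operation.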
As Jaron Lanier writes “You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?”