I read the article below (Subverse) after reading Sridhar Mahadevan's "Deciphering the universe Video Game". I am sure we have all meditated on a thought similar to Sridhar's (at least I have). Loop this thought, tie it into the first article, and what do you have?
You have characters in a game (us) trying to create machines (further games) that will, in turn, behave like us. In parallel, we are also trying to create games that mimic our real lives (see Second Life by Linden Lab).
Where does all this lead to?
A game created within a game created within a game... and this can go on: 'n' levels of recursion and multiple levels of stack.
One last thought to ponder: what happens when a character at the 'nth' level of the game raises a "who am I, what am I doing here?" type of question?
How ironic is that?
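To make the recursion metaphor a bit more concrete, here is a toy Python sketch; the depth limit and the printed question are purely illustrative assumptions of mine, not anything from either article. Each level of the game spawns the next, one stack frame per level, until the character at the deepest level starts asking the question.

def run_game(level, max_depth):
    # Simulate one level of the nested game.
    if level == max_depth:
        # The character at the deepest level raises the question.
        print(f"Level {level}: who am I, what am I doing here?")
        return
    print(f"Level {level}: creating a game one level down...")
    run_game(level + 1, max_depth)  # each game builds the next, one stack frame per level

run_game(level=0, max_depth=3)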
--------------------------------------------------------------
From TimesofIndia.com, Thursday, 2nd April 2009
S U B V E R S E
Computers vs brains by Sandra Aamodt and Sam Wang
Inventor Ray Kurzweil, in his 2005 futurist manifesto, The Singularity Is Near, extrapolates current trends in computer technology to conclude that machines will be able to out-think people within a few decades. However, any comparison with computers misses a messy truth. Because the brain arose through natural selection, it contains layers of systems that arose for one function and then were adopted for another, even though they don’t work perfectly.
An engineer with time to get it right would have started over, but it’s easier for evolution to adapt an old system to a new purpose than to come up with an entirely new structure. As a result, brains differ from computers in many ways, from their highly efficient use of energy to their tremendous adaptability.
In the brain’s wiring, space is at a premium, and is more tightly packed than even the most condensed computer architecture. One cubic centimetre of human brain tissue, which would fill a thimble, contains 50 million neurons; several hundred miles of axons, the wires over which neurons send signals; and close to a trillion synapses, the connections between neurons. The memory capacity in this small volume is potentially immense. Although we’re forced to guess because the neural basis of memory isn’t understood at this level, let’s say that one movable synapse could store one byte (8 bits) of memory. That thimble would then contain 1,000 gigabytes (1 terabyte) of information. A thousand thimblefuls make up a whole brain, giving us a million gigabytes — a petabyte — of information. To put this in perspective, the entire archived contents of the internet fill just 3 petabytes.
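(My aside, not part of the article: the capacity estimate is just multiplication. A quick back-of-envelope sketch in Python, using the authors' own one-byte-per-synapse guess; the guess is theirs, the round numbers are mine.)

synapses_per_cm3 = 1e12      # "close to a trillion synapses" per cubic centimetre
bytes_per_synapse = 1        # the article's guess: one movable synapse stores one byte
thimbles_per_brain = 1000    # "a thousand thimblefuls make up a whole brain"

bytes_per_thimble = synapses_per_cm3 * bytes_per_synapse
bytes_per_brain = bytes_per_thimble * thimbles_per_brain

print(bytes_per_thimble / 1e12)  # ~1 terabyte per cubic centimetre
print(bytes_per_brain / 1e15)    # ~1 petabyte for the whole brain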
To address this challenge, Kurzweil invokes Moore’s law, the principle that for the last four decades, engineers have managed to double the capacity of chips (and hard drives) every year or two. If we imagine that the trend will continue, it’s possible to guess when a single computer the size of a brain could contain a petabyte. That would be about 2025 to 2030.
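(Another aside: the 2025-2030 figure is itself an extrapolation. A rough sketch of how such a projection works, where the 2009 starting capacity and the exact doubling period are my assumptions purely for illustration; the article only states the result.)

start_year = 2009
capacity_bytes = 1e12     # assumed: roughly a terabyte of memory per machine in 2009
target_bytes = 1e15       # one petabyte, the brain estimate above
doubling_years = 1.5      # "every year or two"

year = start_year
while capacity_bytes < target_bytes:
    capacity_bytes *= 2
    year += doubling_years

print(year)  # lands in the mid-2020s under these assumptions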
This projection overlooks the dark, hot underbelly of Moore’s law: power consumption per chip, which has also exploded since 1985. By 2025, the memory of an artificial brain would use nearly a gigawatt of power, the amount currently consumed by all of Washington, DC. Compare this with your brain, which uses about 12 watts, an amount that supports not only memory but all your thought processes. This is less than the energy consumed by a typical refrigerator light.
A brain’s success is not measured by its ability to process information in precisely repeatable ways. Instead, it has evolved to guide behaviours that allow us to survive and reproduce, which often requires fast responses to complex situations. As a result, we constantly make approximations and find “good-enough” solutions. This leads to mistakes and biases.
Still, engineers could learn a thing or two from brain strategies. For example, even the most advanced computers have difficulty telling a dog from a cat, something that can be done at a glance by a toddler — or a cat. We use emotions, the brain’s steersman, to assign value to our experiences and to future possibilities, often allowing us to evaluate potential outcomes efficiently and rapidly when information is uncertain.
If engineers can understand how to apply these shortcuts and tricks, computer performance could begin to emulate some of the more impressive feats of human brains. However, this route may lead to computers that share our imperfections. This may not be exactly what we want from robot overlords.
This gets us to the deepest point: why bother building an artificial brain? As neuroscientists, we’re excited about the potential of using computational models to test our understanding of how the brain works. On the other hand, although it eventually may be possible to design sophisticated computing devices that imitate what we do, the capability to make such a device is already here. All you need is a fertile man and woman with the resources to nurture their child to adulthood. With luck, by 2030 you’ll have a full-grown, college-educated, walking petabyte. A drawback is that it may be difficult to get this computing device to do what you ask. — NYTNS