Saturday, August 9, 2008

New conference examines what risks super-intelligent robots might pose

A group of the international community's brightest research minds will meet Thursday at the four-day Global Catastrophic Risk Conference at Oxford University in England. The conference, the first of its kind, aims to provide thought-provoking discussion and analysis of risks that could lead to the end of human life or the end of our planet as we know it.

Why doesn't anyone use the term apocalypse?

Dr. Nick Bostrom, director of Oxford's Future of Humanity Institute, host of the symposium, is fearful that mankind may eventually create such a machine, capable of destroying its creators. He states, "Any entity which is radically smarter than human beings would also be very powerful. If we get something wrong, you could imagine the consequences would involve the extinction of the human species."

I keep hearing this line of thought from Dr. Bostrom, but if humans were to create machines smarter than humans, there is also a strong possibility that those same humans would have developed technology that enhances their own intelligence beyond that of any machine they build.

Bostrom leads a movement known as transhumanism, which dually aims to watch for potential threats in emerging technologies and, conversely, to adopt radical emerging technologies to enrich human life. Bostrom and other transhumanists hope that one day biotechnology, molecular nanotechnology, and artificial intelligence will merge man with machine, yielding humans with increased cognitive abilities who are physically stronger and emotionally more stable. This path, they say, will lead to "posthumans": augmented beings so superior to traditional humans that they are a separate entity.

Most fighter pilots are already transhuman in a rudimentary sense; it is just that the equipment that enhances their senses can be removed after they finish flying.
