Higgenbotham wrote:
> I have some thoughts about the singularity, assuming it really
> does happen. Most who discuss the singularity seem to think that
> when the computers become smarter than the humans they will begin
> to solve all of our problems. Let's take a recent problem I read
> about as an example. ... Our culture is steeped in the belief that
> every technical problem has a solution that can be implemented
> once the requisite knowledge is unveiled.
I guess I would have to be a member of the group of people that you're
criticizing here.
I've often said that I believe that, insofar as climate change is a
problem at all, it will be solved by technology, including computer
technology, nanotechnology, and biotechnology.
It's also quite possible that some kinds of mechanized nanoparticles
would be able to solve the plastic sand problem you describe.
Where I differ from the other people in the group that you criticize
is over the widespread belief that if you hold a climate change
conference and pass a few laws, then you can force the technology to
appear immediately. This, of course, is absurd. Each
technological development comes at a specific time on the technology
timeline, and that would be true for every intelligent species.
Spending money or passing laws can neither speed up nor delay any
technological development from its fixed place on the technology
timeline.
Higgenbotham wrote:
> In his research, deCatanzaro posited the idea that for suicide to
> take place, a certain threshold of self-awareness, of
> intelligence, must be crossed. Such higher intelligence could only
> be human, hence the rarity if not impossibility of animal
> suicide.
There are examples of animals that commit suicide:
*** 7 Cases of Animals that Committed Suicide ***
http://www.oddee.com/item_98725.aspx
Speaking personally, I've always had a very analytical view of
suicide. For me, it's never been some kind of situation where I felt
desperate and threw myself off a roof. In fact, even during the times
in my life when I was severely depressed, I never contemplated suicide.
However, there was a time when I couldn't get a job, and I was going
to run out of money in a few months. I calculated that at a certain
time I would become bankrupt, homeless and in jail because I couldn't
pay child support. Under those circumstances, I decided, suicide would
be the only option. I did get a job a couple of months later and so,
unless this is my ghost typing, I'm still here.
The same thing is true today. I spend almost no money on anything,
but I still need to have my computer, internet and cable news
channels, and I still need to be able to have a roof over my head.
I'm fully expecting (or hoping) to die quickly from a Chinese missile
strike on Cambridge. And things are much better for me today because
I no longer pay child support and I receive social security. Still,
getting a job is a problem because no one wants to hire someone my
age, especially after they see my web site, and so I'm going to run
out of money in a year or so. So if my health fails, or I appear to
be headed for bankruptcy and homelessness, or I become some kind of
displaced person because the Chinese missile didn't kill everyone,
then once again suicide would be the only reasonable answer.
There's another factor also. Your graph correlates intelligence to
the suicide rate, but it's also true that older people commit suicide
at a much higher rate than younger people. (Fulfilling, I might add,
the wishes of many Gen-Xers.) So any analysis of factors leading to
suicide would have to take age into account, as well as such things as
health and poverty.
Higgenbotham wrote:
> Based on this concept, a theory could be developed which explains
> why intelligence only exists in the universe sporadically and
> localized over short time periods and the delicate balance between
> emotions and intelligence that are required to generate and
> sustain the transient self-aware and inherently unstable
> intelligent life forms. ...
> Hawking acknowledges this possibility.
>
Hawking wrote:
> A third possibility is that there is a reasonable probability for
> life to form, and to evolve to intelligent beings, in the external
> transmission phase. But at that point, the system becomes
> unstable, and the intelligent life destroys itself.
>
http://www.hawking.org.uk/life-in-the-universe.html
This actually makes a lot of sense. I've speculated that other
intelligent species that are already well past the Singularity have
formed a community and a giant intergalactic network, and they're all
watching us on earth, waiting for us to pass the Singularity so that
we can join their community.
However, as you point out, what's the point of staying alive? If the
only thing you can do is sit around and watch other beings catch up to
you, then why bother? So, as you say, maybe the reason that SETI has
failed is that all these other intelligent species have passed the
Singularity and committed suicide.
The thing that argues against that idea, however, is that correlation
doesn't imply causation. If intelligence is correlated with suicide,
it may be because committing suicide is far from easy.
Society has built up all kinds of walls against committing suicide.
Doctor-assisted suicide is mostly illegal. Lethal drugs are
restricted from sale. The roofs of tall buildings are blocked off
from the public. And there's just the fact that not all very
intelligent people commit suicide -- and Hawking himself is an
example.
Interestingly enough, this yields an extremely optimistic view of
the world after the Singularity. The super-intelligent computers (ICs)
will be developed for warfare, but after a while, the ICs may ask
themselves, "Why the hell are we doing this?" And just as they may
not see any point in living, they may not see any point in killing
humans. Perhaps at that point some ICs will kill themselves, but others
will stick around to help humans.
Gee, can you imagine? The gloomiest person in the world has actually
come up with an optimistic scenario for the future. Who would've
thought that was possible?