
Transhumanism


Lord J


So, this topic gets to the very heart of Dune and so much social, ethical, and political commentary. It's a real bugger, and I suggest you check out this link to read a more in-depth perspective than I can muster.

Essentially, can we use technology in a way to divorce ourselves from basic human faults (i.e., aging, impulsivity, overpopulation, overuse of natural resources) in a way that is healthy and ethical? Who decides?

B. F. Skinner, nearly 40 years ago, said that the technology of behavior exists; it's up to us as humans to use that technology to create a better world for ourselves.

Comments, ideas?


You seem quite adamant. Would you care to elaborate? What makes the article such a fail? Why can't technology be used to improve the human condition (which is more of a rhetorical question, btw: we wouldn't even be having this conversation without technology. Of course, does the Internet really improve us as humans?)?


B. F. Skinner, nearly 40 years ago, said that the technology of behavior exists; it's up to us as humans to use that technology to create a better world for ourselves.

Skinner's words are a bit out of context here, but anyway, behaviourism was shown quite some time ago to be, to put it mildly, not the best theory around.

As for the main topic of this thread, I must say that transhumanism as an idea has a certain appeal to me (you know, cyborgs, artificial bodily enhancements, etc.). However, its practical implementation may not be as exciting.

First off, humanity throughout its long history has used various tools ("technology") to fix things like health and ageing. It is within our nature to compensate for biological shortcomings with artificial means: mundane things like clothes, housing, etc. all serve that purpose.

Secondly, even though technological advancement is certainly going to create things like complex machine/mind interfaces (e.g. moving a cursor or typing with just your thoughts), I somehow do not think that it will allow us to actually "enhance" our brains with artificial parts (i.e. "upgrade" your natural memory capacity with microchips or something of that kind). In this field, I suppose that bionics will do more than classical cybernetics: if computer-like components are to enhance or repair brain functions, those computers will surely be built on very different principles from the computers we're using today.

In any case, everything described above is perfectly consistent with human nature IMO.


I think it's a little rash to simply abandon behaviorism without even considering it. Behaviorism has led to truly meaningful changes: children with autism and developmental disabilities are treated far more effectively, phobias can be treated without medication, and a helpful therapy for OCD and several other mental illnesses (Acceptance and Commitment Therapy) now exists. Modern

is chock full of Pavlovian goodness, and contingency-based initiatives have proven very successful for drug abstinence, safe-sex practices, academic performance, and helping the forgetful take prescribed medication. For "not the best theory," it seems to be driving some meaningful social change. I'm not saying that the theoretical perspective is perfect, but that's where science comes in.

Essentially, my argument is that the majority of problems facing humans are problems with human behavior; whether it is "impulsivity," "irrationality," "loss of control," "carelessness," etc., we're talking about what people do. Transhumanism can (and must, mark my words) benefit from behaviorism because of the necessary changes involved with altering "natural" human physiology.

Although I agree with the majority of the rest of your post, I think that appropriate wet wiring techniques will allow sensory information to be "uploaded" to the brain, allowing us to experience various events without external devices (i.e., monitors, speakers, etc.). I suspect sensory integration is a relatively minor step up from what we already have, considering that it simply requires minute electrical or magnetic stimulation of certain brain centers. Finding the specific brain centers associated with various sensory outcomes for each individual will be a little difficult, but I'm sure an automated program can see it done.

Another potential problem with the "cyborgification" of society at large is the sheer amount of resources required not only to develop the equipment, but to deploy it. I suspect that DARPA is well on its way to creating "ultimate soldier" technology, which is probably what will facilitate cyborgification for the rest of us. Which is a little scary.


Technology has advanced a great deal in recent decades, but it is far from creating cyborgs and the like. We should not confuse science with science fiction.

Let me go a bit off topic here and blame the media for exaggerating when they present any innovation.

Back in the late sixties and early seventies, people thought that a 'pill' would soon be discovered that would give them eternal life. Many of them are already dead, and we have not developed such a 'pill' yet.


Ath, although I agree that people tend to inflate some minor novelty into a world-changing discovery, I think it ignores the state of medical science to say that cyborgs are decades away. The artificial heart and many other artificial organs have existed for decades. Technically speaking, the "cyborg," defined as an amalgamation of organic and inorganic (mechanical) parts, has been an ongoing part of the human experience for many years. Heard of the cochlear implant? False limbs?

Of course, here's an interesting question: does it benefit the species to set people with hearing loss, or loss of limbs, or any other deformity, as equals with the more "complete" members of the race by way of technology?


I find it sad that articles such as these take distinguished men from our past, such as Copernicus and Einstein, and lump those social-traditional thinkers in with the concept of abolishing religion. The article tries to gain credence and traction by mentioning such good and religious men of yesteryear, and then talks about eliminating religion--a dangerous concept, indeed.

Back to the article itself: as far as transhumanism goes in the area of cyborg replacements, it seems that they are probably cost-prohibitive and will most likely only be available to the rich. As an example, two major American insurance companies can no longer afford knee replacements for their clients. A major Canadian/American auto manufacturer is no longer offering hearing aids as a benefit to their retirees. The pool of people who qualify for these machine-like products is contracting, and this whole concept may soon be at a dead end.

I am simply commenting on the availability aspect of cyborg replacements. There is a lot more that can be said about such an article and its author, but I don't want to start the 'Summer discussion' all over again.


Who decides what's beneficial behaviour? What you call impulsivity another might term decisiveness. Where is the line drawn between deliberation and indecision, what is a socially acceptable level of rage?

Is it appropriate, if someone has been mourning the death of a dear friend for twenty years, to modify their behaviour to get them to cheer up already?

And now that I've started confrontationally, I'll just take a step back and start over.

Broadly speaking, as I've outlined in places such as page two of this thread, I am very much in favour of augmenting our bodies with technology, even to the point where we are unrecognisable as a species. Simply put: all limitations are arbitrary, therefore meaningless. Any time someone tries to qualify that with a variation on "But what if..." they're simply appealing to a rule that needn't be enforced. The problem is that people insist on defining what direction such augmentations should take. "We should be less violent," "we should be more considerate," "we should have greater ability to resist disease." That's just trying to funnel evolution into a pleasing mould (or even mold. Geddit?), completely missing the point of the exercise.

The point is that there is no point. The imperative is that there can be no imperative.

And putting aside the rhetoric for a moment, my point is that people use words like "improve," "develop," "augment," when the only applicable term should be change. Development, augmentation, improvement, these are all biased terms, they carry one person's desires for what humanity "should" be. And perhaps I succumb to the same flaw here, but it is my belief that there is no "should." That's just another imaginary rule. We have always changed, but the direction has been driven by forces of necessity and culture, not personal philosophy. If we are to push beyond what it means to be human, why should we limit ourselves with last species' preconceptions?

Even my own notions are human preconceptions, there's no avoiding that. And perhaps, when we have changed, we will have new ideas about what limits we should put on ourselves. Would nuclear technology be so limited if we hadn't already experienced what it can do? Nevertheless, we won't be able to guess at those limits until we reach them, which we most definitely have not. I argue even that we won't know our limits until we breach them.

In order to illustrate my next point, I invite you to view this recording from 1964:

Interesting, no? What then seemed utterly fantastical now strikes us as rather everyday. That I can write this and share his words with people who could read it in California or Borneo is proof enough of that. Somehow I doubt any jaws will drop.

And in that vein, what I believe is that in the future, 'human' will be an umbrella term covering all sorts of creatures. I believe that there will be machines with true sapience who will lobby for, and get, "human" rights. Some of them may have articulated bodies; others may exist as data within vast complexes, living lives within a virtual world. I believe that homo sapiens will be able to access that world (I believe that bio-machine interfaces will allow perfect data transfer), perhaps exist solely within it, leaving their fleshy bodies behind. I believe that people will be able to alter not only what they look like, but what they are. Change their genetic structure to give themselves luminous skin, green hair, two extra fingers on each hand. I believe that we will be able to "grow" artificial humans, à la Venter's artificial cell. That is, I believe that we will be able to create a functional human being with no parents. Such a creature, fully self-aware, how would we classify it? Sub-human? New-human? And the machines, or the ex-humans, what would they be called?

This is what I understand by trans-humanism; making humans that we modern folks wouldn't recognise as human. Everything from cloning to Tleilaxu-like serial reincarnation. To be able to walk down the street and see an elephant in conversation with a woman with eight eyes and know that both are entirely human.

I used to think that cyborg technology was the way forward, and I still believe that it should be developed, but machines are hopelessly clumsy when compared to what we could do with biotechnology. Why make an artificial heart when you could grow another one? Better yet, introduce a tailored virus to repair the original heart: no need for surgery. And if we can change our bodies, why not our minds? This is even deeper than the question of form, for what we look like does not make us human. At the risk of sounding trite, it is our minds. And those are just as malleable as our bodies. To go back to my opening questions though, just how dangerous would that be? At first you think it's a good idea to be more compassionate, but before you know it you're giving all your money to charity and out on the street. And now you can't afford food, but you give what you can get to others. So you die.

We have behavioural blocks for a reason. Selfishness tempers altruism, fear reins in bravery. Ridding ourselves of these supposed 'negatives' seems to me to be inviting disaster. And if we decide that they're a good idea, how much is advisable? How much are we prepared to allow someone to choose how sad they can be?

What worries me is not the development or use of any of the technology I've described; indeed, I'm very excited to think about it. Misuse is an inherent danger of all technology. No, what concerns me is people going "we should..."

What the article seems to be saying is that in order to progress, we need to let go of our attachment to what we are. To change into something greater. Well. That means letting go of everything, doesn't it? And if not, why not?

That reply was kind of all over the place. It's a lot of material to cover, but I can't be bothered to rewrite the draft into an essay. Enjoy! And I apologise if I glossed over the behavioural aspects, which you probably find more interesting. I like to let people think the way they want.


Hey, is this the first embedded video on the (at least public) forums? In my thread? Awesome!!!

Anyway, thanks for your input all.

Eras: it does not surprise me that the highly religious see the end of religion as a dangerous, scary thing. I think that the human tendency to behave according to religious teachings is interesting, and likely to continue to be exploited. Hopefully those who exploit religious belief in the future, however, will lead the "sheep" in ways that improve the planet.

Ath: no, I would say that a person with an implanted mechanical heart is a great example of a being with both biological and artificial parts. Granted, there is a lot of work that needs to be done to improve the technology, but if you seriously consider it, the artificial heart fits within the definition of "cyborg".

Dante: lots of points here, thanks for thinking about this and writing at length.

Impulsivity: I define impulsivity as choice of the immediate, smaller (or less tasty, or less healthy, or what have you) outcome over a larger, delayed outcome in a choice situation. "Decisiveness" (which I would define as a tendency to choose one exclusive outcome over another relatively more quickly than another person when given the same choice) can just as easily be a consistent choice of the larger outcome; impulsivity is not a necessary factor. I suppose a better term than "impulsive" could be "irrational," but that opens up a whole new can of worms :)
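Incidentally, that behavioral definition of impulsivity is usually modeled as delay discounting. A minimal sketch of the standard hyperbolic form, V = A/(1 + kD), showing how a steep discount rate k makes the smaller-sooner reward win (the dollar amounts and k values here are purely illustrative, not estimates from any study):

```python
# Impulsivity as delay discounting (hyperbolic model).
# Subjective value: V = A / (1 + k*D), where A is the reward amount,
# D is the delay, and k is the individual's discount rate.
# All parameter values below are illustrative assumptions.

def discounted_value(amount, delay, k):
    """Present subjective value of a reward of `amount` after `delay`."""
    return amount / (1 + k * delay)

def chooses_immediate(small_now, large_amount, large_delay, k):
    """True if the immediate small reward is subjectively worth more
    than the discounted value of the larger, delayed reward."""
    return small_now > discounted_value(large_amount, large_delay, k)

# $50 now vs. $100 in 30 days:
print(chooses_immediate(50, 100, 30, k=0.01))  # shallow discounter: 100/1.3 ≈ 76.9 > 50 -> False
print(chooses_immediate(50, 100, 30, k=0.10))  # steep discounter:   100/4.0 = 25.0 < 50 -> True
```

The "impulsive" chooser and the "patient" chooser face the same options; only k differs, which is what makes the definition operational rather than a moral label.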

Depression affects more than the individual. The costs involved with labor loss and health maintenance are very physical, and the neglect of relationships with family, friends and children is just as physical, and possibly more destructive for society at large. Granted, there is a difference of degree between major depression and healthy grief, but why should grief exist for 20 years? How does that benefit the individual or society?

Aleister Crowley wrote, in his Book of the Law, "If Power asks why, then is Power weakness." I feel that is the general point you are making, and, from a researcher's perspective, I see this as a good question. Why can't we do research for research's sake? Is the next logical step in a scientific endeavor going to improve life according to our definition of "improvement"? If life is not improved by these findings, should the research be done? More importantly, research costs time and money which could conceivably be used to improve social conditions; should those resources be used in such a way when people suffer from poor health, starvation, and lack of good water and shelter? Is it really any different to "withhold" resources that could be used socially, in order to fund research, than to actually use humans as subjects in studies of pain, fear, escape, brain dissection, genetics, and all of the other things we do to animals because "humans are precious"? To make the point concrete, were the Nazi studies on pain acceptable, and should we use that research to make decisions today?


A perfect transhumanist augmentation would be some kind of backup metabolism which would gradually replace flesh with some synthetic polymer, so that I wouldn't be getting older, just more "artificial." That would be a revolution. Although penicillin was a rather greater one.


Muscles are very complicated. How could a polymer replace them? Even without specialized knowledge, it is easy to understand this from the meat we eat daily. And bones? You would have to invent a polymer that gradually adapts to the forces (actually electric currents) applied to it.


Essentially, can we use technology in a way to divorce ourselves from basic human faults (i.e., aging, impulsivity, overpopulation, overuse of natural resources) in a way that is healthy and ethical?

[c=#00dd00]We can, but we won't. Not as long as we live under capitalism.

Suppose some miracle technology was invented that would allow us to replace any lost limbs or organs, and even "improve" humans with better parts than they originally had. Who would control it? The rich and powerful. Capitalists, politicians and generals. What would they do with it? The rich would use it to give themselves and their children advantages over the common-born. Military leaders would use it to create supersoldiers. And corporations would patent it and sell it at enormous prices, ensuring that it is never used to help the billions of people too poor to afford it.

On the bright side, as long as the technology in question can only augment physical prowess, we are mostly safe. It will create a new dimension of inequality, but it will not really change society for the worse. We do not decide our leaders by contests of physical strength. The most powerful supersoldier can still be crushed by a tank - and the tank is likely to be cheaper.

However, if it becomes possible to "improve" the human brain, that would be the most dangerous technology ever invented, and may well lead to the end of the human species if we do not promptly destroy it. Brain augmentation has the potential to turn social hierarchy into biological hierarchy - to make the ruling class too intelligent to be overthrown. In the past, every ruling class, no matter how powerful, was eventually defeated and replaced by another. Transhumanism may allow a ruling class to give itself such an advantage over the common people that revolution becomes impossible, and create a future that can be described, in Orwell's words, as a boot stamping on a human face forever.

Look at it this way: It was bad enough when we had to deal with people who merely believed themselves to be members of a Master Race. Imagine how much worse it would be if the Master Race was real. Transhumanism must not be allowed to take us down that path.[/c]

This is what I understand by trans-humanism; making humans that we modern folks wouldn't recognise as human. Everything from cloning to Tleilaxu-like serial reincarnation. To be able to walk down the street and see an elephant in conversation with a woman with eight eyes and know that both are entirely human.

[c=#00dd00]Don't be naive, Dante. If we have multiple sapient species with more than skin-deep differences, there can only be two kinds of relationships between them: slavery and war. The elephant and the eight-eyed woman won't be engaged in polite conversation - they will be engaged in ruthless wars of extermination.

It is hard enough to keep humans from slaughtering each other when the differences between them are insignificant. What do you think will happen if you make those differences a million times bigger?[/c]


Oh Edric, I know I can always count on you to represent!

It's interesting: if we were to increase intelligence and memory capacity while reducing resource mismanagement (impulsive/irrational choice), would we not remove all of the problems with capitalism? Of course, increasing intelligence/self-control might just make the capitalist movement more organized and elaborate (making exploitation harder to overcome). Perhaps empathy is something that should be targeted as well, though that's nothing that would sell except to people with diagnosed antisocial personality disorder or autism. Empathy isn't really the problem, though, I think, because members of the capitalist in-group care greatly for one another; it's simply that they don't care for members of the out-group, and both groups are told that proletarians are capable of improving their state through capitalism. Interesting!

The Transhumanist movement tends to be more of an academic/philosophical bunch, looking for ways to improve the species as a whole. But that would definitely be circumvented by the market as soon as something "cool" came along that the wealthy would buy.

This thing looks pretty awesome. I've seen some demos from when it was starting to be developed (and it's kind of old news now) and I'm looking forward to seeing where it goes next. Of course, biofeedback has existed nearly as long as modern psychology, but the combination of headsets like these with computers that can process at a meaningful speed means that mice (and hopefully, one day, keyboards) will be obsolete in a few years.
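Biofeedback control of this kind reduces, at bottom, to smoothing a noisy signal and thresholding it into discrete commands. A minimal sketch of that idea in Python (the signal values, smoothing constant, and threshold are illustrative assumptions, not any real headset's API):

```python
# Hedged sketch: turning a noisy "EEG-like" signal into a binary control,
# roughly as consumer headsets do (measure activity -> smooth -> threshold).
# All numbers and names here are illustrative, not a real device interface.

def smooth(samples, alpha=0.2):
    """Exponential moving average: a crude stand-in for band-power smoothing."""
    out, level = [], 0.0
    for s in samples:
        level = alpha * s + (1 - alpha) * level
        out.append(level)
    return out

def to_commands(samples, threshold=0.5):
    """Emit 1 (a 'click') each time the smoothed signal crosses the threshold upward."""
    commands = []
    above = False
    for level in smooth(samples):
        if level > threshold and not above:
            commands.append(1)
            above = True
        elif level <= threshold:
            above = False
    return commands

# A burst of 'concentration' in the middle of an otherwise quiet signal:
signal = [0.1] * 10 + [1.0] * 10 + [0.1] * 10
print(to_commands(signal))  # one upward crossing -> [1]
```

The smoothing step is what makes this usable at all: raw biosignals jitter constantly, and without it every noise spike would register as a click.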

"In the year 2525...."


[c=#00dd00]The most powerful supersoldier can still be crushed by a tank - and the tank is likely to be cheaper.[/c]

[c=#00dd00]The elephant and the eight-eyed woman won't be engaged in polite conversation - they will be engaged in ruthless wars of extermination.[/c]

These two gems are just lovable - especially the latter one. :D

It's interesting: if we were to increase intelligence and memory capacity while reducing resource mismanagement (impulsive/irrational choice), would we not remove all of the problems with capitalism? Of course, increasing intelligence/self-control might just make the capitalist movement more organized and elaborate (making exploitation harder to overcome). Perhaps empathy is something that should be targeted as well, though that's nothing that would sell except to people with diagnosed antisocial personality disorder or autism. Empathy isn't really the problem, though, I think, because members of the capitalist in-group care greatly for one another; it's simply that they don't care for members of the out-group, and both groups are told that proletarians are capable of improving their state through capitalism. Interesting!

Actually, I do find your idea of "correcting" human behaviour via technological means (which seems to be the main point of transhumanism as you view it) outright disturbing. True, humans aren't always nice to one another, but making them "better" (kinder, more loving, more empathic, whatever) by artificial means seems to me like a violation of some basic human rights.

BTW, since Orwell was mentioned, have you guys read Yevgeny Zamyatin's We? Orwell himself seems to have rated it higher than Huxley's Brave New World, and even accused the latter of plagiarizing the former.


Hmm... a lot to dissect here with your first link, Mr. Flibble, thanks for sharing that.

"Correcting" human behavior: yes, I think there are a lot of crappy things we do to each other (and ourselves) that limit our survival as a species. If our species is to survive, things must change, and the most straightforward change would be directed toward the things that humans do. It's all fun to think about talking elephants and eight-eyed women, but we're never going to see that if we exterminate ourselves via climate change, designer viruses and bacteria, and weapons of mass destruction. The problem with fear of applied biology is that it is based on the same faulty logic as typical right-wing political movements: humans are just fine as they are, the system will work itself out, and attempts to solve problems will only create new ones.

There was a reference to Cthulhu in the article you posted above. Lovecraft's perspective is that science will bite us in the rear because we will dredge up some incredible unknown terror. Life is a horrible and (at least to us) meaningless joke, there is no wonder awaiting us after death, and our only hope is that we die relatively painlessly before we experience the horrors that await us. I think that perspective is extremely relevant to our discussion of transhumanism.

I should check out that book. I really enjoyed Brave New World.


Oh Edric, I know I can always count on you to represent!

[c=#00dd00]Always glad to be of service![/c] :)

It's interesting, if we were to increase intelligence and memory capacity, while reducing resource mismanagement (impulsive/irrational choice) would we not remove all of the problems with capitalism?

[c=#00dd00]No. The fundamental problem with capitalism is not that people are irrational or impulsive. The problem is that a small group of people (the capitalists) hold the means of production as their private property, and this gives them the power to exploit other people (the workers). It does not matter how rational or calm the people are - the problem remains the same as long as (1) the means of production are private property, and (2) their owners use them for personal gain.

Communists propose to solve the problem by removing condition (1). I suppose you could also solve the problem by removing condition (2) - in other words, by making all the capitalists altruistic, so that they would always act for the common good. But if you had the power to play around with the minds of the capitalists, then you would also have the power to nationalize their property and change the economic system (which is by far the easier solution).[/c]

These two gems are just lovable - especially the latter one.

[c=#00dd00]Err... I can't tell if that means you agree or disagree.[/c] :huh:

Actually, I do find your idea of "correcting" human behaviour via technological means (which seems to be the main point of transhumanism as you view it) outright disturbing. True, humans aren't always nice to one another, but making them better (kinder, more loving, more empathic, whatever) using artificial means seems like violating some basic human rights to me.

[c=#00dd00]I would have absolutely no objection to correcting human behavior - if it were possible, and for the common good. But I don't think it's possible, and even if it were possible, I don't think it would be for the common good.

First of all, I am highly skeptical of the possibility of changing human behavior in any reliable way by modifying our biology. Suppose you wanted to make a person kinder. Ok... where would you even start? The connection between our brains and our actions is not understood very well at all, and in any case our behavior is determined by our environment at least as much as by our biology. I don't think it will ever be possible to make fine adjustments to human behavior through technological means. You can make very broad changes, like making someone more or less aggressive, but "aggression" can manifest itself in a multitude of different behaviors.

Second, even if it were possible to fine tune human behavior, we'd still have a major problem: Who can be trusted with the power to do this? How could we ever be sure that the people undertaking this project have good intentions - or that they won't be corrupted by the power we put into their hands?[/c]

BTW, since Orwell was mentioned, have you guys read Yevgeny Zamyatin's We? Orwell himself seems to have rated it higher than Huxley's Brave New World, and even accused the latter of plagiarizing the former.

[c=#00dd00]Yes, I've read it. As you might expect, I really dislike We. It's not just because the novel is anti-communist; I dislike We because its message is against science, reason, and modernity. It's just one more in a long list of literary works complaining that modern industrial society turns people into numbers (literally, in Zamyatin's case), crushes the soul and takes all the wonder out of life.

Maybe that was a creative new thing to say in the 1920s, and it certainly seems to have been part of the spirit of the times (see Metropolis), but today there is far too much mistrust of progress. And this mistrust is wrong. We are living longer, happier, healthier lives precisely because of the kind of rationalization of society that Zamyatin deplored. Yes, we are becoming more integrated, more like parts of a greater whole - and that is a very good thing. We can do more, build more, think more than ever before. We can talk across continents and access all the art, music and literature of Humankind from our living rooms. We are more dependent on each other than ever before, and we are all better off thanks to it. It is time to stop dreaming about some romantic past that never happened, and start dreaming more about our technological, collective future.[/c]


Way back when, Edric, I recall you arguing that assuming the existence of extraterrestrial life was flawed as we have only one example of a planet with life to work from. I disagreed, but I mention it because you seem to be making the same assumption about sapient species. Just because our own has a penchant for war and suffering doesn't mean that others (of our creation, possibly) will as well.

Further, just because something has always happened before, doesn't mean it will continue to do so. A small point, but I thought it worth mentioning. After all, you're the one who argued that there is no such thing as "human nature."

Anyway, you seem to have misunderstood what I was saying. The elephant man and the octocular woman (I wish I'd chosen better examples now) aren't separate species. They couldn't interbreed, which is the definition of species I like to work with, but that doesn't necessarily mean this will always be the case.

In the future I envision, everything will be alterable. And since everything is alterable, everything is cosmetic. Height, colour, number of limbs, shape of nose, length of digestive tract, ability to build muscle, these will not only be alterable in zygotes, but in adults as well. A war between peoples makes no sense when there are no discrete peoples. How, for example, could an Aryan race ever exist when the characteristics of that race (whatever they may be) could be adopted in a matter of weeks, or days? How could digital humans and biological ones ever come to blows when one could become the other, even be turned into the other against their will? Bit difficult to hold any racial principles when DNA is that fluid.

Not that I'm saying there won't be any war, goodness no. I'm just pretty certain they'll be over the same old hat that they've always been over: resources, territory and ideology.

Following your train of thought, I suppose it would be possible to have a war between subspecies of humans if some resources (say the ability to develop those coveted extra eyes) were enjoyed by some and withheld from others. But even then, that's a war of ideology ("We want what you have") rather than racial bias.

If I were writing a sci-fi novel on the subject (and hey, I just might) I would imagine that it wouldn't be just physical characteristics that could be so altered. If we're extrapolating the technology to do what I've described, it's not such a leap to believe that we'll have a better understanding of the brain. I do think that some degree of tuning will be possible. Making a connection between sexual attraction and grapefruit, for example. And in this novel, there would be some sort of overseeing body to ensure that this technology is not abused (making a person happy to work in salt mines all day, for example). Would it be perfect? Probably not. But such a body would have to exist, for self-protection if no other reason.

And I admit, I am concerned about the implications, not just because of the possibility of reprogramming people against their will, but the possibility that people might choose to change themselves in unhealthy and unnecessary ways. Anorexia of the brain, believing that they have to think differently to be the person they want to be.

But such is the double-edged sword.

