
What do you guys think of transhumanism and the technological singularity? I recently read Kurzweil's book, and I live in the Bay Area, where these topics are common, so I wonder if Stefan Molyneux has discussed them. If you are not familiar with the concepts, I have included a few links:

Ray Kurzweil explains the singularity: https://www.youtube.com/watch?v=1uIzS1uCOcE

 

Transhumanism on Wikipedia: http://en.wikipedia.org/wiki/Transhumanism

 

An introduction to transhumanism (video): https://www.youtube.com/watch?v=bTMS9y8OVuY

 

Libertarian Transhumanism: http://en.wikipedia.org/wiki/Libertarian_transhumanism

 

GF2045, an initiative started by a Russian billionaire to achieve transhumanism: https://www.youtube.com/watch?v=01hbkh4hXEk

Link to comment
Share on other sites

Stefan Molyneux once said that if the Singularity happened, he would welcome it, but he has not yet really engaged with the philosophical implications. Unfortunately, I do not remember when he said this.

 

He also recently posted a short comment on Facebook that I roughly remember as: "Just watched the movie Transcendence. Oh dear."

 

Which Kurzweil book did you read? 

I liked The Singularity Is Near (2005); it partly summarizes his older books and gives a broader overview than his 2012 book, How to Create a Mind.

Link to comment
Share on other sites

What do I think of transhumanism?

 

I classify transhumanism as self-improvement. As a moral category, it would be tantamount to working out, eating healthful foods, or devoting oneself to a useful art or study.

 

I'm a fan of the healing arts. Medicine. Gene therapy. Pharmacology. All of 'em.

 

If health can be improved, then I think that "fitness" is an important aspect of increasing human freedom (thereby increasing choice, and, thus, moral choice).

 

Without many choices there aren't very many moral choices. And you can't make choices if you're dead... so, living longer is linked to living well on some level.

 

To put it another way: gnats don't have time to read UPB... the lifespan of a gnat is just long enough to eat, mate, and die. And to counter the example of, say, Galapagos tortoises, I would say that they have a slow time perspective (essentially, a slower metabolism)... I'd say that the inner life of a 100-year-old tortoise is short.

 

(A quick aside: time perspective changes with age and with the number of memories logged... as a child, many memories are logged very quickly, and a year seems like forever. As an adult, people have "seen it all before," log few new, distinct memories, and a year might just fly by... even though a year is obviously the same unit of time for the adult as for the child.)

 

My point is that as life [bios] gets better, people have more opportunity to do good and to enrich their lives... in a way, it is a positive feedback loop of morality, quality of life, and advances in technology and freedom.

 

I'm all for increasing good through increases in standards (of living, of health, of interactions with people, etc.).

 

I'm a biochemist, and I'd love to research how to live longer. My medium-term goal is to pursue a master's degree in bioinformatics at Tulane, or Johns Hopkins, or wherever I can (to add a little more computer science to my biochemistry, make gene chips and microarrays for high-throughput genetic screening, maybe study a bit of computational biology, and get better with computers and automation on the side).

 

There's nothing wrong with cyborgs, just as there's nothing wrong with prosthetic limbs for amputees. If you can enhance your life (and your body) in a peaceful way, then I see nothing wrong with that.

 

Of course there are always bad ways to go about the pursuit of enhancement...

 

This is where bioethics needs to take a long, hard look at experimental designs, at what is (and what is not) OK to do with humans, and at a number of related topics.

 

For instance, how does a technologist minimize negative externalities (such as increasing antibiotic resistance by introducing a product into cattle feed... most of the world's antibiotic production is actually fed to livestock, by the way)? Can a technologist adopt a more clever approach (say, using bacteriophages instead, to kill only harmful bacteria, rather than selecting for resistant strains by wiping out all competing bacteria)? The more clever the approach, the more efficient the technology, and the less spill-over there is... but how much of R&D should go to optimization? How does a market incentivize cleverness that is subtle?

 

Capitalism does a great job of allocating resources, but it is ultimately dependent on economic agents (who are also moral agents) to make decisions about where to allocate them.

 

It is sometimes difficult to use, say, environmental forensics to pin down the source of negative externalities caused by a technologist on the cutting edge...

 

Technologists on the cutting edge are well served by understanding ethics (in my case, bioethics). I may gain expediency by cutting corners... but I have to ask, "Is this moral?" For instance, it would be very expedient to jump to human trials for a drug or treatment... but would it be moral? Have I been diligent, and am I worthy of the trust of my subjects?

 

What if no one would ever know that I introduced a repeating sequence into a genome that results in an anticipation disease three generations from now (http://en.wikipedia.org/wiki/Anticipation_%28genetics%29)? This is not a question that the market can solve. In fact, it may be quicker and cheaper to use a cassette that carries a dangerous sequence alongside a favorable one... Basically, what do you do when no one is looking?

 

At the cutting edge, all alone and ahead of people, one must rely on the integrity of one's morals and knowledge, because social cues don't exist on a frontier. Transhumanism is very cutting edge...

 

(Again, living longer would mean that I would be alive, and responsible, for more generations; thus justice would find me, there would be additional consequences, and the moral argument would become more and more tangible as the children's children of a genetically engineered subject sought restitution for my sloppiness, in this hypothetical.)

 

Unfortunately, few laypeople can make an informed decision on a technical matter. Voting, consensus, social cues (and sometimes even markets) can't tell technologists what they ought to do in the moment. There is a lag in adopting technologies... sometimes the lag is longer than a person's lifetime. How does one spend one's life when one cannot get feedback?

 

Is it worth it even if you won't see it in your lifetime?

 

I would ask Stef whether he feels it is worth it to improve philosophy even though he won't see the full results of his actions within his lifetime... He would probably say that it is worth it for him... but Stef has a compass; not everyone has a very good compass.

 

I think that the relative peacefulness that we see at this point in history (as compared to, say, the killing fields of past epochs) relates to quality of life. I think that death and disease cause a lot of destructive behaviors in humans.

 

I don't intend to have children, so peaceful parenting is not something in which I can achieve much primacy or excellence. I'm not a famous philosopher. I'm a simple biochemist.

 

What is my philosophy? What are my ethics? How do I actualize my ideals?

 

I spent two years working in research oncology (among other jobs). I have some skills which may be useful. I can save people's lives (in some limited sense), but I can't give them a life (or an inner life, if you will). I can research how to cure people like Stef, but I cannot be (nor would I want to be) Stefan, or anyone else. I think that's an example of a longer life yielding more good in the world.

 

I see that as, essentially, my lived philosophy. It is, essentially, work ethic: my works are in harmony with my ethics.

 

I think transhumanism raises the stakes. Longer lives raise the stakes. The added freedom raises the stakes... but it also increases moral agency... I'm all for increasing moral agency, and so I will, as far as I am able.

 

...High stakes...

 

...There was a time when I wanted to kill everyone (before philosophy, before I read Atlas Shrugged)... I wanted to rain death from the skies by isolating botulinum toxin, one of the most potent toxins known to man, produced by the bacterium Clostridium botulinum. I have the skills: I can culture anaerobic bacteria and isolate the biomolecule, a protease so lethal that the LD50 [the dose lethal to 50% of subjects, per kilogram of living tissue] is about 1 nanogram... per kilogram... one gram could kill one million people (http://www.siumed.edu/medicine/id/current_issues/BotulismPPT.pdf)... I could have run it on a chromatography column, done my biochem thing, and seeded clouds with it, or put it in the water, and literally rained death upon humanity... And it's cheap, too. Scary cheap.

 

It is fortunate that Ayn Rand got to me first.

 

It's even more fortunate that it set me down a path (Objectivism) which led me here, to moral and constructive pursuits. FDR can't take full credit for preventing a genocide... but it helped prevent one to a degree (again, Atlas Shrugged was my triage and intervention).

 

...So, what is it about philosophy, here, that results in my interest in transhumanism instead of genocide?

 

Well... empathy and healing, I suppose, in short.

 

I think that technology is amoral. Any tool is amoral, and depends on the morality of the wielder. But humans... they have moral agency.

 

I like humans (now that I can empathize with myself and others, and have a healthy distance from an abusive past... I mean, what mad scientist doesn't have a history of being abused?). I'd like to heal them and give them freedom. If people want to enhance themselves, then I think that's great. With more knowledge comes more power, and as people get more powerful, violence becomes less of an option (because of the raised stakes and the "discipline of constant dealings"), and good becomes more preferred.

 

I expect the future to be... interesting, to say the least. Transhumanism is one such interesting thing that is highly probable (knowing what I know).

 

As Doug Casey, who advocates getting rich to get ahead of these technologies and to adopt things that enrich and extend life, would say: "Hold onto your hats."

Want more?

 

... I did this in college: I made the enzyme lactate dehydrogenase better [faster] =) (http://en.wikipedia.org/wiki/Lactate_dehydrogenase). LDH is the enzyme that converts pyruvate (the breakdown product of a simple sugar) into lactic acid (that stuff that supposedly makes your muscles sore after a workout); this reaction lets you get a quick burst of energy when, say, you're lifting weights or sprinting.
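(For reference, the reaction LDH catalyzes; at physiological pH the lactic acid is present as lactate:)

pyruvate + NADH + H+ <-> L-lactate + NAD+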

 

Here's my experimental design: a simple point mutation, which I picked because I wanted to study the hinge region that clasps the lactate:

 

Hypothesis: Point mutagenesis of Alanine 98 (highlighted in red) to a Glycine will increase the degrees of freedom of the hinge region, which will, in turn, allow greater access to the active site and faster kinetics. Alanine's methyl group sterically hinders the hinge and limits the phi/psi angles. Glycine lacks the methyl R group and would have greater rotational freedom because its R group is a single hydrogen. The null hypothesis would show that this rigidity is necessary to the hinge's function.

[Four structure images were attached to the original post; their captions follow.]

Fig. 1) Normal LDH structure. Hinge regions in red, loop in blue.

Fig. 2) Normal LDH. Hinge regions in red, loop in blue.

Fig. 3) Proposed site of mutagenesis in yellow.

Fig. 4) Site of mutagenesis in yellow. The methyl group of the highlighted Alanine points to the lower left.

 

Primer design:

Query 1 MSTKEKLIDHVMKEEPIGSRNKVTVVGVGMVGMASAVSILLKDLCDELALVDVMEDKLKG 60

MSTKEKLIDHVMKEEPIGSRNKVTVVGVGMVGMASAVSILLKDLCDELALVDVMEDKLKG

Sbjct 1 MSTKEKLIDHVMKEEPIGSRNKVTVVGVGMVGMASAVSILLKDLCDELALVDVMEDKLKG 60

 

Query 61 EVMDLQHGGLFLKTHKIVGDKDYSVTANSRVVVVTAGARQQEGESRLNLVQRNVNIFKFI 120

EVMDLQHGGLFLKTHKIVGDKDYSVTANSRVVVVTAG RQQEGESRLNLVQRNVNIFKFI

Sbjct 61 EVMDLQHGGLFLKTHKIVGDKDYSVTANSRVVVVTAGGRQQEGESRLNLVQRNVNIFKFI 120

 

Query 121 IPNIVKYSPNCILMVVSNPVDILTYVAWKLSGFPRHRVIGSGTNLDSARFRHIMGEKLHL 180

IPNIVKYSPNCILMVVSNPVDILTYVAWKLSGFPRHRVIGSGTNLDSARFRHIMGEKLHL

Sbjct 121 IPNIVKYSPNCILMVVSNPVDILTYVAWKLSGFPRHRVIGSGTNLDSARFRHIMGEKLHL 180

 

Query 181 HPSSCHGWIVGEHGDSSVPVWSGVNVAGVSLQTLNPKMGAEGDTENWKAVHKMVVDGAYE 240

HPSSCHGWIVGEHGDSSVPVWSGVNVAGVSLQTLNPKMGAEGDTENWKAVHKMVVDGAYE

Sbjct 181 HPSSCHGWIVGEHGDSSVPVWSGVNVAGVSLQTLNPKMGAEGDTENWKAVHKMVVDGAYE 240

 

Query 241 VIKLKGYTSWAIGMSVADLVESIVKNLHKVHPVSTLVKGMHGVKDEVFLSVPCVLGNSGL 300

VIKLKGYTSWAIGMSVADLVESIVKNLHKVHPVSTLVKGMHGVKDEVFLSVPCVLGNSGL

Sbjct 241 VIKLKGYTSWAIGMSVADLVESIVKNLHKVHPVSTLVKGMHGVKDEVFLSVPCVLGNSGL 300

 

Query 301 TDVIHMTLKPEEEKQLVKSAETLWGVQKELTLGSSSHHHHHH 342

TDVIHMTLKPEEEKQLVKSAETLWGVQKELTLGSSSHHHHHH

Sbjct 301 TDVIHMTLKPEEEKQLVKSAETLWGVQKELTLGSSSHHHHHH 342

atgtccacca aggagaagct catcgaccac gtgatgaagg aggagcctat tggcagcagg

aacaaggtga cggtggtggg cgttggcatg gtgggcatgg cctccgccgt cagcatcctg

ctcaaggacc tgtgtgacga gctggccctg gttgacgtga tggaggacaa gctgaagggc

gaggtcatgg acctgcagca cggaggcctc ttcctcaaga cgcacaagat tgttggcgac

aaagactaca gtgtcacagc caactccagg gtggtggtgg tgaccgccgg cgcccgccag

caggagggcg agagccgtct caacctggtg cagcgcaacg tcaacatctt caagttcatc

atccccaaca tcgtcaagta cagccccaac tgcatcctga tggtggtctc caacccagtg

gacatcctga cctacgtggc ctggaagctg agcgggttcc cccgccaccg cgtcatcggc

tctggcacca acctggactc tgcccgtttc cgccacatca tgggagagaa gctccacctc

cacccttcca gctgccacgg ctggatcgtc ggagagcacg gagactccag tgtgcctgtg

tggagtggag tgaacgttgc tggagtttct ctgcagaccc ttaacccaaa gatgggggct

gagggtgaca cggagaactg gaaggcggtt cataagatgg tggttgatgg agcctacgag

gtgatcaagc tgaagggcta cacttcctgg gccatcggca tgtccgtggc tgacctggtg

gagagcatcg tgaagaacct gcacaaagtg cacccagtgt ccacactggt caagggcatg

cacggagtaa aggacgaggt cttcctgagt gtcccttgcg tcctgggcaa cagcggcctg

acggacgtca ttcacatgac gctgaagccc gaagaggaga agcagctggt gaagagcgcc

gagaccctgt ggggcgtaca gaaggagctc accctgggta gctcgagcca tcaccatcac catcactag

 

The highlighted sequence (red in the original post) is the alanine codon: the GCC will be replaced with GGC, in order to make the A->G switch.
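(If you'd like to sanity-check the codon swap in silico, here is a minimal Python sketch. The 33-base window is copied from the coding sequence above, with the Ala98 codon in the middle; the two-entry codon table is just the subset needed here.)

# Sanity check for the A98G swap: replace GCC (Ala) with GGC (Gly)
# in the 33-base window around codon 98 of the LDH coding sequence above.
CODONS = {"GCC": "Ala (A)", "GGC": "Gly (G)"}  # tiny subset of the codon table

wild_type = "GTGGTGACCGCCGGC" + "GCC" + "CGCCAGCAGGAGGGC"  # 15 bases + codon + 15 bases
mutant = wild_type[:15] + "GGC" + wild_type[18:]

print(CODONS[wild_type[15:18]])  # Ala (A) -- the residue being removed
print(CODONS[mutant[15:18]])     # Gly (G) -- the residue replacing it
print(mutant)                    # identical to the forward primer below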

 

The forward primer will be 5` gtgg tgaccgccgg cggccgccag caggagggc 3` (16 bases before the mutant base and 16 after; 33 bp is within the westlab protocol)

The reverse primer will be 5` gcc ctcct gctgg cggcc gccgg cggtc accac 3`

(side by side)

forward 5` gtggt gaccg ccggc ggccg ccagc aggag ggc 3`

reverse 5` gcc ctcct gctgg cggcc gccgg cggtc accac 3`
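(And a quick check that the reverse primer is the exact reverse complement of the forward primer; a minimal Python sketch, with the spaces removed from the primers as written above:)

# Confirm the reverse primer is the reverse complement of the forward primer.
COMPLEMENT = str.maketrans("acgt", "tgca")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

forward = "gtggtgaccgccggcggccgccagcaggagggc"
reverse = "gccctcctgctggcggccgccggcggtcaccac"

assert reverse_complement(forward) == reverse  # passes: the primers pair up
print(len(forward))  # 33 bp: 16 bases on either side of the mutant base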

 

Design Timeline:

Lab 1

  • Get primer.

  • Using circular pBG89-LDH recombinant plasmids (purified earlier), perform a “QuikChange II Site-Directed Mutagenesis” (a PCR using the mutant primer).

  • (If time allows during the same session) Transform competent DH5α cells (as done before) and plate.

Lab 2

  • Select a colony (or colonies) and grow it for determination and expression.

Lab 3

  • Do a digest and agarose gel electrophoresis to make sure we have an appropriately sized digest/plasmid; if good:

  • Express LDH 98A∆G during log phase, using IPTG or a lactose analogue.

  • Pellet the DH5α cells.

Lab 4

  • Lyse and purify mutant LDH (98A∆G)

Lab 5

  • Do SDS-PAGE and kinetics.

...

 

Good fun.

(from http://www.radiolab.org/story/91596-so-called-life/)

...

You can look up the sequence on BLAST, if you want, and download the structure into PyMOL, if you'd like:

 

http://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastp&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome

 

http://www.pymol.org/
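(If you do pull a structure into PyMOL, a few lines of its Python API will reproduce the coloring in the figures above. The PDB ID here is only an illustrative stand-in, not necessarily the structure I used; substitute whichever LDH entry you download.)

# Minimal PyMOL sketch to highlight the proposed mutagenesis site (as in Figs. 3-4).
# NOTE: "1i10" is an assumed, illustrative LDH entry; swap in your own structure.
from pymol import cmd

cmd.fetch("1i10")               # fetch an LDH structure from the PDB
cmd.hide("everything")
cmd.show("cartoon")
cmd.color("green")              # base color for the whole protein
cmd.color("yellow", "resi 98")  # proposed site of mutagenesis (Ala98)
cmd.show("sticks", "resi 98")   # display the alanine side chain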

Link to comment
Share on other sites

I'm curious whether there would be any difference between sentient AIs and humans, and if there were, how much of a difference. Could they really 'feel' emotions? Would their morals be pre-defined lines of code, or would they be able to judge new situations as moral or immoral? Etc.

 

I definitely don't think it's a good idea while we have psychopaths running around with billions of tax dollars; we could see the biggest democide in history if conscious AIs were used by the military.

Link to comment
Share on other sites

I definitely don't think it's a good idea while we have psychopaths running around with billions of tax dollars; we could see the biggest democide in history if conscious AIs were used by the military.

 I share your concern about the military.

 

As an amateur, I think two scenarios are plausible: either mainly code-based AI or feeling/learning AI might win the race to sentience.

But as long as humans are building the machines, there is a role for philosophy either way.

So far, humans are the most intelligent entities on earth, and I assume that before AIs become the most intelligent entities on earth, there will be a time when hybrid humans enhanced with AI are the most intelligent.

 

Just as more intelligent humans are not necessarily more moral, we should not expect strong AIs to be automatically moral.

 

In his newest book, Kurzweil also speaks about the huge importance of ethics in navigating the coming changes. He mentions, for example, the Golden Rule.

Link to comment
Share on other sites

...

Just as more intelligent humans are not necessarily more moral, we should not expect strong AIs to be automatically moral.

 

In his newest book, Kurzweil also speaks about the huge importance of ethics in navigating the coming changes. He mentions, for example, the Golden Rule.

I'd like a bit of clarification on the "more intelligent humans are not necessarily more moral" proposition:

 

I suppose I'd ask for a definition of intelligence. Do you mean raw data that a person can recall? Do you mean useful data that a person can recall? Do you mean the capacity for critical thinking (to see the vital aspects of a thing, i.e., to derive principles)?

 

If intelligence is defined as the amoral recall of facts and data (for instance, I can recall the names of various [bio]molecules and what their functions are), then I agree that "intelligence" is amoral.

 

But if intelligence also contains agency, then I'm not certain of your claim that intelligence [in humans, or anything] is decoupled from morality.

 

In fact, I would suspect that morality could be directly linked to intelligence (and, likely, vice versa).

 

I'm a little unclear on the meaning of the word intelligence (I think it might be too imprecise a term to be useful for my understanding).

Link to comment
Share on other sites

I'd like a bit of clarification on the "more intelligent humans are not necessarily more moral" proposition:

 

I suppose I'd ask for a definition of intelligence. Do you mean raw data that a person can recall? Do you mean useful data that a person can recall? Do you mean the capacity for critical thinking (to see the vital aspects of a thing, i.e., to derive principles)?

 

Indeed, I see that intelligence is too broad a term. This would be my definition of intelligence in this case:

The speed at which somebody can recall information.

The amount of information somebody can access in a given time to make a decision.

The speed at which somebody can detect patterns in his environment.

Link to comment
Share on other sites

Indeed, I see that intelligence is too broad a term. This would be my definition of intelligence in this case:

The speed at which somebody can recall information.

The amount of information somebody can access in a given time to make a decision.

The speed at which somebody can detect patterns in his environment.

With that definition, I agree that intelligence (as you've defined it) is amoral.

 

It doesn't seem like it incorporates mutation, synthesis, recursion, feedback loops, and rational hope. Being rational about when to be rational is something that (I think) adds to my intelligence.

 

Check back in a few years after I'm a better programmer, and I'll let you know what I think about machine learning then. In the meantime:

http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-867-machine-learning-fall-2006/

Link to comment
Share on other sites

Listening to Kurzweil, it seems to me that he is trying to have his cake and eat it too: to be human and non-human, to be uploaded to a computer and still keep his human traits and desires. A being that has no needs does not need intelligence to satisfy those needs. Humans have developed as they have in order to survive in a world of limited resources and danger. I imagine that once a person is uploaded to a computer, the end result will think in ways extremely different from how a person thinks nowadays; who knows what thoughts would be going on in the machine. It might decide to play Pong indefinitely.

 

It reminds me of Richard Dawkins's reference to an immortal robot in his book The God Delusion (I think that's where it is from), a robot that would have no desire to do anything because it does not need to survive.

 

Mises also referenced something similar (see the quote below; I'm including the entire paragraph for context, but the last few sentences are the relevant text).

 

Such is the myth of potential plenty and abundance. Economics may leave it to the historians and psychologists to explain the popularity of this kind of wishful thinking and indulgence in daydreams. All that economics has to say about such idle talk is that economics deals with the problems man has to face on account of the fact that his life is conditioned by natural factors. It deals with action, i.e., with the conscious endeavors to remove as far as possible felt uneasiness. It has nothing to assert with regard to the state of affairs in an unrealizable and for human reason even inconceivable universe of unlimited opportunities. In such a world, it may be admitted, there will be no law of value, no scarcity, and no economic problems. These things will be absent because there will be no choices to be made, no action, and no tasks to be solved by reason. Beings which would have thrived in such a world would never have developed reasoning and thinking. If ever such a world were to be given to the descendants of the human race, these blessed beings would see their power to think wither away and would cease to be human. For the primary task of reason is to cope consciously with the limitations imposed upon man by nature, is to fight against scarcity. Acting and thinking man is the product of a universe of scarcity in which whatever well-being can be attained is the prize of toil and trouble, of conduct popularly called economic. (Human Action)
Link to comment
Share on other sites

By using this site, you agree to our Terms of Use.