
thinking machines


Bah, it would make a "better" movie, not the Independence Day ripoff of "invade Earth, kill humans, exploit resources, leave".

On the other hand, if you mean that they were peaceful (in the sense that they didn't hunt down and exterminate every individual of the alien race that created them, and aren't now out to kill every living being in the universe), then that would make it interesting. I don't know what I would think. Would I think?


This is my theory:

If we ever hit the new century, machines will NOT totally take over our lives or jobs.

Every factory could have robots and other high-tech machinery.

So it would be 50% human, 50% robots.

The robots are only there to make work easier.

But think of the flaws that a fully automated factory could have.

"Spies or terrorists could easily shut down or bomb the whole place with an EMP bomb or something similar."

Maybe there could be security droids, but once again, they can easily be taken out by EMP-related weapons or anything else that can take down a computer (a cup of water ;D).

"There could be 'AI' malfunctions in a robot, and if one robot malfunctions, the whole production line can be sabotaged by its blunders."

"Space travel by robots? You be the judge ::)"

"And 'hackers' could be the #1 problem in that century. It would be like playing Counter-Strike every day: hackers everywhere. They could even break into your bank account, since everybody will try to learn how to hack in the new century. So your name and information are nowhere safe, wherever you go."


Artificial Intelligence is possible. Yeah, go Hofstadter!

But it isn't possible within 17 years. We don't even know exactly what it is. Still, it is possible, for these simple reasons:

DNA is built out of rows of organic "bits". They aren't 1s and 0s, but organic molecules.

So the base is the same.

A neuron is very simple too. The only thing it does is emit a signal once it has received a certain amount of signal from other neurons.

If the base is so simple, then there must eventually be a way to organise it all and build our own intelligent machines.
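The threshold behaviour described above is the classic McCulloch-Pitts model of a neuron, and it really is only a few lines of code. This is a sketch of the idea, not a model of a real biological cell:

```python
def neuron(inputs, threshold):
    """Fire (return 1) once the incoming signals reach the
    threshold; otherwise stay silent (return 0)."""
    return 1 if sum(inputs) >= threshold else 0

# A unit that fires only when at least 2 of its 3 inputs are active:
neuron([1, 1, 0], threshold=2)   # fires: 1
neuron([1, 0, 0], threshold=2)   # silent: 0
```

The hard part, as the post says, is organising billions of such simple units, not building one.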


As any programmer would know, artificial intelligence is an impossible concept. The extent of a computer's possible intelligence is limited by what it can be programmed to do. Even though computers can do all sorts of things, they still have no intelligence. They are dumb. They are driven by the logic of their programming and have no escape from it. They are incapable of thinking abstractly and making value judgements on things that are not measurable. It's really nothing more than a big database of pre-programmed reactions to the things you input.

First off, I believe many programmers would disagree with you here, namely the ones who actually work on the concept.

Secondly, the model you seem to be attacking is the positronic model of artificial intelligence, in which every possible contingency is programmed in ahead of time. That is improbable because it simply requires too much power.

However, people interested in AI no longer take the positronic approach; instead they are interested in machines that have only a few standard/vague rules and learn from there. This is closer to our own intelligence, and indeed Deep Blue was not programmed with all possible chess moves beforehand (there are more possible chess combinations than there are atoms in the universe) but with a few basic rules it could build on.
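A tiny example of the "few rules plus learning" idea is the classic perceptron update rule, which learns a function from labelled examples instead of having the answers pre-programmed. (This is a textbook illustration of machine learning in general, not how Deep Blue itself worked.)

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from labelled examples with the perceptron
    rule: nudge the weights toward each mistake."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach it logical AND from four examples; nowhere is AND itself
# written into the program.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The behaviour comes out of the training data, not out of a table of pre-programmed responses.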

Third, you are talking about modern-day computational power, and indeed that is dumb compared to us. But machines, supercomputers anyway, are almost at the level of a snail. Machines have advanced this far in less than a hundred years; it took biological organisms almost a billion to reach this level. It is thus estimated that machines are evolving 10 million times faster than biological organisms and will soon surpass us.

Fourth, you complain that a machine needs programming... but what are genes and environmental experiences besides our own programming? Human beings and other life forms are hence just as "programmed" as a machine, just by the blind processes of evolution, genetics, epigenetic mechanisms and environmental conditioning instead of by a person via software.

Lastly, here is a link to AI research being done at MIT:


It has info on how machines are being programmed to learn among other things.


1. Correct, but those same programmers are technically inaccurate in using the word intelligence. Even I have used the word too lightly in this thread.

2. And as I recall, the best chess player in the world consistently beat Deep Blue until fatigue became a factor. Anyway, the concept you are referring to is extrapolation reactional programming. From what I remember reading about it, Deep Blue was programmed to extrapolate every possible move it and its opponent could make up to a certain number of moves ahead, determine the possible losses, checks or checkmates from those moves, and use the move that was most favourable. It is not able to analyze the game critically. It is simply able to compute possible courses of action through programmed parameters, and its selection is purely numerical. Obviously it's exceptionally good at what it does, but do you really think that qualifies as intelligence? Or our sense of intelligence?
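The look-ahead being described here is essentially minimax search. A minimal sketch on a toy take-1-or-2 game (whoever takes the last object from the pile wins); Deep Blue's actual evaluation was far more elaborate, but the "purely numerical selection" works the same way:

```python
def minimax(pile, maximizing):
    """Score a position: players alternately take 1 or 2 objects
    from a pile; whoever takes the last object wins.
    +1 is a win for the maximizing player, -1 a loss."""
    if pile == 0:
        # The previous player took the last object and won.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Purely numerical selection: extrapolate every line of play
    and pick the move with the best resulting score."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, False))
```

From a pile of 4, `best_move` takes 1, leaving the opponent a losing pile of 3; at no point does the program "understand" the game.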

3. Machines don't 'evolve'; we program them. In order for them to surpass us, we would have to surpass ourselves. I'm not quite sure what you mean by your snail comparison, but even today's computers can't make better snail intelligence than snails can.

4. Genes do program our biological functions (growth, metabolism, etc) and our environments do shape who we are, but people have a perception of reality that cannot be duplicated by a computer. We can perceive and observe circumstantially, not just numerically. Everything a computer can take as input has to be perceived as something recognizable by its programming, and even the supercomputers you referred to have to be able to perceive it with some kind of numerical value.


1) That sounds like begging the question. And even if they do not have intelligence on our level now, remember we are not speaking of now but of the future.

2) Kasparov lost his second game and barely won his first. So much for fatigue. Also, all six games were spaced days apart: game 1 was May 3rd, game 2 May 4th. In all, over the course of eight days, Deep Blue won two games (the last by a large margin), Kasparov won the first (barely), and three were draws.


And keep in mind that Deep Blue is just what we have now; computer technology is doubling each year, and the rate of increase is itself increasing. Do you think Kasparov could beat a computer twice as powerful as Deep Blue? I don't.

3) That is simply invalid. Machines can already do many things much better than us, and they do evolve, even if with our assistance. More importantly, some machines are already learning how to design and program each other; in fact, some microchips are now designed by machines because the complexity is beyond human hands. That argument is equivalent to saying that before we can make a machine that can fly, we would have to fly ourselves.

4) But we do perceive in numerical values, with bits and data. We interpret certain lightwaves with certain amounts of pigment that make a certain number of neurons fire in a specific location. That is what we call sensation, and it requires underlying programming, underlying software and hardware. We need eyes, neurons and the right brain structure in order to have and interpret perceptions in a sensible and self-aware manner. Remove that, and a human can no longer see or perceive. Machines will simply do the same with their own hardware and software.


For the record:

Deep Blue didn't win because it was better, but because it used a lot of calculation power, which Kasparov doesn't have.

Recently, new chess programs have been designed that play a lot more cleverly: they exclude a lot of moves before they start calculating. Deep Blue didn't exclude many moves before starting to calculate.
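"Excluding moves before calculating" is what chess programmers call pruning, and alpha-beta pruning is the classic form: once one reply refutes a move, the remaining replies need not be examined. A sketch on a toy take-1-or-2 game (last take wins), counting positions visited with and without cutoffs; real engines add heuristic move ordering on top of this:

```python
def minimax(pile, maximizing, counter):
    """Plain exhaustive search over the toy game, counting nodes."""
    counter[0] += 1
    if pile == 0:
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing, counter)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

def alphabeta(pile, maximizing, alpha=-2, beta=2, counter=None):
    """Same search, but stop examining moves once a cutoff shows
    the opponent would never allow this line of play."""
    if counter is not None:
        counter[0] += 1
    if pile == 0:
        return -1 if maximizing else 1
    if maximizing:
        value = -2
        for take in (1, 2):
            if take > pile:
                break
            value = max(value, alphabeta(pile - take, False, alpha, beta, counter))
            alpha = max(alpha, value)
            if alpha >= beta:
                break   # cutoff: minimizer already has a better option
        return value
    value = 2
    for take in (1, 2):
        if take > pile:
            break
        value = min(value, alphabeta(pile - take, True, alpha, beta, counter))
        beta = min(beta, value)
        if alpha >= beta:
            break       # cutoff: maximizer already has a better option
    return value

plain, pruned = [0], [0]
full_value = minimax(12, True, plain)
cut_value = alphabeta(12, True, counter=pruned)
```

Both searches return the same value for the position, but the pruned one visits far fewer positions, which is exactly the saving the post describes.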

But even when a computer beats a human at chess, it is still far from real Artificial Intelligence. I promise that 'we' will get there one day, though...


The higher the difficulty setting in a chess program, the more possible outcomes it calculates. Not every outcome is programmed into the program; with a number of rules and conditions it can derive outcomes. Deep Blue is a powerful computer, able to think through millions of outcomes. It's a good start towards true AI.


Can Deep Blue tie my shoes, though? Deep Blue is good at one thing, and that's it. I've yet to see or hear of computers that can do more than just basic tasks. Vision is also a huge problem, as it takes a lot of processing power to figure out what something is, and even then the programs are usually made to recognise only certain objects, not to identify any possible object the way humans can. We are still a long way from true thinking machines, if they are even possible.


1. I don't see how that is possible, but we may not hold the same perception of what constitutes intelligence. Mine, while probably not entirely accurate, crosses over with sentience, free will, and the capacity to learn. Show me a computer that can go against its own programming and "think" outside of its own parameters; only then will I consider it a true intelligence.

2. Actually that wasn't the match I was referring to. ;) It doesn't really matter, though. I'm well aware that computers will perpetually outdo humans in a broad number of areas, but, by definition, there are some things they can never do.

3. And who designed the machines that can design and program each other? Who made the machines that make microchips? No matter how you trace it, people made the machines. As to your last statement, I was speaking about abstract concepts. You know, the adage that one can only create from within oneself?

4. You refer to the ability to sense; I refer to the ability to perceive, assume, understand, and then decide. We can understand things that are completely beyond our 'programming', be it genetic or environmental. And we can go against our 'programming' at will. Every day, firefighters go against every instinct and enter burning buildings, fully aware that they may not make it out alive and that their efforts might be futile anyway.


A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings except where such orders conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Logical. But if robots can have intelligence, and can develop into more than they are, who's to say they cannot stray from those rules with any more difficulty than humans stray from various kinds of law? (Which, by the way, I think is impossible.)

