Thursday, April 05, 2007

 

Required materials

1. Inputs
2. Memory
3. Forced action
4. Drive?

1. Inputs with values
Do you want X to happen again, or do you not want X to happen again?

Outputs are not necessary for the learning process. Change is necessary. Learning can't take place in a vacuum: if nothing is happening, then no learning can occur.

Thought experiments:
Take me and disconnect all of my inputs - I can still output. Like a darkroom, in a sense: hanging in pitch black, except with no feeling of hanging, no ache from screaming, nothing - no inputs, no physical feeling of any sort. Can I learn? Yes. I can solve puzzles in my mind.
Take me and disconnect all of my outputs - I can still input. Like being frozen: paralyzed, no movement whatsoever, not even my eyes, no communication with the outside world. But I can still see. Can I learn? Yes. I can solve puzzles in my mind, but I can also watch what is in front of me, or hear what is around me, and from this I can learn new things.

Outputs don't teach - only inputs do. Without inputs - outputs mean nothing.

Logic isn't built in, it is the memory of rules applied over and over and over again.

Tenet of logic (I think) - if all inputs to a situation are identical, then the output from that situation has to be identical. Where is that learned from? After multiple situations in life where all the inputs are the same and the output is the same, it is learned that this applies not only to said situation, but to other situations as well. This requires a very complex, multi-dimensional memory matching system. Other tenets of logic are surely built upon similar ideas.
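The memory matching idea above can be sketched very crudely. This is only a toy, and the representation of a "situation" as a tuple of inputs stored in a dictionary is my own assumption, not something the post specifies:

```python
# A minimal sketch of the "same inputs -> same output" memory idea.
# Situations are stored as (inputs -> outcome) pairs; when all inputs of a
# new situation match a remembered one, the remembered outcome is predicted.
# The tuple-keyed dict is an assumed representation, not from the post.

class SituationMemory:
    def __init__(self):
        self.memory = {}  # maps a tuple of inputs to the outcome seen

    def remember(self, inputs, outcome):
        self.memory[tuple(inputs)] = outcome

    def predict(self, inputs):
        # If every input is identical to a past situation, expect the
        # identical output; otherwise we have nothing to say.
        return self.memory.get(tuple(inputs))

m = SituationMemory()
m.remember(["drop", "glass", "floor"], "breaks")
print(m.predict(["drop", "glass", "floor"]))  # breaks
print(m.predict(["drop", "pillow", "floor"]))  # None
```

The real version would need fuzzy, partial matching across many dimensions, which is exactly the "very complex" part.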

What are thoughts? A person is completely still in a vacuum - he has no ability to input or output, yet he still has thoughts. What does that mean?

Most innate reactions are fastest - b/c higher level reactions a
 

Is anything truly random?

This question came up Seder night. I contend that nothing is random. When a deck of cards is thrown in the air, the place, position, and even the route by which each card lands is determined by air flow, force, weather, and a million other small things. We call things random because we don't understand what causes them. A random thought is only a thought whose origin we have no conscious understanding of. An example brought to prove that things can be random was Brownian motion: the motion of molecules which both keeps them in a fixed position (ie nothing falls apart because of their movement) and allows them to move in all sorts of directions with no apparent explanation as to why. Of course this is still not random. Each molecule moves in response to a myriad of conditions that affect it. We are still incapable of analyzing those effects and how they affect a molecule, but their existence shouldn't be doubted.

Wednesday, July 21, 2004

 

It's possible

AI is possible. It's also easy.

The complexity, if it actually exists, is only in simulating or building a program that wants. Want drives us all - the idea of like versus dislike. If I can tell a program to like something, then I can teach it to want it.

Write a program that has 2 keyboard inputs. One of them is called joy (let's say the button 'j'), and one of them is called pain (let's say the button 'p'). If I push joy, then the program is happy (I'll define that later), and if I push pain, then the program is sad (likewise). If I give the program the ability to input either of these keys (using any number of APIs), then I can let the program continuously be happy. This of course has one simple problem. AI is not just a program doing what makes it happy, but a program learning what makes it happy. How do you do that? Well, I can't necessarily say how from a program's point of view, but I like to try to look at a baby and figure out how a baby learns.
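The two-key setup might look something like this. Reading characters from a string stands in for real keyboard APIs, and the numeric "happiness" score is my own stand-in, since the post deliberately leaves "happy" undefined:

```python
# A sketch of the two-key program: 'j' (joy) raises a happiness score,
# 'p' (pain) lowers it. The numeric score is an assumed definition of
# "happy"; the post leaves that term open.

def run(keys):
    happiness = 0
    for key in keys:
        if key == 'j':    # joy button
            happiness += 1
        elif key == 'p':  # pain button
            happiness -= 1
    return happiness

print(run("jjpj"))  # 2
```

If the program could inject its own 'j' presses, it would trivially max out this score - which is exactly the "simple problem" noted above.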

A baby is born (we are assuming no 9 months in the womb here...), and it doesn't know anything. Let's assume that it is hungry. It would now be feeling pain, so it would do whatever it can to make that pain go away. It cries. The baby gets fed, and finds that it is happy. How does it learn that it is happy when being fed? The baby has now learned that to feel happy, it needs to do whatever it did this first time, ie to cry.

How does this compare to a computer program? A few things were left out above, one of which is: how does a baby know to cry? Interesting question... Well, let me propose that a baby, like an adult, has certain involuntary reactions to different causes. Let's say that a computer has the same. I will tell the program that when it is in pain, it should cry (ie print the letter 'c'). When I, the person writing the program, see the program print cry, I will soothe it by pushing joy. When it is no longer in pain, it stops printing cry, and I stop pushing joy.

What am I trying to do with this? I want to teach the program to crave joy. Perhaps this is too simple for AI, but what happens if we change the program's involuntary reaction to something different? Now when the program is in pain it prints either cry ('c'), sleep ('s'), or wait ('w'). Well, what will the program learn now? It will learn that I will only push joy when the program prints cry.

Add to this a simple proportion algorithm that sees what it printed, sees what the programmer's reaction was, and therefore learns to print cry all the time, thereby keeping itself from pain forever. If this can be achieved, then we have reached AI.
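One way to sketch that proportion algorithm: track, for each reaction, how often it was followed by joy, and drift toward the best success rate. The exploration rate and the simulated programmer (who only rewards crying) are my own assumptions for the sake of a runnable toy:

```python
import random

# A sketch of the "simple proportion" learner: the program has three
# involuntary reactions to pain ('c' cry, 's' sleep, 'w' wait), records how
# often each was followed by the programmer pushing joy, and favors the
# reaction with the best success proportion. The programmer is simulated
# as a function that rewards only crying (an assumption from the post's
# scenario, not a real person at a keyboard).

def programmer(reaction):
    return reaction == 'c'  # joy is pushed only when the program cries

def rate(rewards, attempts, r):
    return rewards[r] / attempts[r] if attempts[r] else 0.0

def learn(trials=200, seed=0):
    rng = random.Random(seed)
    rewards = {'c': 0, 's': 0, 'w': 0}
    attempts = {'c': 0, 's': 0, 'w': 0}
    for _ in range(trials):
        # Mostly exploit the best-known reaction, sometimes explore.
        if rng.random() < 0.2 or not any(attempts.values()):
            reaction = rng.choice('csw')
        else:
            reaction = max(attempts, key=lambda r: rate(rewards, attempts, r))
        attempts[reaction] += 1
        if programmer(reaction):
            rewards[reaction] += 1
    # The learned behavior: the reaction with the highest joy proportion.
    return max(attempts, key=lambda r: rate(rewards, attempts, r))

print(learn())  # c
```

After a couple hundred painful moments, the program settles on crying - it has learned its first way to chase joy.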

Of course the program would have to have a lot more involuntary reactions, allowing it to realize it has them and to learn to control them. It would also need many more 'feelings' other than joy and pain.

That's all for now.

Nachum Kanovsy
