Thieving Thursdays: I, Human

>> Thursday, July 30, 2009


So, I’m going to cheat and do what I do when I like the topic I’m on and want to say more on it: I’m Thieving Thursday-ing my own comment from my own blog. But I read a letter to the editor in the NYT today (on a different article than the one I cited) that triggered a whole other area I’d intended to cover in yesterday’s post and somehow forgot. The letter to the editor, I might add, somewhat proves my point even though it was aimed at an article that had effectively nothing to do with the one I cited.

In the letter, the writer scoffs at the notion of computers or artificial intelligence displacing humans any time soon and uses, as a compelling example, spam and popup blocking software: no matter how sophisticated that software becomes, it doesn’t block everything, while most people can tell at a glance when something is spam or an advertisement.

And that touched on the part I’d forgotten to talk about yesterday: sensory input. Computers, even mega-important computers that run robotic facilities or spaceships, have limited inputs for several reasons. One is that each input can add appreciably to the software burden. If inputs are just cause and effect (temperature triggers a heater, humidity triggers a condensing unit, etc.), new inputs are handled simply. However, if combinations of factors trigger different events to different degrees (like temperature, humidity, and air quality being addressed by a complex system that drives not just air handlers but a variety of different equipment), the number of factors increases software complexity geometrically, even factorially. If I add sensors from multiple sources, say an inertial measurement unit value versus a star tracker value for navigation, I need still more complexity: if/then algorithms that weigh the differences to ensure I’m taking the best data, with the end result dependent on how well the developer was able to envision all possible permutations and address them in the software. Another reason is that sensors are expensive and require a great deal of maintenance. If one fails, I want to detect it so I don’t make the wrong decision on inaccurate data, so I might want more than one; but then I have to do comparisons/voting (like the IMU/star tracker question). And my software continues to grow.
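The jump in complexity described above can be sketched in a few lines. This is a minimal illustration, not flight software; every sensor name and threshold here is hypothetical. The first function is the simple cause-and-effect case; the second shows the classic trick for handling a failed redundant sensor (mid-value selection among three readings), which is one simple form of the comparison/voting problem.

```python
def simple_control(temperature_c):
    """One input, one effect: adding this kind of input is cheap."""
    return "heater_on" if temperature_c < 18.0 else "heater_off"

def mid_value_select(readings):
    """Triple-redundancy voting: with three sensors, taking the median
    masks a single wildly-wrong sensor without needing to know which
    one failed. More sensors or cross-checks mean more code like this."""
    return sorted(readings)[1]

# A stuck sensor reporting 999.0 is outvoted by the two healthy ones.
best = mid_value_select([21.2, 21.4, 999.0])
```

Note that even this toy voter only handles the failure its author anticipated (one bad sensor out of three); two simultaneous failures would defeat it, which is exactly the permutation-anticipation problem described above.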

People don’t have that, someone might say. We just have two eyes, two ears, etc. Au contraire, mon ami. Our eyes don’t just provide “an” input; they provide a wealth of information: distance, color, shape, features, edges. Our brains don’t just record that data (like a digital camera) but analyze every frame, identify features and then figure out what those features mean, evaluate color and identify what it means in that context. Green plant leaves have a different significance than something green in a Tupperware container, and our brains realize that.

When I worked in robotics, I was amazed at the amount of software required to do the veriest minimum a human brain can do: depth perception and edge detection. The goal was to identify a white EVA tool against a black background (space) and determine how far away it was. Multiple cameras and millions of lines of code were used just to do that, yet a baby or someone severely mentally handicapped can do it readily. Brains also decipher the smells we smell, sounds we hear, things we touch, things we see, and put them all together into a single coherent picture where each aspect, either consciously or unconsciously, factors into the final conclusion(s). It’s incredible, and we don’t have any computers or systems that filter as much data as we do or analyze it as thoroughly as we do. Note, there are systems that detect tons of data, like some of our orbiting satellites, but they don’t understand it; they just shunt it to us on the ground to make sense of, often taking dozens of people days to fully unravel.
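To give a feel for how primitive the software building blocks are, here is a toy version of edge detection, the low-level step real machine-vision pipelines (like the white-tool-against-black-space problem above) are built on. The “image” and the method are purely illustrative; a real system would use far more sophisticated operators over real camera frames.

```python
# A tiny grayscale "image": a bright block (255) against a dark background (0),
# loosely analogous to a white tool against black space.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]

def horizontal_edges(img):
    """Difference between neighboring pixels in each row.
    Large values mark a brightness boundary, i.e., an edge."""
    return [
        [abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
        for row in img
    ]

edges = horizontal_edges(image)
# The dark/bright boundary between columns 1 and 2 shows up as 255;
# everywhere else the difference is 0.
```

This finds one kind of edge in one perfect, noise-free image. Doing it reliably from noisy cameras, in varying lighting, and then deciding what the edges *mean* is where the millions of lines of code come from.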

There is nothing we have that is readily comparable.

For a computer-controlled system to work, it must gather data, determine the appropriate response based on programmed algorithms, and implement that response. If any of those pieces is broken (corrupt or missing data, incomplete algorithms, faulty implementation), it won’t work and will either stop or do the wrong thing, depending on the default. It cannot react to an unforeseen circumstance except by falling back to a default or by critical software failure (stopping, errors, endless logic loops, etc.). It has no judgment available for a “best guess,” no capacity to determine anything on its own. (Yes, I know there are “learning systems” using limited inputs, but they are limited to learning from a specific lesson; extrapolating that lesson to pertinent but dissimilar situations is not yet possible, to the best of my knowledge.) People will almost automatically (if not automatically) try something else if the first attempt works poorly, and can readily evaluate conflicting data signals.
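The gather/decide/act cycle and its brittleness can be sketched like this. Again a hypothetical illustration (the sensor, threshold, and action names are invented): note that when the data is missing, the only options are the defaults the developer wrote in advance.

```python
def decide(humidity_percent):
    """Decide step of a gather -> decide -> act cycle.
    Handles only the cases its author foresaw."""
    # Corrupt or missing data: the program cannot "guess" what to do;
    # it can only fall back to whatever default was programmed in.
    if humidity_percent is None:
        return "safe_default_off"
    # The foreseen cases, hard-coded in advance.
    if humidity_percent > 60.0:
        return "condenser_on"
    return "condenser_off"
```

An unforeseen situation, say, a sensor that reads a plausible-but-wrong value rather than failing outright, sails straight through this logic, and the system confidently does the wrong thing.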

Which isn’t to say that robotic or automatic systems have no place. Their sensors are far less subjective (e.g., a thermostat), and they always act logically, that is, with the logic they were first programmed with. People, let’s face it, don’t. If you’re doing a repetitive task with identical conditions and materials each time, robots are the way to go to get consistency and exactness a human can’t master. In some cases, because their systems can be streamlined with key data instead of a constant wealth of data to sift, they can react faster and more exactly than a human. They’re expendable and dependable.

Actually, brains do this, too, training bodies to do complex tasks through repetition or reaction training (as with pilots). The number of joints and degrees of freedom in a human body is astronomical, beyond any computer I’m aware of. And we’re not just doing a programmed task (place this cup on this table the same way every time for all eternity) but picking up cups at various distances and putting them on various surfaces, cleaning a window with enough pressure to get the streaks off (and telling how streaky it’s left) but not so hard as to break it. People really are incredible.

I also forgot to mention something important when I wrote this yesterday. On the whole, I think any human is smarter than any computer, and I know human beings take in gobs more sensory data than most computers could handle or do handle. Ever. But when it comes to using that data and reaching a conclusion, let me be clear that I don’t make the assumption that all humans are created equal. On second thought, that might be worth expanding on in my next post.

4 comments:

  • flit
     

    when I used to teach intro to computers - mostly to people who were totally intimidated by them, I was constantly repeating that computers are dumb; all they know is 1 and 0.

    took a while to get through, but eventually they got it

  • The Mother
     

    I think the final solution will be a singularity state, where we add computational computers to the human brain. Imagine what we will be able to do then.

  • JJones
     

    Otty Sanchez was a Jehovah's Witness -- as was her own mother and other relatives. Yes, these are the same nuts who make a drama out of allowing themselves and their children to die rather than accept a blood transfusion because the WatchTower Society claims that accepting a blood transfusion is the same thing as eating blood.

    Why has the media deleted this irony from all reports?

    Here is the first of 10 webpages devoted to murders and other crimes committed by JWs:

    jwdivorces.bravehost.com/familicide.html

  • Stephanie B
     

    I think you left your comment on the wrong post.

    But not giving your children the blood they need when it could save their lives - that's whacked.
