Monday, November 15, 2010

To think, or not to think; that is the Turing Test

Well, after spending my Sunday trying to read all the Wikipedia articles (and spending a miserable amount of time doing so), I've decided I'd like to talk about the chatbots. =P Perhaps it was because I already knew that they weren't people that I was able to recognize that they were 'bots', but something about them - I'm thinking the way they spoke - tipped me off to their robotic existence. I was going to say that they didn't seem very human, but upon further reflection, and a second go at chatting with them, I take it back. They seemed to respond in a way people would, which is uncanny, but more on that later.


We will begin with the first chatbot, the famous ELIZA, which "is based on a 'script' consisting of patterns and corresponding responses." This explains why it responds the way it does, sometimes seemingly unrelated to the previous statement: if it recognizes a keyword, it searches through its list of appropriate responses and picks one. One quirk is that if you enter the same statement over and over, you will eventually begin seeing the same pattern of responses from it, its stores exhausted. I thought it was rather amusing. Of course, ELIZA is also fashioned after a Rogerian psychologist, which may account for the repetitious replies of support - it makes the parroting of my words almost acceptable. I can agree that ELIZA does that well. But for the same reason, after a few minutes of chatting, I was bored and, you could say, annoyed with ELIZA; nothing personal, but I just grew tired of being asked question after question and pressed for elaboration after elaboration. So then I moved on to the next chatbot.
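

To make that mechanism concrete, here's a minimal sketch of the kind of keyword-and-script matching ELIZA does. The patterns and canned responses below are my own illustrative guesses, not the actual DOCTOR script (the real one used ranked keywords and fancier reassembly rules), but the behavior is the same:

    import random
    import re

    # Toy ELIZA-style script: each pattern maps to a short list of canned
    # replies. Patterns and replies here are made up for illustration.
    SCRIPT = [
        (r"i need (.*)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
        (r"i am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"(.*) mother(.*)", ["Tell me more about your mother."]),
    ]
    DEFAULTS = ["Please go on.", "Can you elaborate on that?"]

    def respond(statement):
        for pattern, responses in SCRIPT:
            match = re.match(pattern, statement.lower())
            if match:
                # A keyword matched: pick one of its stored replies. Repeat
                # the same input and you soon start seeing the same replies.
                return random.choice(responses).format(*match.groups())
        # No keyword recognized: fall back on a generic Rogerian prompt.
        return random.choice(DEFAULTS)

    print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"

Each pattern only has a handful of replies, which is exactly why hammering ELIZA with the same statement exhausts its stores and the repetition shows through.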


ALICE was different from the get-go, with its own little avatar on the left!! In addition to that, ALICE was also different from ELIZA because it asked questions to learn about me, but not in the same way that ELIZA did. ELIZA was trying to figure me out, more or less, like a psychologist, whereas ALICE was simply making conversation. ALICE would ask some weird questions about "Om" though, and it sometimes shouted things like "DIALOG HISTORY". ALICE felt more human because it would try to talk about things that it wanted to, instead of the one-way conversation I had with ELIZA. What made it less human was when it would blatantly say it was an artificial intelligence - that, and still the way it talked. I could see how the 'intelligence' grew from one chatbot to the next. After talking with ALICE for a bit, I almost missed ELIZA.


Finally, there was Jabberwacky, which was the wackiest of them all. Certainly, Jabberwacky seemed the most human, not only with its inclusion of spelling mistakes and casual way of talking, but also with the tone it expressed, an ounce of what could be feeling poured into each of its responses. But could this be actual intelligence? Or was the tone simply implicit in the replies it had been given? I believe that Jabberwacky's set of responses was just programmed much better than ALICE's or ELIZA's. Perhaps the way it selects its responses is also more advanced. Of the three, Jabberwacky was the most realistic, and it might have passed the Turing Test for me if I hadn't already known that it wasn't a person.


However, from what I've seen, I don't think these artificial intelligences are truly thinking yet. Certainly, they need to select which of their stored responses to spit out at the user, but isn't that more like an algorithm? Thinking involves much more, in my book; it has to do with learning, but more importantly, understanding. As with the Chinese Room: just because the person inside can relay perfect Chinese doesn't mean that he (or she) could necessarily hold a conversation outside the room. He hasn't learned the language; he merely copies something and writes it down. He doesn't know how to speak it.


When a program has a specific goal in mind, can learn, and can adapt to different obstacles in its path to attain that goal, that's when I think programs will be thinking. I think the chess-playing computers are closer to thinking than the chatbots, or at least the three we were given (I've heard about a chatbot called Cleverbot that is supposed to be pretty... clever). True, the chess computer has a list of moves that it can execute, but as we learned in class, the list for a whole game, with every possible move taken into account, is much too huge for it to search efficiently. And so it looks ahead, we could say, five turns. From there it keeps revising its list of moves in response to each one the human player makes, until it wins or the human wins. I say this is much closer to thinking because it requires a sort of learning; no two games of chess will end up being the same, but there are trends that the computer can recognize and therefore remember for future reference. Just like 20 Questions. That step, where we are now, is the one right before what we are striving for: a fully independent, fully functional, fully thinking program.
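

For the curious, the standard name for that fixed-depth lookahead is minimax search. Here's a bare-bones sketch of it, demonstrated on the much smaller game of Nim (take 1-3 sticks; whoever takes the last stick wins) rather than chess - the Nim setup is purely my stand-in, since a real chess engine just swaps in a board, real move generation, and a heuristic evaluation:

    # Fixed-depth minimax on Nim: look ahead a few turns, score what you can
    # see, and assume the opponent always picks the move that's worst for you.

    def legal_moves(pile):
        return [m for m in (1, 2, 3) if m <= pile]

    def minimax(pile, depth, maximizing):
        if pile == 0:
            # The previous player took the last stick and won, so the
            # player now to move has lost.
            return -1 if maximizing else 1
        if depth == 0:
            return 0  # out of lookahead: call the position even
        scores = [minimax(pile - m, depth - 1, not maximizing)
                  for m in legal_moves(pile)]
        return max(scores) if maximizing else min(scores)

    def best_move(pile, depth=5):
        # Looking ahead, say, five turns, like the chess computers we discussed.
        return max(legal_moves(pile),
                   key=lambda m: minimax(pile - m, depth - 1, False))

    print(best_move(10))  # prints 2: taking 2 leaves the opponent a losing pile of 8

The depth cutoff is the whole point: the full game tree is too huge, so the program scores whatever horizon it can see and re-runs the search after every move the human makes.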


Talking about this made me think of the Pixar film, The Incredibles - specifically, the Omnidroid robot the family fights at the end, constantly learning and adapting.




Until we can build an A.I. that can do that, I don't believe what we're making is thinking. But then you have to wonder, would we have a robot revolution on our hands?

Wednesday, September 22, 2010

More about Moore than I ever knew before

In 1965, a certain Gordon Moore wrote a certain paper that proposed that the number of transistors that could fit on an integrated circuit would double nearly every two years. This observation is what has come to be known as Moore's Law.

What does all that mumbo-jumbo mean, you may ask? Well, before getting to the root of the matter, I think the term "transistor" should be defined, mostly for my own benefit. Bob Brown of Southern Polytechnic State University explains what a transistor is in his web lecture:

     "A transistor is an electronic device with three elements called the base,
     the collector, and the emitter. Transistors can function as analog devices
     and are used in that way in radios, amplifiers, and similar gear. By
     choosing suitable transistors and providing suitable inputs, they can also
     function as digital devices -- switches that are either on or off."

Simply put, as Brown goes on to say, transistors are what allow computers to "operate as fast as they do." With this in mind, we can easily see the magnitude of Moore's Law: the exponential increase in transistors means that computers will keep becoming more advanced and the gratification we get from them nearly instantaneous.
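

To see just how steep that exponential is, here's a quick back-of-the-envelope sketch. The starting point is the Intel 4004's widely cited 2,300 transistors in 1971; the perfectly clean doubling every two years is the idealized law, not measured chip data:

    # Idealized Moore's Law: transistor counts double every two years.
    def transistors(year, base_year=1971, base_count=2300):
        return base_count * 2 ** ((year - base_year) / 2)

    for year in (1971, 1981, 1991, 2001, 2011):
        print(year, round(transistors(year)))
    # 1971 -> 2,300; 1981 -> ~74,000; 1991 -> ~2.4 million;
    # 2001 -> ~75 million; 2011 -> ~2.4 billion

Forty years of doubling turns a few thousand transistors into a couple of billion, which is in the same ballpark as where real chips actually landed.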

One point of interest brought up by many regarding Moore's Law is its perpetuity. An article from Absolute Astronomy (strikingly similar to the one on Wikipedia) states that in his paper, Moore observed that the trend he would popularize began in 1958 with the invention of the integrated circuit and had held true up to the time he wrote it. Why shouldn't he predict that it would continue for years to come, and gain world acclaim for it?

Though I am quite ignorant on the subject, I believe Moore's Law will persist for the next couple of decades (but not if the world ends in 2012), the reason being that there is no reason for it to stop. An article published by Gregory Johansson addresses the issue engineers face now, which is running out of room on a chip they cannot keep shrinking for all the transistors they keep doubling. It's like throwing a party when your parents aren't home and each friend brings two more; eventually, you won't be able to accommodate all of them. This is also one of the ultimate limits of Moore's Law.

Fortunately, these engineers have also come up with a solution: memristors. Johansson defines this joint project of HP and Hynix as "memory resistors, which means that they can store information even in the absence of any source of power." He explains that this is great news "because they open up hope of exceeding the current chip capacity limits thought to [be] inviolable." Though memristors are not fully developed, knowing that a design to work around one of the foreseen problems of Moore's Law has been thought up, and is actively being worked on, shows just how invested and dedicated people are in this trend, and in the technological world as well. In this way, Moore's Law will hold in the future.

The main limit that butts heads with Moore's Law has been discussed - my freshly coined "house-party" phenomenon - so we can move on to the futurists.

Some futurists view Moore's Law in a positive light, believing that it will continue to prosper (directly or indirectly) in the coming years. According to Absolute Astronomy, futurist Ray Kurzweil understands that some new technology will take the place of our current "tech", but believes "that the exponential growth of Moore's law will continue beyond the use of integrated circuits into technologies that will lead to the technological singularity." This graph by Kurzweil illustrates the trend Moore had noted and predicted, along with a few additions (e.g. vacuum tubes and electromechanical computers). Not really relevant; I just thought a little variety might help. But reading Kurzweil's thoughts really made me look brightly at the future of technology. If Moore's Law still applies later in my life, I can only begin to fathom how awesome everything else will be.

Of course, there are others who are not entirely on the bandwagon, predicting the worst for Moore's Law without change. One such critic is Penn State's Suman Datta. This article from the Daily Galaxy shares with its readers Datta's belief that if Moore's Law is to survive, "some new technology will have to take over from silicon," whether that be carbon nanotubes or superconductors. Datta's words are not necessarily an attack on Moore's Law, but more a cautionary wag of the finger: if nothing changes now, the trend will cease to hold in the future.

And then there are those who are against it entirely, such as Martin Ford, who basically warns of a passive robot revolution - passive in that menial work will be taken over by machines, people will become lazy and fat, and the economy will take a nose-dive. Though it sounds kind of cool, I could also see this future, which is brought up in the Absolute Astronomy article.

Even so, I am sure that Moore's Law will continue to hold true, and I am sure our generation is with me on that one. All about iPhones and touch-screens - practically hand-held computers in their own right - the teenage population of this era is, and always will be, ready to whip something out in the moment and get what it wants fast.

Tuesday, September 14, 2010

Well, this is different...

This is my first time writing a blog. Hooray!!

As it turns out, I picked Blogger out of the three blogging platforms I could have chosen because it was the simplest.

To me, at least.

WordPress's 10-step walk-through guide was, suffice it to say, rather intimidating. Though virtually all of the information in it was useful (for me, without internet access on my cellular phone, mobile blogging has no meaning), the sheer immensity of it, handed to me right from the get-go, forced me to shy away. One thing I really liked about WordPress was the number of options that could be customized and the degree to which they could be customized. The attention to detail and step-by-step instruction would definitely have made it a good choice to start off with, however; I think I'll use its guide to help me edit this blog. Really, the reason I didn't pick WordPress is that it was too professional for my taste; I'm just making a blog for my class. But if there ever comes a time when I do need a blog for the workplace, WordPress is the place I'll go. Especially because of its attempts at humor, even though they are surrounded by commands.

On the other hand, LiveJournal barely gave me anything to go on before the creation of an account. One thing I did like about it was the 'Surprise me!' button on the homepage; clicking it would send me to a random journal, akin to StumbleUpon in that respect. On one page I was led to, I watched a video of a penalty kick, initially saved by the keeper, that then rolled itself into the goal. What else is cool about LiveJournal is the fact that it is global; of the eight or so random journals I visited, I can recall at least four written in characters I couldn't interpret (Russian?). The main reason I decided against LiveJournal, though, other than the lack of a tour before sign-up, was that it reminded me of the AOL page that always opened when I signed onto AIM. I really didn't care for it.

As terrible as it sounds, Blogger reminds me of Facebook: by itself, a very basic thing that's easy enough to use. However, unlike Facebook, or at least in my experience with Blogger so far, it does not have bothersome advertisements on every side of the page. One more bonus of LiveJournal was that it had popular stories and such that you could look up; Blogger has no such thing - rather, it is just a blog. And for all intents and purposes, that's all I need right now. It may not have all the bells and whistles of LiveJournal, but Blogger gets the job done. Nothing too crazy to worry about here, other than the annoying, constant automatic saving of this post.

----------------------------------------------------------------------------

Now then, on to the second topic, eh?

I'm supposed to be talking about "some topic related to the phenomenon of blogging", though all the good ones were already covered by Wikipedia. So I'll settle for one I can talk about, which happens to be the negative social aspects of having a blog.

I certainly can't speak on the subject from personal, first-hand experience, but after attending my FYCARE workshop, I feel more sensitive to the issue of privacy intrusion. With things like blogs, Facebook, Twitter, etc., out there on the web, accessible to nearly anyone, posting snippets of one's personal life can be dangerous. Without thinking, people (especially younger users) may post personal information such as a street address or a phone number that others of ill will may use for their own aims. Oftentimes it is the group of users aged 13 to 19 (a guesstimate) who find themselves in the most trouble as victims of the not-so-nice people of the world.

And it is not only older users they may be preyed upon by; they may also be "turned in" by members of their own peer group. I heard stories in high school of student athletes who were ejected from the team after pictures of them drinking underage at a party were released by their enemies. Such instances only motivate others to give up their virtual lives; if only the networking and communication aspects weren't so powerful.

The reason I have a Facebook is so I can keep in touch with my friends. It's great for doing so - the ability to leave something on a friend's wall to have a conversation, to upload pictures to share experiences, to write notes to share thoughts - it all seems so great. Until the threat of some malignant force turns from threat into action. Basically, don't be stupid. Say what you want to whomever you want on the internet, but look before you leap. Consider who else you may be making yourself available to and what about yourself you are practically handing them. In that last second before you hit the 'Submit' button, take a moment to think to yourself, "Will this come back to bite me in the butt?"