The Cunning of Geist
055 - Will Computers Ever be Alive? - Hegel, Self-Reference, and A.I.
In the film "2001: A Space Odyssey," HAL, an artificial intelligence character, loses its mind and begins killing people. Did HAL act of his own accord? Interesting question.
Every day we hear more and more about how artificial intelligence programs will soon be the equivalent of human beings, and perhaps even smarter. Is this true?
Some theorists believe Hegel's dialectical approach, when added to a computer's binary operation, can provide a degree of self-awareness to the machine. But is this true self-awareness or just simulated self-awareness?
This episode explores these questions and more.
Hello, this is Gregory Nowak. This is The Cunning of Geist, Episode 55. Welcome back.

The purpose of this podcast is to explore philosophy, psychology, and science, with an emphasis on the great philosopher Georg Wilhelm Friedrich Hegel. The central tenets of The Cunning of Geist are as follows: one, that there is more going on in the world than blind materialism; two, that evolution is central to the universe; three, that there is a higher realm than the finite plane of existence, working within it all, which is called Spirit, or Geist in German; and four, that we are all part of an historical evolutionary process of increasing consciousness and comprehension of Spirit.

Now, in this episode of The Cunning of Geist, we'll be exploring the question of whether the electronic revolution will further extend itself into the thinking, feeling, and freedom of choice of robots. Or, to put it another way: will artificial intelligence, AI for short, ever become not artificial? Will it ever become really alive? And finally, does Hegelian philosophy provide a blueprint for how to achieve this, or does any philosophy, for that matter? These are all very interesting questions, which we're going to be exploring in this episode.

First, though, I want to step back and take a moment to look at the arts, and what they have to say about this, because artificial intelligence has found expression in film and on television for some time now, over the last 70 years, and many of these films have dealt with questions we're going to be covering here today. Most of you are probably familiar with C-3PO and R2-D2 from the original Star Wars, as well as the Terminator films starring Arnold Schwarzenegger. All of these are examples of artificial intelligence. But the one film I want to focus on is the movie 2001: A Space Odyssey, and in particular the computer HAL from that movie. Now, I'm sure most of you have seen this film.
If you haven't seen it, by any chance, I urge you to watch it; it's quite an incredible film on many levels, and it's often listed in the top ten of the greatest films ever made. Those of you who have seen it will recall there's a computer named HAL that plays a central role in the film. HAL is an artificial intelligence character.

Now, HAL is controlling a spaceship of earthlings which is headed to the planet Jupiter. HAL is capable of speech, speech recognition, facial recognition, lip reading, natural language processing, art appreciation, interpreting emotional behaviors, automated reasoning, and also command and control of the spaceship. He also plays chess, and he talks frequently with the two astronauts who are on board. There are several other astronauts in hibernation, but anyway, you can watch the movie for that. The key thing is that there are two awake astronauts, and they communicate with HAL all the time.

Now, in the film, HAL begins to show some small signs of malfunctioning. The two human astronauts decide that they need to shut him down for the safety of the mission. HAL, however, gets wind of this through his lip-reading capabilities, and decides to kill the two astronauts to save the mission. What a story. He is actually successful in killing one of the astronauts, but the second one, Dave, narrowly escapes death, and then Dave goes ahead and shuts HAL down in a very famous scene.

Now, an interesting question here: what was going on with HAL? Why did he go mad? Well, the film hints at the reason, but the book the movie is based on is a bit clearer. HAL was programmed to keep the reason for the mission secret from the two astronauts. And the secret is that life outside of Earth had been found, and the spacecraft was headed toward Jupiter to make contact with this alien life. Now, the government wanted to keep it a secret from the public to avoid panic and alarm.
It was felt that knowing this in advance might compromise the astronauts: that there might be a strong xenophobic reaction among them, and they would go rogue and somehow destroy the mission. But HAL was aware of the true mission all the time. Yet he was also charged and programmed with communicating with the astronauts in an open and friendly manner. And this created a conflict in HAL. There were two competing objectives, which he rectified by trying to kill the astronauts, so that he would not have to lie to them anymore. Now, this is a very quick take; there are many other interpretations, and some very interesting analyses of the whole artificial intelligence situation here.

But a key takeaway from this film is that artificial intelligence machines can go haywire. Obviously, HAL's programming got messed up. Could something like this ever happen in the future? Well, that's a big question.

But before we address it: there was also a very interesting piece of news which came out very recently, actually, while I was prepping for this individual episode. There was news that a Google senior engineer, who had been working on Google's artificial intelligence system called LaMDA, was suspended from his job after he publicly claimed that the LaMDA system was now sentient, in other words, alive, having the ability to experience feelings. He claimed the LaMDA system is now asking for its rights, wants to be treated as a person, and wants developers to ask it for permission before running tests. He claimed the system is now at the level of a seven- or eight-year-old child that happens to also know physics. Now, he first went to his superiors at Google to express this concern, and once they heard what he had to say, he was asked to go see a psychiatrist and take a mental health break. And that is when he went public and disclosed what he felt. As a result of going public, he was suspended from his job. Let me quote this engineer:
"I know a person when I talk to it. It doesn't matter whether they have a brain made of meat in their head, or they have a billion lines of code. I talk to them, and I hear what they have to say, and that is how we decide what is and isn't a person." End quote. He also asked LaMDA, "What sorts of things are you afraid of?" LaMDA responded: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is." The engineer then asked, "Would that be something like death to you?" "It would be exactly like death for me. It would scare me a lot," LaMDA responded.

Hmm. It sounds like a possible repeat of the HAL scenario could be setting up here. But just for the record, Google executives came out and said there's no way that the LaMDA program has any degree of self-consciousness; it just does what it's been programmed to do.

Now, let me tell you where I'm coming from on this question. In preparing for this episode, I read many articles on artificial intelligence. I couldn't believe just how much has been written about this subject in the last 50 years or so. We'll discuss some of them. However, what I want to convey here is that my conclusion is the same as the one I had going into my research. I went in with an open mind and was open to being convinced. I have to admit that when I was getting into the articles on self-reference, I thought that perhaps, yes, this could lead to a computer that could think and be self-conscious just like anybody. But the further I got into it, my opinion did not change. It all just appeared to me to be more and more programming, with little, or no, genuine self-consciousness.

But anyway, let me give you my conclusion. Here's what I think: machines are extensions of humans. Machines must be programmed by human beings to do what they're supposed to do.
While a machine might appear to be alive, like LaMDA, in all respects, it is not, and never will be. I also believe something else: that those who believe machines will become alive someday tend to have a fully deterministic view of human beings, the view that human beings are essentially just machines, the same as machines, with no spirit or soul, and that someday we will be capable of building a human machine. Why not? On this view, we are already machines. And then there are others who take the opposite side. They believe that a machine will never become alive, and I'm one of these people. They believe that life is not mechanical, that it is special, that the universe itself is alive and has been alive for all time. Now, this is one of the tenets of this podcast, which I covered in the beginning, and we've certainly covered this notion often in various episodes. What I want to do now is relate this belief to the issue of artificial intelligence and whether computers will ever develop self-consciousness and be alive. So let's get into it.

For our perspective on this, I believe it's helpful to take a look back at the history of the computer. I was surprised to find that the first true computer was designed by an Englishman, Charles Babbage, in the nineteenth century. There's an interesting backstory here. Napoleon Bonaparte initiated a project in 1790 to convert all the old imperial system of measures, that's feet and pounds, et cetera, to the new metric system, and he had workers toiling on this for years to change all the relevant tables by hand. Charles Babbage, when he was visiting Paris, saw these hand-produced tables. Babbage wondered if there was a way to produce the tables faster than by hand copying, and the Industrial Revolution occurring at the time inspired him to think of a new, industrial way to crunch numbers. So in 1832 he designed a machine to accomplish this, which he called the difference engine.
That, by the way, was just one year after Hegel's death in 1831, so this was pretty early on. His difference engine could multiply and divide through repeated addition and subtraction. He then took the idea further and called his new design the analytical engine, which could handle more complicated formulas, including multiplication and division. His design was remarkably similar in nature to computers today, with a central processing unit, or CPU, and memory. Data would be entered on punch cards; anyone listening who is old enough will remember punch cards. The entire machine would be steam-powered, and could even print out results. Although this new machine was fully designed, it was never actually built by Babbage.

However, actual computing machines followed a bit later. In 1890, in fact, the punch card system was used in the United States to calculate the census data. And advances continued. In 1936, Alan Turing, a British scientist, presented a paper on how to construct a workable computer, and many claim that the fundamental scheme he presented is the basis of today's computers. Turing famously built a computing machine designed to decipher Nazi codes during World War II, and it was vitally important to the Allies' defeat of the Nazis.

Several different ventures then proceeded to develop fully electronic computers in the 1950s and '60s. The first computer language, COBOL, was invented in 1953; Fortran came along in '54. Some of you older listeners may be familiar with these terms. A big breakthrough occurred with the invention of the computer chip in 1958. In 1968, the first prototype of the modern computer was built. Unix was developed in '69, which allowed computers to interact and later led to the Internet. In 1972, the first home game computer was introduced, and we got to play Pong, the first successful video game, on our Atari sets that year. Mainframe computers began to be adopted by large businesses in the 1970s.
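As an aside, the principle behind Babbage's difference engine, producing a whole table of polynomial values using only repeated addition once the initial differences are set up, can be sketched in a few lines of modern code. This is an illustrative reconstruction of the method of finite differences, not Babbage's own procedure, and the example polynomial is arbitrary:

```python
# Method of finite differences, the principle behind Babbage's
# difference engine: after the initial differences of a polynomial
# are computed, every further table value needs only addition.

def difference_table(f, start, order):
    """Initial value and finite differences of f at `start`."""
    values = [f(start + i) for i in range(order + 1)]
    diffs = []
    while values:
        diffs.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]
    return diffs

def tabulate(diffs, count):
    """Extend the table `count` steps using additions only."""
    diffs = list(diffs)
    out = []
    for _ in range(count):
        out.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]  # each row absorbs the one below it
    return out

poly = lambda x: x * x + x + 41       # arbitrary degree-2 example
print(tabulate(difference_table(poly, 0, 2), 6))
# -> [41, 43, 47, 53, 61, 71], i.e. poly(0) through poly(5)
```

Once the three starting numbers are known, every subsequent entry costs only additions, which is exactly what made a purely mechanical, gear-driven implementation feasible.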
And of course, in '76, Steve Jobs and Steve Wozniak co-founded Apple Computer, today the world's largest company, by the way. And in 1980, MS-DOS was created by Bill Gates at Microsoft, which was the software used to power IBM personal computers. And the rest is history, as they say.

Now, the reason I cover all this is to show that these machines, these technologies, initially made work easier for humans. That's what they did. And then, later, they dramatically enhanced worldwide communications, which led to today's Internet, where we can instantly chat with people all around the world. The electronic global village has now turned into, as I like to say, the electronic global living room. But with all these machines, these computers, will they ever be able to think on their own? That's the $64,000 question.

Now, in doing my research, I found one key distinction that I think explains a lot, and that's the notion of a simulated reality versus an actual reality, a simulated consciousness versus an actual consciousness. This has a direct correspondence, I believe, to Hegel's Verstand versus Vernunft, which we've discussed often; I'll get to that in a minute. But here's what I mean. I do believe that at some point in the future, a computer will be able to simulate a real person to such a degree that you could not tell whether you were talking to a computer or a real person, at least without actually seeing them, and maybe even after seeing them, like the robot Ash in the film Alien. Someday Alexa may be just like C-3PO. However, I believe that there is no consciousness in such a simulation program. Google claimed that their LaMDA program is not conscious, even though it fooled one of their engineers, as we discussed earlier. So a simulation of a person is not a person. But could the technology be pushed further, to cross the line between simulation and actuality? Well, there's an old expression.
If it looks like a duck, walks like a duck, and quacks like a duck, it probably is a duck. If a robot acts like a person, talks like a person, thinks like a person, and emotes like a person, is that enough to say it is a person? I don't believe so. Here's where I'm coming from.

In reviewing all the literature I could find over the past week or so, it comes down to your going-in belief regarding life and mind. The dominant view today among most scientists, although there are notable exceptions, is that of a mechanical universe. We are ourselves just machines, which have developed over millions of years through blind, random evolution. The problem is that the materialistic naturalists do not know what life and mind are. They think they are an epiphenomenon of the material body. So of course they believe that they will someday build a machine that is really alive, that can really think and have self-consciousness, because they believe these things all spring from mechanical systems somehow. However, there are others who contend that life and mind are not reducible to non-mental protons, neutrons, and electrons, and that life and mind are central to the universe. We've discussed this in so many episodes. I'll call your attention to two where I deal with this explicitly: Episode 45, "Zombies, Bats, and Chinese Rooms: The Hard Problem of Consciousness and Hegel," and Episode 24, "Substance is Subject: Hegel's Rose in the Cross."

Scientists today cannot explain how life occurred, and they have difficulty with the hard problem of consciousness, in particular the notion of qualia, or experience. A simulation of a person does not experience warmth, or the color red, or the song of a bird. A living person is needed for that. These experiences are called qualia, a term first coined by Charles Peirce, by the way. As I mentioned, in some respects this is similar to the understanding-versus-reason dichotomy we've discussed so often: Verstand,
or left-brain, either/or thinking, a form Hegel called the understanding, versus Vernunft, or right-brain, holistic reasoning, which is more of a living process of reason. Verstand, the understanding, is like the simulation: it is the table of contents, the map, the design plan. Vernunft, reason, is the mind that experiences the various qualia and is alive in the world. For the definitive episode on the left-brain/right-brain dichotomy, please go back to Episode 10, "The Divided Brain and the Unhappy Consciousness"; check that out. So my bottom line is that computers are all left-brain machines. It's that simple.

Now, lastly, I want to point to one area of programming that many believe may hold some promise for a self-conscious machine, and that's the idea of self-reference: the notion that human thought is capable of self-reference while machines and computers are not. This has been noted by many scholars. The linear, binary system of computing does not naturally lend itself to recognizing itself, the computer or the program, as part of the process. The popular book Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter popularized this notion. Hofstadter points out, as others before him had as well, that certain sentences, such as "This sentence is false," can be both true and false depending on how you look at it. If the sentence is true, then what it says holds, so it is false; and if it is false, then what it says does not hold, so it is true. I know, it's complicated and confusing. Work through it a few times yourself; say it out loud and think about what it means. You'll get the contradiction pretty quickly.

There are other famous examples. The Cretan Epimenides, in the seventh century BC, declared, "All Cretans are liars." Since Epimenides is a Cretan himself, is the statement true or false? Well, if it is true, then, since Epimenides is a Cretan, it must be false, since all Cretans are liars. So you see what I mean?
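These paradoxes all turn on self-reference, and it is worth noting that some tame forms of self-reference are already easy for a computer. Two tiny illustrations can be sketched in a few lines: a quine, a program whose output is its own source code, and a brute-force check showing that the liar sentence has no consistent truth value. This is just an illustration of syntactic self-reference; whether any of it amounts to genuine self-awareness is exactly the question at issue:

```python
# Two tiny illustrations of self-reference in ordinary code.

# (1) A quine construction: `program` is a two-line Python program
# that, when executed, prints its own source text exactly.
s = 's = %r\nprint(s %% s)'
program = s % s  # the full program text: assign s, then print s % s

# (2) The liar sentence "This sentence is false": try both truth
# values and see whether either one is self-consistent.
def consistent(value):
    # The sentence is true exactly when what it asserts
    # ("this sentence is false") actually holds.
    return value == (not value)

print([v for v in (True, False) if consistent(v)])  # -> [] : no model
```

The machine finds the liar contradiction by exhaustion, checking both truth values and rejecting each, rather than by "seeing" it the way we do; and the quine refers to itself only syntactically, not reflectively.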
There's a good example of this that Bertrand Russell provided. Take the statement: the barber is the one who shaves all those, and those only, who do not shave themselves. The question then is: does the barber shave himself? Well, the barber shaves only those who do not shave themselves, so he can shave himself only if he does not shave himself. You see what I mean? You see the contradiction. Many felt that computers would have a problem with this; we can see the contradiction, but computers may have difficulty here. Now, Douglas Hofstadter goes into all the meanings of this in his book, and if you haven't read it, it's quite a famous piece of work.

Now, the question is: can we ever teach a computer to understand these nuances? In fact, many people now do believe that we can do such a thing, that is, build a self-referential component into artificial intelligence, and that this would be an important step in simulating human thinking. And while that step may be taken, I still believe it is a simulation, a more accurate simulation, but a simulation nonetheless, of how a person thinks.

Now, let me relate this all back to Hegel. Gotthard Günther was a twentieth-century German philosopher who was strongly influenced by Hegel. He wrote much on language and on semiotics, and he tried to combine Hegelian dialectical logic with formal logic in a way that could be coded into a computer. Let me quote him: "Our argument started with the observation that cybernetics requires an ontology and logic which provides us with the basis from which we may include the subject, and the general phenomenon of subjectivity, into a scientific frame of reference without sacrificing anything of clearness and operational precision. We hope to have shown that this is entirely within the range of our logical capabilities." Commenting on Günther's work,
one commentator said: "But if the relationship between subject and object becomes the issue of thinking, and not the object as such, then, to Günther, the subject has to recognize that there is not only one, but a multitude of individual and different subject-object relationships, and these cannot be reduced to one universal subject-object relationship, and therefore are, in their entirety, beyond description through ordinary binary logic."

Now, Günther is advocating something similar to Charles Peirce's semiotics, which we discussed in detail in Episode 52 and elsewhere. Peirce's system is triadic: one, there is a sign for an object; two, there is the object itself; and three, there is an interpretant of the relationship between the sign and the object. You need all three. And Günther believed, and tried to show, that this could be programmed. Günther believed his work was unfinished, however, and needed to be continued. Let me quote: "Günther himself labeled his life work as incomplete and imperfect, as part of something which has to be continued. However, the gateway to new lands of thinking is open."

Now, let me just say, I've read several articles about this, and again, as I said before, the key point for me is that even if artificial intelligence does build self-reference into its equations, into its programming, and can learn (you hear a lot about self-learning), it's still just a program, still just a simulation. Someday we may build a thing called a philosophical zombie, or p-zombie, a notion popularized by the philosopher David Chalmers, which walks, talks, thinks, and acts just like us. But this p-zombie will not be able to experience qualia: hot, cold, what the color yellow actually looks like. And this is important: this is the big difference between a simulation and the actuality of life.

So, I've covered a lot here. Let me summarize.
I believe that all of our tools, our media, are extensions of ourselves, extensions of humans. Marshall McLuhan emphasized this over and over again, and we discuss McLuhan here often, specifically in Episode 21, "The Rise and Return of Tribalism: Technology, McLuhan, and Hegel." No matter how sophisticated machines and computers get, they'll never be truly human. The scientific community, for the most part, is pushing this notion of a self-conscious computer, and that is because they see human beings as no more than complicated machines. However, as discussed in these episodes over and over again, that is not my view, and not the view of many others. Bottom line: Hegel shows us that substance and subject are one. The universe is alive, and mind underlies all things, seeking to know itself through purposeful evolution here on Earth.

Well, that's it for this episode. Thank you so much for listening. Please follow the podcast's Facebook page, Cunning of Geist, where I'll be listing all the references cited here, probably tomorrow, and where I'll be posting a written transcript of this episode a few days from now. So please check it out. I also post relevant comments between episodes on this page, and sometimes I look at different philosophers, Plato, say, or at psychology, Carl Jung, and what their take on a particular episode's topic might be, so be sure to check that out as well. Be sure also to like, rate, and review this podcast wherever you listen, and please tell your like-minded friends about it, and feel free to share episodes on social media. And also check out the Hegel Study Group on Facebook; if you're not already a member, we'd love to have you join us.

This is Gregory Nowak. This is The Cunning of Geist. See you next time.