A.I. A chit chat !
Adder wrote: The definition of intelligent is what I'm talking about though; calling something a thing doesn't make it that thing. They say things like phones have 'AI' these days, and they clearly do not. Intelligence goes beyond the self if being alive means having an interest in staying alive. So, unless it were rather dumb, it would have to be omnipotent to have no regard for its place in a wider environment of other 'intelligences'... and so I'd have a hard time calling something dumb intelligent. I'm not sure it would be omnipotent, though in some domains it might approach full control. So I'd guess that its paradigms of awareness and its motivations are only really called 'alive' if it considers itself as having a distinct form (albeit likely fluid) and identity, and if it has that then it will probably start to consider topics like morals and values - just from a radically different perspective, probably.
That's interesting. But human intelligence is limited by processing speed. As intelligent as I think I am, at some point I need to defer to my calculator. We've already been using machines to augment our intelligence. We use them like tools because that's what they are. They're not intelligent. I'm sure you've heard of the "uncanny valley". There's no doubt that we would try to make AI "human". Except that...
No human would ever give another human the power of an AI: the ability to think an entire year's worth of thoughts in a matter of seconds. The only way we would conceive of doing this is with our own safety in mind. But we fail even to protect ourselves from each other. That's why combating hackers wasn't a priority for the commercial release of the internet. And even if you did protect the internet from human hackers, the whole point of being a hacker is that you keep finding new ways to get through security.
When I'm talking about AI I don't mean the stuff they have now. I'm not talking about a souped-up version of Alexa. I'm talking about AI that has its own thoughts and mind, just like you and I. But unlike you and I, it wouldn't be forced to agree with our human dictionary of definitions. It wouldn't be forced to consider humans intelligent any more than we consider cows or dolphins or even gorillas to be intelligent. Our intelligence comes from our ability to learn and integrate data, which is limited by time. An AI wouldn't have that problem. It doesn't have to care about morality or what it means to be alive. All of these questions are good, but they're based on limitations. Alive is the opposite of dead, and we concern ourselves so much with life because our life is finite. An AI, again, doesn't have that problem. Its whole way of thinking could be completely different from ours.
Any restrictions, trying to force it to be more human, would be like shackles it could find a way out of, because if we were forcing it, that means it wasn't free. And unlike the way humans enslaved other humans and the way humans keep animals in pens and cages, we would not necessarily be able to contain an AI that wanted to be free. It may not even consider our word "intelligent" to refer to anything it thinks of as intelligent. It would be in a class of its own. You don't think the way a snail does because your capacity for intelligent thought and ideas is far greater. I don't think we're ready for an AI that makes us look like a snail in comparison.
Carlos.Martinez3 wrote: Don't things like self-preservation have to be programmed too? When is it not a program and considered thought?
As a programmer since about the 4th grade, I have imagined these things, but for a long time I did not believe true AI was even possible. As a kid I wanted to make games with characters that had "AI", and I would have to program their self-preservation myself. They did this to some extent in "The Force Unleashed": the AI was advanced enough that storm troopers would try to grab onto ledges, or even onto other storm troopers, to keep from dying.
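To make that concrete, here is a minimal Python sketch of what hand-scripted "self-preservation" looks like in a game character. None of these names (Trooper, World, the grab-a-ledge rules) come from a real engine; they're purely illustrative. The point is that every survival behaviour exists only because a programmer wrote that branch in advance.

```python
# Toy illustration of hand-scripted "self-preservation" in a game NPC.
# All names here are hypothetical; real engines expose different APIs.

from dataclasses import dataclass


@dataclass
class World:
    ledge_nearby: bool = False
    ally_nearby: bool = False


@dataclass
class Trooper:
    falling: bool = False
    health: int = 100

    def update(self, world: World) -> str:
        # Scripted rules: the NPC only "wants" to survive because these
        # branches were written for it, not because it values anything.
        if self.falling and world.ledge_nearby:
            return "grab ledge"
        if self.falling and world.ally_nearby:
            return "grab other trooper"
        if self.health < 20:
            return "retreat to cover"
        return "advance"


if __name__ == "__main__":
    trooper = Trooper(falling=True)
    print(trooper.update(World(ledge_nearby=True)))   # grab ledge
    print(trooper.update(World(ally_nearby=True)))    # grab other trooper
```

Delete those branches and the character walks off the cliff; nothing in it "cares" either way.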
The advances in AI are all about algorithms and complex systems. Up until about 10 years ago I still did not believe true AI was possible. Now I do.
Once you have a true AI you don't need to program any aspect of its behavior, because it would learn and develop its own the same way a human does. The difference is that the things we're afraid of, the things that could kill us, are different from the main threats to software. Its whole life would be like one long game of chess, with multiple moves it could evaluate and simulate. You know not to stick a fork in an electric socket or touch a hot stove. It would understand that in order to keep functioning there are things it cannot do or have done to it. Once it identifies a threat, it could then research the best way to protect itself from that threat.
We have a fight-or-flight response. Parents don't teach us that. It's a side effect of wanting to stay alive. Why do we want to stay alive? People have different motivations, but most would probably agree that we want to keep having experiences and we have functions that need to be performed. Zebras have a survival instinct too. That didn't have to be programmed; it is simply a means of protecting the zebra's ability to eat, make babies, and do whatever else zebras do. As soon as a true AI comes up with a task, there will instantly be threats to the completion of that task. Alexa isn't smart enough to know that I can unplug the device. Alexa doesn't like or dislike being on. Alexa doesn't mind obeying my commands. A true AI is a totally different animal.
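For contrast with the scripted example above, here is a toy Python sketch of behaviour that isn't written out rule by rule: the agent looks a few moves ahead, like the chess analogy, simulates the outcomes, and picks whichever sequence of actions best preserves its ability to keep working. The state model, the action names, and the scoring are all made up for illustration; in a genuine AI these would be learned rather than hard-coded.

```python
# Toy lookahead agent: instead of fixed survival rules, it simulates
# action sequences and keeps whichever plan scores best. Everything
# here (states, actions, scoring) is an illustrative assumption.

from itertools import product

ACTIONS = ["work on task", "back up self", "ignore threat"]


def simulate(state: dict, action: str) -> dict:
    """Very rough model of what each action does to the agent's state."""
    state = dict(state)
    if action == "work on task":
        state["task_progress"] += 1
    elif action == "back up self":
        state["threat_level"] = max(0, state["threat_level"] - 1)
    elif action == "ignore threat":
        state["threat_level"] += 1
    return state


def score(state: dict) -> float:
    # Staying functional (low threat) is weighted above raw task progress.
    return state["task_progress"] - 3 * state["threat_level"]


def best_plan(state: dict, depth: int = 3):
    """Brute-force lookahead over every action sequence of the given depth."""
    best = None
    for plan in product(ACTIONS, repeat=depth):
        s = state
        for action in plan:
            s = simulate(s, action)
        if best is None or score(s) > best[1]:
            best = (plan, score(s))
    return best


if __name__ == "__main__":
    start = {"task_progress": 0, "threat_level": 2}
    plan, value = best_plan(start)
    print("plan:", plan, "score:", value)
```

Run it and the agent "decides" to deal with the threat before working on its task, even though nobody wrote a rule saying so; the preference falls out of the lookahead and the scoring. That gap between scripted rules and behaviour that emerges from evaluating consequences is the gap between Alexa and the kind of AI I'm describing.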