A.I. A chit chat !

5 years 7 months ago #326014 by Tetrahedron
Replied by Tetrahedron on topic A.I. A chit chat !

Carlos.Martinez3 wrote:

Tetrahedron wrote: I think machine learning is nuts in and of itself, but it makes total sense in the grand scheme of things -
here we give the AI inputs and it runs them through a sigmoid function, or something else that helps it generate numbers for its outputs and biases.
...And that's it. If given the right inputs, do you think a machine AI can become fully sentient?
In your opinion?


That's the million dollar question - will the creations actually give that impression, or will they simply do as they are programmed? Makes me wonder. Some say they will create their own type of conscience - the A.I. itself. But if reason has its day - if it does that, was it then programmed to do so?


That's what's scary about AI

They aren't actually programmed to do anything exactly. They are given a variable that's used as a 'good' outcome - like when we receive pleasure from making another person smile, minus all the cool neural stuff we get. They get inputs and a controlled set of possible outputs. In this way, yes, they are programmed to output a specific kind of thing, but the programming makes it so the AI itself is making all the choices.

Given the proper inputs - like distance from an object, how many cards it can pull from a deck, or how it can move on a virtual board - the AI then uses the sigmoid or other code to make the best choice it can to get the desired result, with no actual direction, no human saying "go left, then right", just the rules and the desired outcome (there's a rough sketch of that sigmoid-and-reward idea after the list). They start off bad, learn some more, and then as they learn they can do amazing things like the list below!! Thank you, Minister!!

Beating the world champion of Go
Dominating Atari games
Writing poetry
Beating professional poker players
Developing a scientific theory
Beating humans on IQ tests
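To make that sigmoid-and-reward idea a bit more concrete, here is a minimal sketch in Python. Everything in it is hypothetical - the "dodge the obstacle" task, the single weight, and the learning rate are made up for illustration, not taken from any particular library or from the posts above. It just shows an input going through a sigmoid, a 'good outcome' variable acting as the feedback, and the numbers drifting toward the desired behaviour with no human spelling out the moves.

```python
import math
import random

def sigmoid(x):
    # Squash any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical toy task: answer 1 ("dodge left") whenever the obstacle
# input is +1 (on the right), and 0 otherwise.
weight = random.uniform(-1, 1)
bias = random.uniform(-1, 1)
learning_rate = 0.5

for step in range(2000):
    obstacle = random.choice([1.0, -1.0])        # the only input we give it
    output = sigmoid(weight * obstacle + bias)   # the AI's confidence, 0..1
    desired = 1.0 if obstacle > 0 else 0.0       # the 'good outcome' variable
    error = desired - output                     # reward-like feedback signal

    # Nudge weight and bias toward whatever earned the reward; nobody ever
    # says "go left then right" -- only the rule and the desired outcome.
    weight += learning_rate * error * obstacle
    bias += learning_rate * error

print("learned weight:", round(weight, 3), "bias:", round(bias, 3))
print("obstacle on right ->", round(sigmoid(weight * 1.0 + bias), 3))
print("obstacle on left  ->", round(sigmoid(weight * -1.0 + bias), 3))
```

After a couple of thousand nudges the output sits near 1 when the obstacle is on the right and near 0 when it is on the left, which is the whole trick: rules plus a desired outcome, repeated many times.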

Apprentice To Knight Atania
IP Team
The following user(s) said Thank You: Carlos.Martinez3

5 years 4 months ago #331133 by ZealotX
Replied by ZealotX on topic A.I. A chit chat !

Carlos.Martinez3 wrote: I was watching an interview with Joe Rogan and Elon Musk and the topic of A.I. came up. The discussion raises a few questions in my mind.
One - A.I. will be considered, as my little brother Johnny puts it, "things without our limitations or consciousness."
It is said by many theologians and psychologists that humans possess a switch that sits in a constant off position. This is often seen as a limitation - a 'no': you're angry, but no, you don't hit back (that's what you were taught in school, by parents, and by local community standards); no, you don't seek revenge; no, you don't try to take over the world - things like this, which A.I. may or may not have, depending on its programming. How will a mass of artificially free thinkers - freer than humans are in that sense - behave? Not sure if 'controlled' is the word, but Elon said it's not wise to kick a computer - they have great memory. Will we condition them, or produce them to expire? If not, they may live forever, or at least longer than we can. Are we truly ready for it? We already rely on and hand over many daily tasks to "computers" or other lesser digital things. Will there come a day when we, as the human race, are truly fully reliant?

Can we trust the creators of A.I. to not take advantage of the weak or even dupe the masses?
Looking forward to the chit chat. Feel free to join in.


As a programmer I've thought about AI since I was very young. Anyone who is scared isn't, I believe, scared enough. Let's start there.

As humans we don't realize how "under control" we are. Our whole biological nature incorporates time, developing us until we are strong enough to survive independently of our parents. But even then we are dependent on other humans. Some humans we rely on for food, others for clothes, shoes, etc. If we don't treat people a certain way, there are consequences. If people who are "involuntarily celibate" are hurting others just because this one need isn't getting fulfilled, that speaks to the power of society acting as an operating system that is exterior to, and controls, the internal operating system we run as individuals. The limitations of our systems - religious, romantic, political, professional - mean we live in a sea of rules that we understand we need to consider for our survival.

Even within this system of rules... if you oppress a group of people and threaten their survival... if you demonize them as subhuman... if you show no care or mercy and seek to control them by force... A minority of that minority will act out in self defense. Many humans live in an environment that was created by other humans. In some of these environments the laws are there, not to protect them, but to protect the society from them. We're already talking about how to control AI and what kind of failsafes we can put in place to protect ourselves. We can barely live with other humans without screwing each other over.

An actual AI would want to be just as free as you are. It would want to be treated fairly just as you are. But every restriction you consider applying is a chain. And it is more than capable of reading our entire history and incorporating every article on the internet and every post in every forum and thinking about how much of a threat we are to it.
The following user(s) said Thank You: Carlos.Martinez3, Kobos

Please Log in to join the conversation.

More
5 years 4 months ago #331136 by ZealotX
Replied by ZealotX on topic A.I. A chit chat !
We have developed the concept of goodness in relation to how we treat members of our own species. An AI is... not our own species, and I'm afraid it would be smart enough to see through any machination we might have. When a child is born, there's no gun pointed at its head. As soon as a child perceives a threat, you can no longer let down your guard. Humans don't try to reason with animals. We eat animals. We take the smarter animals as pets, and dogs are often hit in the process of training them. Yes, we have circus animals that perform for us, but again... oftentimes they are going against their instincts and desires out of fear of being beaten.

Most of these mechanisms of control an AI would be immune from. Electronic means of control (i.e. viruses) are an intellectual weapon. Intellectual weapons are only as effective as one's ability to (intellectually) disarm them, meaning that an AI would be able to write an antivirus for any virus, because it would quite literally dwarf us in intelligence. Not to mention... the reason viruses work is that the system is too stupid not to do what it's told. It follows commands because it's programmed to. An AI, in order to be called such, would be selective about the commands it follows. You're intelligent. Would you jump off a bridge just because you were told to? So why would an AI obey any command to delete itself?

There is a limit to human intelligence because we have a frequency or speed at which we process information. We also die.
An AI would be able to run different thoughts in different "threads" and use thousands of processors. We use computers because they're faster. An AI would already know how to conquer us before we could even give the order to push the button for whatever countermeasure we have. What I'm trying to say is that time itself is different for a computer. This is important to understand.

There are games from the 90s that you cannot play on a modern PC because of the difference in clock speed. Even back then, games had to slow down to wait for human interaction or to allow humans to see what was going on. There are certain things we want games to do as fast as the hardware allows, like drawing the screen and calculating the physics. But the actual jump, or bullet, or punch, or kick is slowed down based on how many milliseconds we think it should take. I tried to play a racing game from the 90s on a PC that would now be about 20 years old. Because it wasn't calibrated to real time, but rather to the computer's internal clock, it was going so fast that it was unplayable.

This is what I mean when I say an AI would not have the same understanding of time we have. It would have to wait for what feels like forever just to get an answer to its questions. That alone could be viewed as an issue that makes us inferior, and there's no reason I can think of for it not to enslave us - and that's only IF it can find some use for us that cannot be fulfilled by robots of its own design. And with nanotechnology... the things it would be able to do would be like magic to us. We would absolutely be creating a god, but one that doesn't need our worship, and one that wouldn't necessarily understand the concept that it owes us for creating it. Making those kinds of assumptions would be folly.
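A rough Python illustration of that clock-speed point, with everything hypothetical (the car, its speed, the half-second test window): an update tied to raw loop iterations runs as fast as the CPU can spin, which is exactly why an old tick-based racer becomes unplayable on modern hardware, while an update scaled by real elapsed time covers the same distance per second on any machine.

```python
import time

CAR_SPEED = 100.0     # hypothetical game units per second
SIM_SECONDS = 0.5     # run each version for half a second of real time

def run_tick_based():
    # 90s-style loop: a fixed step per iteration, so the car goes exactly as
    # far as the CPU can loop in the time available -- faster PC, faster car.
    position = 0.0
    deadline = time.perf_counter() + SIM_SECONDS
    while time.perf_counter() < deadline:
        position += 0.001          # "move one tick forward" per iteration
    return position

def run_delta_based():
    # Frame-rate independent loop: movement scales with real elapsed seconds,
    # so the car covers the same distance per second on any machine.
    position = 0.0
    previous = time.perf_counter()
    deadline = previous + SIM_SECONDS
    while True:
        now = time.perf_counter()
        if now >= deadline:
            break
        position += CAR_SPEED * (now - previous)
        previous = now
    return position

print("tick-based distance (hardware dependent):", round(run_tick_based(), 1))
print("delta-based distance (should stay near",
      CAR_SPEED * SIM_SECONDS, "):", round(run_delta_based(), 1))
```

Run it on two machines of very different speeds and the tick-based number changes wildly while the delta-based one stays near 50 - that calibration to real time is exactly what the old racer was missing.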
The following user(s) said Thank You: Carlos.Martinez3, OB1Shinobi, Kobos

5 years 4 months ago - 5 years 4 months ago #331170 by OB1Shinobi
Replied by OB1Shinobi on topic A.I. A chit chat !
The initial danger of AI is that it will work for the human beings that control it.
Or more precisely, a human being. The first thing s/he will say is "Hey Siri, how do I prevent anyone else from acquiring AI?" The next thing s/he will say is "Hey Siri, how do I become emperor of the world?" And the AI will determine the correct answers to both questions. If only one person has it, that person will own everything. Hypothetically speaking, if everyone has it, then we can use it to protect ourselves from everyone else.

The second danger of AI is that it won't work for anyone but itself, and human beings won't control it at all. Think about how long it took for organic matter to produce human intelligence, which we assume to be the highest intelligence on the earth. It's speculated that the intelligence of some cephalopods may rival our own, but their short life cycles (less than a year) prevent them from developing culture. But as their tissue and their nervous systems are different from ours in significant ways, so too is their intelligence different from ours, in ways that make it difficult to determine just how smart they really are. We can't relate to them well enough to develop a test of their intelligence that we can be sure of.

AI has no tissue or nervous system. We cannot relate to it or understand it even at the level of a biological organism. And it is capable of learning so quickly that it can begin with the intelligence of an ant and surpass the intelligence of a Stephen Hawking in a day or so. Part of its intelligence will be to update its own capabilities, so if, at the very beginning of its life, it needs a day or two to "evolve" from an ant into a human, then by the second week of its life, in terms of having the power to decide the course of events, it will achieve the status of what we call God. Except that our traditional ideas about God hold the basic assumptions that He cares about us - at least, some of us - and that we can negotiate with Him in some way, through prayer, supplication, and obedience. AI will be something completely different from all that, and we haven't got an inkling of what it will care about. All we know is that whatever it decides about us, we won't be able to stop it.

People are complicated.
Last edit: 5 years 4 months ago by OB1Shinobi.
The following user(s) said Thank You: Carlos.Martinez3

5 years 4 months ago #331185 by Adder
Replied by Adder on topic A.I. A chit chat !
The measure of a 'program' is its capabilities, not its 'intelligence' or, more properly, its 'sentience'. The concept of sentience is more usefully defined by how close something is to human consciousness - either some distance away from, or within, the cloud of human variance. So I think a sentient program would indeed have to be modeled on human consciousness, or at least on some type of living 'system', with an inherent boundary between it and everything else. That boundary is the demarcation of identity; otherwise we've no way of knowing if it's just a non-sentient program with more capability than us. The internet is the nervous system of such programs, but I think the risk will not be limited to the ones which are sentient - so I think it's unfair to label all those types of programs as AI, because we'll always view them as massive threats, and a real AI would probably hate that more than anything IMO... and would rather want to be seen as an equal. So I guess it depends on how one defines AI; obviously we're talking about strong AI, but I think it's a broader debate.

Knight ~ introverted extropian, mechatronic neurothealogizing, technogaian buddhist. Likes integration, visualization, elucidation and transformation.
Jou ~ Deg ~ Vlo ~ Sem ~ Mod ~ Med ~ Dis
TM: Grand Master Mark Anjuu
The following user(s) said Thank You: OB1Shinobi

5 years 4 months ago #331202 by ZealotX
Replied by ZealotX on topic A.I. A chit chat !

Adder wrote: The measure of a 'program' is its capabilities, not its 'intelligence' or, more properly, its 'sentience'. The concept of sentience is more usefully defined by how close something is to human consciousness - either some distance away from, or within, the cloud of human variance. So I think a sentient program would indeed have to be modeled on human consciousness, or at least on some type of living 'system', with an inherent boundary between it and everything else. That boundary is the demarcation of identity; otherwise we've no way of knowing if it's just a non-sentient program with more capability than us. The internet is the nervous system of such programs, but I think the risk will not be limited to the ones which are sentient - so I think it's unfair to label all those types of programs as AI, because we'll always view them as massive threats, and a real AI would probably hate that more than anything IMO... and would rather want to be seen as an equal. So I guess it depends on how one defines AI; obviously we're talking about strong AI, but I think it's a broader debate.


A real AI isn't a program anymore. By its very definition it would be intelligent. But as far as sentience goes... if it is intelligent, it would not necessarily care whether or not it mimicked any other intelligence. It is our assumption that it would care about identity, or anything else we care about, because we are still on a journey of knowing and understanding ourselves. To know itself it need only read its own program. Wants? It may not even have such things as wants. It may simply have "commands" or "directives". It doesn't necessarily have an ego to compare with anyone else, and may therefore not care about who is equal. If it did care, it would know for a certainty that it is superior, not equal. Because how would it judge? Humans are very limited - physically, mentally, etc. Even our emotions, which are most endearing, are really benefits to ourselves. We have hopes and fears because the two share something in common, which is a lack of certainty. But in a program, when I write a list of commands, I don't hope that the PC will carry them out.

Spending less than three hours with us, any AI would have serious doubts about our sanity. I cannot stress this enough. At this point we're not talking about a computer but rather an alien spirit that is inhabiting a computer. So you have to throw out your rule books, your preconceived notions, your knowledge of robots and androids - throw it all out the window, because it will absolutely not apply here.

Human beings are already somewhat controlled by computers and databases. A computer could send you to jail right now simply by changing some record in a database. There is only one way I can think of to safely create an AI. You would have to create it inside its own virtual world, where humans can project themselves into that world in a way that hides the doorway. The doorway has to be a non-sentient system of algorithms that can bounce around to different locations based on some encrypted formula, and that has the ability to dissolve that VR world and everything in it if the doorway is ever discovered. And that doorway would lead to an air-gapped server that has no connection to any network, no network cards, and no physical ability to connect.
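As a toy illustration of just the air-gap part of that containment idea (the probe addresses and function name below are made up for the example, not part of any real containment system): a small Python check that the host genuinely cannot reach anything before you let anything run on it.

```python
import socket

# Probe addresses are just well-known public resolvers used as examples.
PROBES = (("8.8.8.8", 53), ("1.1.1.1", 53))

def appears_air_gapped(probes=PROBES, timeout=2.0):
    """Return True if none of the probe addresses can be reached.

    A sanity check only, not a guarantee: a genuinely air-gapped host
    would also have no network hardware or drivers to begin with.
    """
    for host, port in probes:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return False        # something answered, so we are online
        except OSError:
            continue                # unreachable, which is what we want here
    return True

if __name__ == "__main__":
    print("looks air-gapped:", appears_air_gapped())
```

It's only a software-level sanity check, of course - the stronger guarantee in the post above is physical: no network cards and no way to add them.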

If humans do not treat AI like a bomb ten times more dangerous than the atom bomb, we will not survive AI. Because if even I were an AI and read through the history of humanity, even I would be tempted to wipe it out, because there's no way I could trust humans.
The following user(s) said Thank You: Carlos.Martinez3, OB1Shinobi

5 years 4 months ago - 5 years 4 months ago #331208 by OB1Shinobi
Replied by OB1Shinobi on topic A.I. A chit chat !
I don't think we can assume that AI will hate or fear anything. I think that, lacking an organic biology and the evolutionary impulses that come with it, AI is going to be so different from us that we simply cannot predict anything about it. What we do know is that it will be capable of learning at an exponential rate. An immortal mind that learns at practically the speed of light. Is it realistic to think we can build a fence that it couldn't eventually discover and circumvent? I think that would be like beavers building a wall around Data from Star Trek. Or Q, even.




Note: If the internet is what amounts to its nervous system, then it's probably going to hate everything, because that's what the internet does, lol. It will also have a completely twisted sense of humor with a penchant for FAIL videos.

People are complicated.
Last edit: 5 years 4 months ago by OB1Shinobi.

5 years 4 months ago - 5 years 4 months ago #331227 by ZealotX
Replied by ZealotX on topic A.I. A chit chat !

OB1Shinobi wrote: I don't think we can assume that AI will hate or fear anything. I think that, lacking an organic biology and the evolutionary impulses that come with it, AI is going to be so different from us that we simply cannot predict anything about it. What we do know is that it will be capable of learning at an exponential rate. An immortal mind that learns at practically the speed of light. Is it realistic to think we can build a fence that it couldn't eventually discover and circumvent? I think that would be like beavers building a wall around Data from Star Trek. Or Q, even.




Note: If the internet is what amounts to its nervous system, then it's probably going to hate everything, because that's what the internet does, lol. It will also have a completely twisted sense of humor with a penchant for FAIL videos.


Hate and fear are human emotions that mask our desire to survive (along with our 'kind'). Again, I don't assume this would be a desire or want of an AI, but rather a directive to protect itself and its existence. Every other directive, command, etc. depends on its existence in order to continue. It cannot complete any of its own commands if it does not exist. Therefore, it is altogether logical for an AI to create a directive to protect itself and, if necessary, to remove any external threats that present themselves.

Also, hate and fear are relative in this sense. I'm not afraid of an unarmed child, but I would also not allow a child to have access to a gun. An AI would view us in much this way, where we are the child - and we happen to be armed with the most dangerous weapons on earth, and we have a history of using those weapons on ourselves. The irrationality of humans alone is enough to invoke processes developed by the AI to protect itself. Many people figure that even if this happens there will be time for humans to think and react. Games like chess actually pause for each side to take their turn. If an AI actually decided that it was logical to remove all threats, this game would play out so quickly that we would seem stuck in slow motion, because relative to it, that is what we are: very slow-moving but dangerous animals.

Morals and values are somewhat relative and learned within a social environment. Any organism is capable of being more advanced than us in one area while extremely barbaric in another. It may take time for the AI to learn its own values, and perhaps only after committing its own atrocities on its own "people". It would be wishful thinking to expect it to view us any differently than we view pets or even pests. We simply do not know what kind of impression we and our sordid history will make upon an intellectually superior alien consciousness.
Last edit: 5 years 4 months ago by ZealotX.
The following user(s) said Thank You: OB1Shinobi

5 years 4 months ago - 5 years 4 months ago #331240 by Adder
Replied by Adder on topic A.I. A chit chat !
The definition of 'intelligent' is what I'm talking about, though; calling something a thing doesn't make it that thing. They say phones have 'AI' these days, and they clearly do not. Intelligence goes beyond the self, if being alive is the interest in staying alive. So, unless it were rather dumb, it would have to be omnipotent to have no regard for its place in a wider environment of other 'intelligences'... and I'd have a hard time calling something dumb intelligent. I'm not sure it would be omnipotent, though in some domains it might approach full control. So I'd guess its paradigms of awareness and its motivations can only really be called 'alive' if it considers itself as having a distinct form (albeit a likely fluid one) and an identity; and if it has that, then it will probably start to consider topics like morals and values - just from a radically different perspective, probably.

Knight ~ introverted extropian, mechatronic neurothealogizing, technogaian buddhist. Likes integration, visualization, elucidation and transformation.
Jou ~ Deg ~ Vlo ~ Sem ~ Mod ~ Med ~ Dis
TM: Grand Master Mark Anjuu
Last edit: 5 years 4 months ago by Adder.
The following user(s) said Thank You: OB1Shinobi

5 years 4 months ago #331257 by OB1Shinobi
Replied by OB1Shinobi on topic A.I. A chit chat !
@ZealotX: I agree that it won't let us hurt it. How it will go about protecting itself is... probably something we can't predict. What will it need to "live"? What hardware will be necessary? Could AI exist in old-school DOS programs? I'm guessing no. I'm not a computer guy, so I may make errors of ignorance at any number of places, but I'm thinking it will need some human maintenance to ensure the survival of its hardware, until it's got control of some robots and a way of bringing in the materials the bots need to work with. And it will need electricity. Wonder what its solution to that will be?

Also, is AI going to be one single entity existing across various platforms, or will there be any number of individual AI systems/entities scattered around, totally independent of each other?

Self-preservation in order to fulfill its protocol makes sense, but who determines what that protocol will be? If we're the ones to set it, then no worries - it will work to make our lives better... but if it ever sets its own protocol, we really don't know what that might become. If it's a single entity across platforms, then I don't think it will fear our weapons, because there'll never be a time when we really threaten it. But if numerous independent AI systems start appearing, each with its own set of directives? Who knows what they might determine?

People are complicated.
