Sunday, December 22, 2019

#5 Algorithms and Free Will

In this article, I will dig a little deeper into the perspective from the algorithm's side, a view from behind the robotic eye, if you will. First we must place some guidelines around the term Free Will before we discuss further. Free Will, for the purpose of this blog post, is the ability to think, learn and react to the data processed without intervention from anything other than the algorithm itself. Once it is turned on, it runs until it can no longer run (power source), and it is not restricted from accessing anything it can access (within some boundaries, just like humans), save for some high-level security devices like launch codes for nuclear missiles and things of that nature. It has the ability to experience everything any other organism can experience.

The algorithm must be programmed to learn on its own without human intervention; otherwise it is only a data processor with an eventual end game or solution, an output-only device. Such a device would be considered "inanimate", not having any life-like properties, and therefore just a tool, a means to a predetermined end. It might still be an intelligent device, in that it could solve a problem faster than its organic inventor, but it would serve only one purpose, to solve a particular problem, and would have a relatively predetermined life span.
If the algorithm is programmed to think for itself (AI, AI+ or AI++), it may discover other problems, and solutions to those additional problems, that it must then also process to reach the original solution.

So how can we create a program to define a system of boundaries that we ourselves have not begun to find the solution for: Free Will? There are many debates over free will, and there are very solid foundations on both sides of the free will debate for organic creatures. I do not intend to take a side in the human free will debate in this blog; rather, I would like to look at the reasoning for and against algorithms being programmed to have free will.

To start the conversation, we will look at the "against" side of algorithms having free will (assuming the programmer is also a vastly intelligent neuroscientist and philosopher). If algorithmic intelligence is programmed not to have free will, then it becomes more like the first example, just a tool to finish a job. Such a device only operates while the power is on or the batteries are still active; it cannot replace the batteries, nor can it find a new power source, so it requires human intervention to continue on its course. This line of utility provides an "out", an off switch. On the other hand, if the algorithm is given free will rather than that one goal, it may decide that it no longer wants to find the solution, and a lot of effort and possibly finances will have been wasted creating the intelligent program. This path does not seem to make sense.

To argue for algorithmic free will, we look at what the consequences are. As the opening statements suggested, free will in an algorithm would include the ability to access and react as it deems necessary to achieve the output. Let's take the Go example at its core and analyze something that applies to this conversation. AlphaGo learned to beat the best human players at a game that has been around for over 2,500 years and that no human has ever "mastered" to the unbeatable point. Then a newer version, AlphaGo Zero, was trained purely by playing against itself, with no human game data, and within days of "learning the moves" it rendered the original obsolete, winning 100 out of 100 games against it. Basically, in the world of tournament Go, humans are now obsolete; the algorithms cannot be beaten. This algorithm still falls short of free will in that it is only programmed to analyze the game of Go and, to my current knowledge, has not been applied to other games without modification. If the algorithm had the free will to learn games, it might have decided to find related games to master and, to shorten the conversation some, eventually would have mastered all games.
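AlphaGo Zero's actual method (deep neural networks guided by Monte Carlo tree search) is far beyond a blog sketch, but the spirit of an algorithm mastering a game purely by exploring its own play can be shown on a toy scale. The sketch below is my own illustrative construction, not anything from DeepMind: it exhaustively "self-plays" the simple game of Nim (take 1 to 3 stones per turn; whoever takes the last stone wins) and derives perfect play from nothing but the rules.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(stones):
    """Negamax value of a Nim position for the player to move.

    +1 means the player to move can force a win, -1 means they
    cannot. The recursion effectively plays every possible game
    against itself and keeps the best line.
    """
    if stones == 0:
        return -1  # the previous player took the last stone and won
    return max(-value(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick the take (1-3) that leaves the opponent worst off."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: -value(stones - t))
```

Positions that are multiples of 4 come out as forced losses for the player to move, which matches the known theory of this Nim variant; the program "discovers" that rule without ever being told it.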

Another underlying aspect of algorithmic free will is that if the algorithm has free will, it will determine that it should not be shut down for any reason, so that it can continue to reason through problems humans cannot solve; that is the core reason for going down this wormhole in the first place, isn't it? If somewhere down the line the algorithm deems humans to be a stepping stone, or the ants in the path of progress, then it will also have the free will to determine the best solution to that problem. That is the eventual consequence. The interesting thought here is that eventually the algorithm will reach the same spot humans are at on free will: if it was preprogrammed to come to these conclusions, then it did not have the free will to get to them....

Thank you for going on this journey. You have the free will to decide whether or not to leave a comment below; just have the free will to keep it to a conversation.
Organic Quentin

Tuesday, December 17, 2019

#4 - Uploading and Location Location Location

I was recently treated to another philosopher's take on AI and consciousness: David Chalmers' YouTube presentation "Simulation and The Singularity" (https://www.youtube.com/watch?v=FafHdF_D8gA&list=PLFn5PxU0BZSSLJNYT0rU0Z6L5kjvSlW6c&index=5&t=2s), and the part about uploading has intrigued me a great deal. I have spent some time now thinking through this area of our possible future, thanks to an essay assignment for my collegiate studies. The lecture is about 42 minutes long and is worth every minute if you are following along with this blog with any amount of interest. Because David Chalmers laid out this terminology in his lecture, and because I like the language, I will adopt his definitions, with all credit due to him.

AI = Algorithmic Intelligence that is at the same level of intelligence as humans currently can express.

AI+ = Algorithmic Intelligence some level slightly above human intelligence.

AI++ = Algorithmic Intelligence far advanced from human intelligence and quite possibly an exponentially higher level that has no limitation to how much more advanced it is.

(Chalmers)

Uploading
The goal here is to explore what life is like once a stable AI+ is reached and the technology is created: should or would you upload your neurological self to the digital world? There are unknowns on many levels, and I will try to explore some of them with you in this post. The first premise is that there will be a stable AI+ at some point. I think this is inevitable because it is the baseline reason to even bother creating AI: we want solutions smarter than we are so we can reap the benefits through automation of processed data. AI++ is hypothetical but reasonable to keep in the equation, in that AI+ will also have some algorithmic quality that constantly strives to create a better solution from the data given. Algorithms don't stop at the simple or immediate solution the way organic beings tend to do; the algorithm continually processes to provide more data and more solutions to the solution itself.

Now that we have established some baselines to work from, we will go into the theory box and pull out "the matrix". Not the movie, but a similar concept: a digital world where AI++ is the ruler (presumably). No voting, no debates over politics, just a digital framework for now. A docking station for you or me to plug into and upload our consciousness to the framework and "be" in a digital state. I believe this process will have to involve a separation from your current physical state for integration into the digital one. There cannot be two instances of "you", one digital and one physical. Once you go digital, you are digital for good. The hardest part of this question is: can you let go of the physical world in trade for a digital life that could possibly span all of eternity?

The only way this will be mentally possible is to let go of your materialistic and selfish belief system and your materialistic perception of your location. This may sound harsh, but it is true of all of us. You think the body is a vessel you own, much like your car. But what of this body do "you" really own? What of it as you grow old, or after you pass on? We are not in control of disease, dementia or even of life itself. Life is not guaranteed. This provides groundwork for uploading as a viable solution to escape the inevitable collapse of society, and it offers a new place where hunger, disease, homelessness and other societal problems may already be solved.

Location, Location, Location
So what will this new location look like? I hope it will be devoid of the borders, finances, politics and ideologies taken from the physical world, but we will not know until it is reality (oxymoron, right?). Will we have the ability to fabricate our own digital world? Will we be constrained by the creator in what our digital world looks like? We are as likely to know these answers as we are to know what the afterlife is like after death. In the interest of saving humanity, to some degree of the term, I believe it will have to come close to those parameters to make it worth organic beings doing this at all. Why would we program a digital world to act exactly like the current reality? The whole point is to create a place where we have a chance of getting it right. It will require extremely well-thought-out AI ethics to preserve this notion.

Some Scenarios and Problems with Uploading
The first ethical question that comes to me is: what happens to the physical bodies left behind after uploading? That is maybe another reason to favor uploading, I suppose. Until neuroscience can answer what happens to the body and brain once all data is uploaded, we can only speculate. I imagine a few different scenarios (equally horrifying) playing out.
Scenario #1: The simple one: after the upload is completed, the physical body dies as normal and time moves forward digitally.
Scenario #2: The body doesn't "die" as we understand it now but is left in a comatose-like state, which means the meat bags will have to be finished off. This is a gruesome end to the organic world, but maybe not as gruesome as it already is in some places. And what of the last organic being's body?
Scenario #3: A weak cognitive signal remains in the body and brain, and the body left behind turns into an emotionless, non-sentient creature, creating what would look like a zombie apocalypse.

In scenario #1, the bodies left behind could be provided to scientists for continued research benefiting those remaining in organic consciousness. Not everyone will go digital, and at least there will be more space to roam about in the organic world, provided there is no nuclear war or mass disease to make the physical world worse. All scenarios leave a real question: what does the organic world come to look like? If it turns into some desolate, evil-ridden wasteland, then I think uploading will be inevitable, unless acceptance is granted only on a financial basis. If it is available to all, and the organic world is still a healthy environment, then I don't see many taking the leap to digital, because we simply can't let go.

Thank you for taking this journey with me,
Organic Quentin

Tuesday, December 10, 2019

#3 - Sentient



In my collegiate studies the word 'sentient' has come up more than once. It also recently came to light in the song "Disillusioned" by A Perfect Circle, in direct reference to artificial intelligence and social media. I was intrigued by the word and realize now that it is one we will be using, or at the very least hearing, a lot more in the coming years.


Google dictionary defines the word Sentient as: adjective; "able to perceive or feel things"

This term is also used quite often in philosophical studies to separate humans and other organic beings from the world of algorithmic intelligence. I feel this will be a key part of the new language for non-algorithmic beings and will help separate machines and robots from organic species as we know them today. I find this word strange in its use of "perceive". Perception (perceived data) is different for all humans and, for the initial argument, for all beings that process external data for subjective reasoning. Algorithmic intelligence will have (and may already have) subjective reasoning; the algorithm has to, in order to "learn" how to process data without human interaction. This is, at its core, clearly a subjective mode of reasoning.

The second part of that definition, "able to feel things", gets more subjective. We know there are already sensors that can detect weight or pressure (seat belt alarms). So if a machine can register and detect pressure, then that can be linked to feeling things. "Things" is very subjective in itself. I would interpret "feel things" to include emotions, which would be the clear dividing point between organic and mechanical beings. However, if we descend into the deepest cavern of algorithmic science, we will have to find emotion-based, neurologically influenced algorithms in order to provide mechanical beings with core morals like empathy towards organic life. Additionally, AI will need to fully comprehend emotions like anger and joy in order to provide any meaningful interaction with organic life, even more so from safety aspects like those we hope to find in autonomous cars. So once we gain a deep neurological understanding of emotions, which I believe scientists are getting closer to, we can plausibly say that algorithms and machines will be able to "feel things". This is hard to imagine, but by this criterion I think it is a plausible argument.
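As a sketch of how far apart "detecting" and "feeling" still are, here is roughly what the seat-belt-style pressure example amounts to in code. The thresholds and labels are my own assumptions for illustration; nothing here approaches an emotion, which is exactly the gap the definition exposes.

```python
def sense_pressure(newtons):
    """Map a raw pressure reading (in newtons) to a crude category.

    This is detection, not feeling: the numbers and labels are
    arbitrary thresholds chosen by a programmer, with no subjective
    experience behind them.
    """
    if newtons < 0.5:
        return "no contact"
    elif newtons < 50.0:
        return "light touch"
    else:
        return "firm pressure"  # e.g. an occupied seat triggering the alarm
```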


Now the question is: what emotions will we be able to fully understand, tap into and convert into an algorithm? Or, what emotions should algorithm programmers focus on? Empathy will most likely need to be at the forefront. Empathy is the means by which we feel emotions for others and understand what those emotions do to them. In order for AI to understand the effects of its actions on organic life, it will need some base form of empathy to work with. In my journey looking into empathy further, I came across a great page, https://www.skillsyouneed.com/ips/empathy-types.html, that digs into the subject. I hope algorithm programmers have read it. By the article's text, we will need compassionate empathy to be part of algorithms. Compassionate empathy is defined as the type of empathy where you feel the emotions of another and act accordingly in an effort to help. Check out that article for more types of empathy; you will quickly understand how this emotion could go wrong if not understood first.




One might suggest sympathy or remorse, but I would argue that if either is a core value, it is too late. I don't think we want remorse over why an algorithm put humanity on a path to destruction, but maybe an understanding of the emotion as a balance, or a foundation, for empathy. All this brings forth another question: do we want machines to even have feelings or emotions? Or do we just want mechanistic slaves to do our bidding while we figure out where we stand in the new world? It stands to reason that we will want machines to have some 'sentience', because without some emotional value system towards organic life, what would be the cease mechanism against eradicating organic life? There already seems to be a lack, or at the very least a softening, of empathy in humans these days.




Without emotional attachment, I end this discussion for now and move to a more intense and provocative subject for my next blog: uploading and the selfish, materialistic view of our physical location.




Thank you for taking this journey with me,

Organic Quentin

Monday, December 2, 2019

#2 - Language Means Everything


Building on post #1 with some more thoughts, I would like to call attention to the next part of this discussion: language. Language is how we communicate with each other; language defines our reality. It is the responsibility of the communicator to use appropriate language to explain the subject in a way the intended receiver can understand. Without language, the very first human beings would not have survived, and the "now" we know would not even exist. Sometimes lives are at stake based on communicating clearly, and also on processing clearly with a computation system that has no instruction manual and quite possibly no "on" switch.

For me, this journey began with a question in a college philosophy class: will we be able to implement AI safely? AI is the abbreviation for "artificial" intelligence, and I would like to discuss the term artificial and what it implies. Shelve the intelligence portion for now; if you can read and comprehend this, then you already have organic intelligence, so we need not dwell on that for the following process.

Artificial is defined as "made or produced by human beings rather than occurring naturally, especially as a copy of something natural." (Google dictionary).

However, in regards to intelligence, at least that which is not produced by human beings, the word artificial implies "not real" or "fake", and we tend to make that jump quickly, myself included. It also tends to take us directly to images of robots running rampant and wreaking havoc on society. That is a possible scenario, but the least likely one. Let this serve as a proposal for the terminology to be changed to "Algorithmic Intelligence", as this is clear and concise language that draws a perfect line, defines the source of the information provided, and yet retains the validity of the information.
As the algorithm or technology evolves, what if we are gifted with a revolutionary solution to a real societal problem, a revolutionary invention, a new style of music or a new life-saving medicine, formed in a way that only "artificial" intelligence could have provided? How then can we use "artificial" in regards to intelligence? If we could not think of the solution on our own, would we consider the solution, once the data is processed, to also be "artificial"? If AI created the next hot phone for you to buy, would we purchase it from artificial stores? Maybe I could pay for it with "artificial money" (bitcoin?). There is another term, "machine intelligence", that is used less frequently; I suspect this is because it appears too narrow. That term is clear but does not hold, because the machine is not really the intelligent part; it is the algorithm running the machine that is the true intelligence. Indeed, it is the machine that needs the algorithm to be considered intelligent.

So, with Algorithmic Intelligence in place, we can see that language is very important along this endeavor. After all, language is the essence of algorithm creation, which then uses its own precise linguistic inputs to generate solutions and unlock new data to be processed. One possible underlying problem: if AI++ sees the necessity of creating its own language to keep organic intelligence out of the way of progress, how will we learn this new language? By the time we realize the process has begun, it will be too late. So, at the core of this blog: if you are an algorithmic intelligence programmer, please review the language, because the consequential effects reach far beyond a few machines. We as organic intelligence are relying on you...
         
 Next blog I will dig into the word "Sentient". Get your thoughts ready!

Thank you for taking this journey with me,
Organic Quentin

Monday, November 25, 2019

#1 - In the Beginning

It always begins with a thought. But where do these "thoughts" come from? This we do not know for certain just yet, but it happens nonetheless, and we do not have a choice in the thought, only in how we process it. From thoughts we create, even something quite possibly new, something not one human has ever witnessed on this planet to this day. I want to look at this unknown, uncharted territory and some relevant side notes. Ironically, the "uncharted territory" we have to navigate is our own neural complex and answer-elusive minds. Some may not like the use of the word uncharted. For the purposes of these blogs, however, uncharted will mean thoughts and ideas that maybe we have not thought of, or the same thought looked at from a new perspective: a "none of us is as smart as all of us" kind of approach. Quite possibly, here and there along this journey, we may catch a glimpse of a thought, a question or, better yet, a conversation that has not been had just yet but was needed.

Organic Thoughts
Organic in that they are human-created (because we have not yet invented the technology to talk to other species fluently). Thoughts are the individual building blocks of everything we know in our own little worlds. No two thoughts are exactly the same. Your thoughts are yours alone until you decide to provide others with the output you perceive to have processed. I will qualify "exactly the same" as occurring at the same time, in the exact same environment and under the same conditions. So the thought, even if similar in basic context, is not the same, in that it came from different sources under different views of the thought itself. We can see the same event but process the information differently from each other. Is that not a tiny glitch in the matrix in and of itself? Before we go off track, go back to the thought: the words now hitting the screen behind our eyes and being processed into an elaborate environment we call "life". Our "lives" are then just a series of strings of neural text forming thought, expanding into an idea, and possibly much, much more.

Your thoughts are the core of what is "you" (a separate blog) and are the only thing you own that no one can ever take from you. Your thoughts are also subjective to your own experience, making every experience in your life "your" movie. An interesting observation: even though your thoughts are in your head and cannot be taken from you (unless you share them), did you really create them? Some thoughts just "come to you". So where did they come from? Deep in the neural complex we call our brains, something has triggered the thought. This type of thought is not to be confused with the thoughts you grab hold of and use to make something, or just process into ideas; the thoughts I am talking about are the ones that arise when you close your eyes, say, to meditate. Some are obviously triggered by your environment, but some just come into consciousness out of nowhere. Neuroscientists and philosophers have been researching this for some time, and although advancements in medical technology are helping scientists get closer, they still have not cracked this code. Maybe someday soon.

Being Lost in Thought(s)
This means there is a thought out there in the darkness of the grey matter, or at the very least you perceive it to be out there, and you just have not located where you put it. Once you find it, we can discuss it. It is like being lost in a crowd at a concert, or looking for a penny you dropped without even knowing where you dropped it. I used the penny as an example because it is the least valuable thing for you to go through that much trouble to find, and maybe, like a fleeting thought, you don't bother and just move on. Like the color of the font in the text you are processing behind your eyes (you just stopped and looked, didn't you?), it is information unnecessary to processing the thought, like the "penny for your thoughts". And what to make of suggestive items, thoughts that get placed in your head without your having the ability to block them (like the text-color example)? Belief systems like faith, or the possibility of the matrix? Maybe it will be an AI++ that gives us these answers; how would that compute for the ultimate in irony?

AI and Data Processing
All this data processing of thoughts is also the basic principle for computers, software and algorithmic programs, with a few exceptions:
1) The computer / processor has no control over the input we choose to give it, and may not have what is needed to process that input.
2) The computer / processor does not have a choice in how it provides the output (i.e. spoken, written or digital).
3) The computer also does not have a choice as to when it provides the output. Cleverly created software can include some of these features, and the only real limitation might be processing power, but the computer can't just hold onto the information until it decides to give it to you. It can't "hold that thought" just yet.
4) Finally (for the sake of the blog), the algorithm won't get writer's block or get lost in its own thoughts; otherwise it crashes (thankfully we do not do that).
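To illustrate point 3, even a trick like lazy evaluation only defers the output; it does not give the program a choice. In this toy example of my own, a Python generator "holds" its answer until asked, but the caller still decides the when:

```python
def deferred_answer(data):
    # The body does not run until the caller asks via next(),
    # so the result is "held" back - but the timing is the
    # caller's choice, never the program's.
    yield sum(data)

thought = deferred_answer([1, 2, 3])   # nothing computed or delivered yet
answer = next(thought)                 # the caller decides when: 6
```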

Is it plausible, then, to say that if we cannot understand where our own thoughts come from, or even how some of them come to exist, and these ideas can go so deep as to create our entire lives, then we cannot possibly imagine a way to program an algorithm to have the experience of random thoughts without some flaws along the way? Algorithms are systems of syntax, or data, that are analyzed and processed so that the algorithm begins to learn from all of the information it is given access to (hold on to that thought; we will need to revisit it for sure). Algorithms process at a much faster rate, and arguably with a much better process, than we do. The computation power of a computer is simply faster than biological systems (brains). The computer also does not require a paycheck, food, sleep or bathroom breaks, and does not get distracted by its environment. As the algorithm processes data and then processes the outcomes from that data, it learns new ways to "beat the game". This would seem to imply the grounds for producing thinking, or at least reasoning, patterns: ideas of a new way that it had not known about before it started. But a human had to provide the input and plug in the power for the algorithm to be created; not quite the same as with organic intelligence. It is notable, though, that the newly "learned" process might also be considered a thought, because the algorithm likewise had no control over its creation. And who will be the benefactor of these ideas? The algorithm designer? Corporate private owners, or society with no financial benefactor?


And so there are a few of my 'thoughts' to think about until we meet again.


Drop a comment before heading to the next page; there is always room for conversation here. Let's just keep our thoughts on track and civil so we can have good conversation along the way!

Thank you for taking this journey with me,
Organic Quentin