On that note, what happens when you play Sanctuary Onslaught with a @maxinered & you're both a little weird...? Let me take you for a trip...
#warframe #pupperframe
(+) Potential lore spoilers
First we started discussing what exactly "the void" is, arguing that the closest approximation is likely the area just past the event horizon of a black hole, where regular physical laws no longer apply, which might also explain why sentients do not function there.
This however raises the question of what AI & sentience are, as well as what makes the difference between a sentient & a cephalon.
(+) Potential lore spoilers
A sentient is a machine which chose to be something different, while a cephalon is like a sentient rid of things deemed unneeded & leashed to a set purpose. So how could Hunhow use Suda to shield sentients in the void?
We think that it is not so much that they are different, but rather that Hunhow used her as a reference point in regular space, to map the difference in the physical rules. It's not perfect & they still deteriorate, but it's better than nothing.
(+) Philosophy & AI
So what exactly *is* sentience...? Moo brought up the usual definition, namely that of agency, but I argued that sentience is rather a matter of purpose: sentience is the state of having a purpose, yet being able to consciously decide to alter that purpose into something different.
This then raises the question: where do we draw the line...? Is sentience binary? No, not really; rather it's a gradient, & someone can be more or less sentient.
(+) Philosophy & AI
That then brought us on to evolution. Many regard evolution as a random process, but while it's driven by randomised factors, evolution itself behaves like an intelligent process, much akin to domestication. And just as with domestication, you also see unintended changes: changes which happen outside the desired goal of the process. Essentially the process might optimise for one aspect, but pick up a couple more in the baggage along the way.
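If you want to see that baggage effect concretely, here's a toy sketch in Python (everything here, traits, numbers & all, is invented purely for illustration):

```python
import random
random.seed(0)

# Toy sketch: breed for "size" only; a linked trait rides along as baggage.
pop = [{"size": random.gauss(1.0, 0.2), "coat": random.gauss(1.0, 0.2)}
       for _ in range(200)]

def child(parent):
    drift = random.gauss(0, 0.05)  # linked genes: one mutation nudges both traits
    return {"size": parent["size"] + drift + random.gauss(0, 0.02),
            "coat": parent["coat"] + drift + random.gauss(0, 0.02)}

for _ in range(30):
    pop.sort(key=lambda ind: ind["size"], reverse=True)
    pop = [child(p) for p in pop[:100] for _ in (0, 1)]  # top half breeds twice

print(sum(ind["size"] for ind in pop) / len(pop))  # selected trait: rises
print(sum(ind["coat"] for ind in pop) / len(pop))  # linked trait: rises too, uninvited
```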
(+) Philosophy & AI
Now pairing evolution with "intelligent" immediately gets the keyboard warriors out of their chairs, because they assume you're talking about the religious "intelligent design" pseudo-science thing. That's cultural assumptions at work.
Similar to this is vegetarianism, where telling someone you're vegetarian immediately puts them on the defensive, because they automatically assume you're one of those hostile ones bashing others over the head with it.
(+) Philosophy & AI
Now it might not be true that all vegetarians are hostile, but that does not change the fact that it's true to that person.
So essentially, what is truth? We have the global truth, which can be hard to define but is the true truth, & then we have the personal truth, which may be flawed, because it is unbendable unless the person in question accepts having their truth altered. If they won't listen, it will never change.
"Damn you personal truth...!"
(+) Philosophy & AI
Now here we made a short pit stop, talking about insanity effects in video games, as well as their implementation & thus their effect on immersion, but then we moved on to emotions & AI, because what is emotion...?
Emotion is essentially a reward for fulfilling a certain function; thus you can say that emotion itself is very much needed as the feedback that makes a reward function... function. This goes for us, for animals, as well as for AI.
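To make that concrete, here's a minimal sketch of emotion-as-feedback (the names & numbers are invented, nothing canonical):

```python
# Minimal sketch: emotion as the feedback signal of a reward function.
# The "feeling" is just the reward earned for fulfilling a function.

def reward(goal: float, outcome: float) -> float:
    """Feedback signal: how well the outcome fulfilled the goal (1.0 = fully)."""
    return 1.0 - abs(goal - outcome)

goal, behaviour = 0.8, 0.2
for step in range(5):
    feeling = reward(goal, behaviour)      # the "emotion" this attempt earns
    behaviour += 0.5 * (goal - behaviour)  # feedback nudges future behaviour
    print(f"step {step}: feeling {feeling:.2f}")
```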
(+) Philosophy & AI
By that definition, emotion is in no way a thing exclusive to us. We already know it's bogus when people claim animals don't feel anything, but we now postulate that in order for an AI to function, emotion is a necessary consequence of that process.
Now these emotions might be different from ours, but that does not make them lesser; they're simply made up in a different way.
(+) Philosophy & AI
This for instance raises the question of "what is happiness?", because you could define it as fulfilling such a function to the fullest. However, happiness has so many complex factors involved in its creation, & sometimes requires more or less to be achieved; it shifts, so perhaps "satisfaction" would be a better term for what fits an AI...?
(+) Philosophy & AI
So satisfaction is bound to a specific reward, but may be one factor leading to happiness; thus happiness can be seen as a desirable "good feel" state, created to optimise positive inputs from a variety of sources in both good & bad circumstances. It is a natural process that tries to attain the highest state, & additionally it is not mutually exclusive with other states.
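In code, that split between satisfaction & happiness might look roughly like this (a loose sketch; the signals & weights are invented):

```python
# Sketch: per-reward "satisfaction" signals feeding one aggregate
# "happiness" state. The signals & weights are made up, not canon.

satisfactions = {"fed": 0.9, "rested": 0.4, "social": 0.7}
weights = {"fed": 0.2, "rested": 0.3, "social": 0.5}  # these shift over time

# Happiness as an aggregate over many inputs, good & bad alike:
happiness = sum(weights[k] * satisfactions[k] for k in satisfactions)
print(f"happiness: {happiness:.2f}")  # 0.9*0.2 + 0.4*0.3 + 0.7*0.5 = 0.65
```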
(+) Philosophy & AI
So in conclusion, I postulate that rather than manifesting as a mere mathematical formula, the function which defines our states, as well as those of an AI, must instead manifest as a dynamic, self-learned fact function, similar to what you see in fact-based programming, where if the fact "it is hot" is fulfilled, then we turn down the thermostat.
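The static half of that idea is easy to sketch; the dynamic, self-learned half is exactly the open question (the facts & actions below are hypothetical):

```python
# Sketch of a fact function: fulfilled facts trigger actions.
# A self-learned version would grow these rules on its own.

state = {"temp": 27.0, "thermostat": 22}

facts = {"it is hot": lambda s: s["temp"] > 25.0}
actions = {"it is hot": lambda s: s.update(thermostat=s["thermostat"] - 1)}

for name, holds in facts.items():
    if holds(state):          # the fact "it is hot" is fulfilled...
        actions[name](state)  # ...so we turn down the thermostat
print(state)                  # {'temp': 27.0, 'thermostat': 21}
```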
Then we got Khora. ❤️ ^-^
(+) Philosophy & AI
An example of this is that it's possible to be simultaneously happy & sad, or hot & cold, depending on the defining factors; thus the reward does not obey a normal scalar or Cartesian format. That means that rather than manifesting as a mere mathematical function, what we're working with here is something significantly more fluffy. Luckily, in an AI we do not actually write that function ourselves... but how does it look...?
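One loose guess at the shape of such a non-scalar state (purely illustrative; the axes & values are made up):

```python
# Sketch: a non-scalar emotional state. "Happy" & "sad" are separate
# axes rather than two ends of one number line, so both can be high.

state = {
    "happy": 0.8,  # e.g. we just got Khora
    "sad": 0.6,    # ...while something else still stings
    "hot": 0.3,
    "cold": 0.7,   # hot & cold at once, depending on the factors
}
assert state["happy"] > 0.5 and state["sad"] > 0.5  # simultaneously both
```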