Not-Self in Sci-Fi: Blindsight by Peter Watts

2 weeks 4 days ago - 2 weeks 4 days ago #118918 by Chris Marti

I kinda feel that the narrator is less an exploration of consciousness/interiority and more an exploration of narcissism. The "removed observer" mode is kinda characteristic of the emotionally isolated manipulator mindset. There are scenes where this crumbles, revealing a kind of proto-humanistic person wanting/desiring to emerge from the sterile "objective" attitude. I even feel that the author's (intentional? unintentional?) framing of the story was a way to suck in sci-fi geeks who are socially awkward... and give them, through the story, an experience of relationships and of starting to wake up to an interior emotional life. Almost like the author was working through some of the same feelings... but who knows?

None of this occurred to me. Interesting. Can you say more? 

One thing that occurs to me is, again, the notion that the novel is a cautionary tale, showing the reader how awkward and just plain weird Siri is. And how despite his augmentation and supposed lack of first-person view, he's at root just as human as the rest of us.

2 weeks 4 days ago #118919 by Shargrol
Yes, I think that's right in the big picture. The story reveals the humanity of Siri (especially the scene when he connects with his father)... or maybe you could say that it reveals the potential in Siri -- I don't think he's quite there yet. I suspect the other books in the series will show a continuing transformation in Siri.

I don't know quite how to argue the case, but I was constantly seeing Siri as emotionally like a 12-year-old boy discovering girls, vaguely at odds with his parents, overwhelmed by the "adult" world, and "becoming small" and observing. So the narcissism that I'm talking about could also be described as self-centered immaturity. I guess the narcissism aspect is that Siri really wants to be right in the middle of the adult world, but doesn't want to be involved, and has a bit of a victim mentality when it comes to things --- wants to explore a relationship but doesn't really want a relationship --- he doesn't really own his situation in the same way that the others do.

2 weeks 4 days ago #118920 by Philip Stone
Siri is the student and Sarasti is the Zen master who leads him (and everyone else) around by the nose and then confronts him with that humanity.

2 weeks 3 days ago - 2 weeks 3 days ago #118926 by Chris Marti
Yes, my take on the Siri/Sarasti relationship is like that - Sarasti is teaching Siri how to realize his humanity, albeit under dire, possibly species-ending circumstances. That's one more reason I think the book is a cautionary tale. Siri needs his humanity, and humanity needs Siri to be fully human, not an immature boy-man with augmentation that may actually be preventing him from getting to where he needs to be. The Rorschach creatures may be fast and "smart" in a highly automated way, but they're the embodiment of what the author seems to be warning us against - the total lack of an internal life, a first-person POV, sentience, and consciousness.

2 weeks 6 hours ago #118962 by Philip Stone
Random thoughts this morning: Is a self necessary for something to have meaning? Or is meaning just another term for how this organism maps the world? Is meaning something that could arise in a scrambler, or in Rorschach?

2 weeks 6 hours ago #118964 by Chris Marti
What do you mean by "meaning?" Not joking - I think that's the rub.

2 weeks 5 hours ago - 2 weeks 4 hours ago #118965 by Philip Stone
It's one of those words, like consciousness: I know what I mean but I can't pin it down. I suppose there are several kinds of meaning - linguistic, in the sense that words/symbols point to something. Another would be purpose, as in 'What is the meaning of life?'

Could a scrambler point to a sense of purpose?

Art has meaning - would not-self organisms produce art?

Edit:
- and art has emotional impact, how does that play out for a scrambler, if at all?

2 weeks 4 hours ago - 2 weeks 4 hours ago #118966 by Philip Stone
Rorschach appears to get angry at one point, but if there's nobody in there, then is it just a clever simulation of emotion? How does it differ from an angry wasp? Is an angry wasp more emotional, more 'real'?

1 week 6 days ago #118976 by Chris Marti
Yeah, those are the questions we're left with after reading "Blindsight."

I've been thinking about it off and on (and will now expose my personal bias) - I think real intelligence requires first-person, self-aware sentience. I think to call anything else (Rorschach, et al) truly intelligent may be only technically accurate. This sentience can come in many forms, even group form, but some form of sentient view on experience has to be there. There has to be being(s).

1 week 6 days ago #118981 by Chris Marti
Having said that stuff in my last comment, it would be difficult for me to distinguish between what we currently call "living being" sentience, and a marvelously programmed, complex version of a machine/deterministic sentience that is wicked good at mimicking being. Maybe there's no real difference, but it feels like there is to me right now, through my haze of first-person self-awareness.

Who knows?

1 week 5 days ago - 1 week 5 days ago #118986 by Shargrol
So here's my theory of self/consciousness...

Everything we do is made up of accumulated habits. We don't really "know how to ride a bicycle"; it's actually a hundred little mini-habits that get wired in, which together allow us to ride a bicycle. (This is amusingly tested by giving people bikes that work exactly the same except the handlebars are reversed - even though you "know" what to do differently, the body just can't do it: Reverse Steering Bike on National Geographic Brain Games - YouTube.) There is nothing conscious/intentional about most things; it is really just a manifestation of past habit-skills.

But everything is like this. Even interpersonal stuff, psychology, etc. All past habit-skills.

These micro-habits are created and changed through intention: narrowing awareness to sensations and intentionally adjusting the body. This is something that only exists in the present moment and can only be done for a short while. This is learning, rewiring. Like balancing: lots of tiny sensations, lots of adjusting the body to fit some idea of balance. Classic baby-learns-to-walk idea: intention to move somewhere, lots of failures, eventually building a habit. A baby doesn't suddenly understand "how to walk"; it accumulates the micro-abilities/micro-habits iteratively.

So really a lot of the things we identify with the self (abilities, skills, knowledge) aren't really "us". They're accumulated habit.

The "self" is another algorithm of the mind that basically judges how close we're meeting our intentions. It's an odd thing that is so simple minded that it focuses on one aspect of living and in the moment really truly believes it is the MOST IMPORTANT THING. This is the felt feeling of being a self. We all know that experience changes, but the assessing-judging and the "am I on track?" vibeness of being alive is the thing that FEELS like self. It's a bit of a tautology: the thing that feels like our most important thing (our self) is the thing that identifies the most important things. (Read that a few times, it was a mind blowing insight to me... but I'm dumb.)

So 99% of our life is unconscious, wired-in habit, and 1% of our life is intention, which can slowly create/change habits. The aspect of mind which judges whether our attempts are achieving our intention is what >feels< like a self.

As for what part of mind knows/infers this... That's a good question. :)

1 week 5 days ago #118988 by Chris Marti
I see nothing to disagree with in there, shargrol.

For anyone:

In the vein of talking about Blindsight, is the sense of self required in order for a thing to be called intelligent or sentient? Can any matter, if arranged appropriately and without a sense of self, be sentient? Human beings are matter arranged in a way that creates sentience/sense of self. In Blindsight, we're left with the notion that because we're human and have this sense of self, we're slower and weaker than beings that aren't "handicapped" in this way.

What about emotions - what role do they play in creation and survival as a species? Would we be better off without them?

1 week 5 days ago #118994 by Shargrol
I personally don't think a sense of self is required for intelligence. I think there is lots of bodily-wired intelligence (athletic performance, artistic performance) that doesn't rely on a sense of self, yet involves learning, training, development. Seems like it makes sense to call it intelligence. I ultimately feel fine calling fish intelligent and trees intelligent and AI intelligent. 

I think for all practical purposes people conflate self and sentience, and it might not be obvious at first, but eventually it's revealed to be a circular definition ("one has a sense of self... if they are sentient" and "being sentient means... having a sense of self"). I actually don't think there IS a singular "sense of self" even in humans, but rather lots of senses of self and periods of no sense of self. The closest thing to a core self is the "I AM", but that too comes and goes. The "sense of self" is a belief, like "I have a name" - but really I don't HAVE a name even though I use one. :)

I think emotions are like smells... usually useful information, sometimes painful, but better to have than not. Emotions are very fast packets of information - slower than sensations but faster than thoughts - giving just enough intel to make fairly fast adjustments while we're waiting for thoughts to fully occur. Of course, performance-robbing mental anguish is the flip side, so it isn't always adaptive to have emotions.

Even in AI, there is going to be higher-level processing of sensations into something quasi-emotion-like. Packets of condensed information, sort of like averages of a bunch of inputs, to create something like directional urges... just like emotions in humans.
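
To make the "lumping into urges" idea a bit more concrete, here's a toy sketch in Python - entirely my own illustration, not anything from the book or from a real AI system, and the channel names are made up: a fast, noisy stream of inputs gets pooled into a small, slower signal that could bias behavior.

```python
# Toy sketch only: condense a fast sensor stream into a compact "urge"
# vector - loosely what an emotion-like packet of information might be.
import numpy as np

def condense_to_urge(raw_inputs: np.ndarray) -> np.ndarray:
    """Pool many fast raw readings into one slow, directional signal."""
    recent = raw_inputs[:, -10:]   # only the recent past matters
    pooled = recent.mean(axis=1)   # average away moment-to-moment noise
    return np.tanh(pooled)         # squash into a bounded "push" per channel

# Fake stream: 3 channels (say, threat / reward / effort), 100 timesteps
readings = np.random.randn(3, 100)
print(condense_to_urge(readings))  # e.g. [-0.12  0.31  0.05] - a directional bias
```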

I notice lots of times in my life where my body/mind behaves Rorschach-like. It does its thing and "I" really am not involved.

1 week 2 days ago - 1 week 2 days ago #118999 by Chris Marti

 Even in AI, there is going to be higher-level processing of sensations into something quasi-emotion-like. Packets of condensed information, sort of like averages of a bunch of inputs, to create something like directional urges... just like emotions in humans.

Can you cite any articles or books on this? I'm not sure I understand the difference between human and machine information processing enough to get to this point. Truth be told, I'm not sure we can say with confidence that human emotions are analogous to machine-like information processing. If we understood how the human brain/mind works at the most basic levels maybe... but we don't.

1 week 1 day ago - 1 week 1 day ago #119002 by Shargrol
Unfortunately, I don't have a good source. I pulled that out of a conversation on AI automobile driving, maybe on Lex's podcast? But the basic idea is that the cars have access to very fast raw data, and the AI learning involves intermediate steps of lumping that raw data together for higher and higher processing (and there might be multiple levels of that lumping). The interesting thing is that it's kind of impossible to analyze "how" this lumping is occurring, only the predictiveness of the outputs --- in other words, whether it works well or doesn't. You can't really dive into the code and re-program it, because it isn't that kind of linear/narrative code. If you want to improve the outcome, you run more training loops... or you scrap the whole thing, re-run all the training loops, and build a completely new kernel.
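
If it helps, here's roughly what I mean as a tiny toy sketch (my own illustration, not from the podcast and nothing like a real driving stack): a handful of numeric weights get nudged over many training loops, and the only thing you can really evaluate at the end is whether the predictions are any good - the weights themselves don't read like rules.

```python
# Toy sketch: "learning" as repeated small adjustments to opaque numbers,
# judged only by how well the output predicts.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))                 # raw "sensor" data
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(float)   # the pattern to be learned

w = rng.standard_normal(4) * 0.01                 # the learned "kernel": just numbers
for _ in range(500):                              # more training loops -> better fit
    pred = 1 / (1 + np.exp(-X @ w))               # current best guess
    w -= 0.1 * X.T @ (pred - y) / len(y)          # nudge the numbers; no explicit rules

print("accuracy:", ((pred > 0.5) == y).mean())    # the only thing you can really judge
print("weights:", w)                              # inspectable, but not a narrative
```

The point being: if the accuracy isn't good enough, you don't hand-edit the weights - you run more loops, or start over and train a new set.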

It struck me as very similar to human development. A toddler's body-mind doesn't know how to walk, but it's only through the iterative process of trial and error and the positive reward/rewiring that occurs over time that the body-mind "knows" how to walk. But if you said "how do you know when to fire your psoas muscle to put a slight forward lean to the spine when lifting your leg so that you fall forward and land on your next foot" --- there is nothing conscious about that. It's buried down in lower levels of somatic learning. So very similar, I think. 

This also relates to one of my favorite books, "The Inner Game of Tennis" -- which basically says there are two Selves. Self 2 is the entire incredible body that learns and adapts. Self 1 is the intentional self that provides attention... but which sometimes incorrectly assumes responsibility for Self 2. Self 1 thinks it can berate Self 2 into performing better, but Self 2 just needs time and repetition. It's a really great book, and it covers all the sub-games that Self 1 plays on the court (trying to win, trying to look good, trying to be nice, etc.), and its conclusion is that honoring Self 2 in a non-identifying way is essentially spirituality and a spiritual practice.

The Inner Game of Tennis: The Classic Guide to the Mental Side of Peak Performance by Timothy Gallwey – The Rabbit Hole (blas.com)

Anyway, this is also where I have a problem with the vampire's over-the-top slamming of self-consciousness (Self 1 in the above paragraph). Self 1 isn't worthless, it has a value. But it's also clear that Ego/Self 1 isn't everything.

In pondering the book some more, I increasingly think the answer is that for every system of knowing, you can't observe the premises... because knowing is essentially a lumping/reducing of data. Like we were discussing earlier, nothing exists without consciousness, but what exists in the domain of consciousness is... consciousness. There is no accessing the-thing-that-creates-consciousness with consciousness. Similarly, there is sense-of-self experience, but the-thing-that-creates-the-sense-of-self can't be accessed with self-consciousness. Or to go in the other direction, there is a sense of balance, but the-thing-that-creates-a-sense-of-balance can't be accessed with a sense of balance.

It's sort of like each "knowing" paradigm is a tautology... and now I'm getting some deja vu sense that Gödel's incompleteness theorem probably says all of this more simply and elegantly.  ???

1 week 1 day ago #119003 by Chris Marti
Thanks for all the detail!

Still and yet, I'm not sure it's appropriate to compare machine learning and AI to human information processing. I think human beings have capabilities, in whichever "self" we have, that are vastly beyond what machines can currently do. I constantly do the machine-to-human comparison myself, of course, but then I step back and remember how narrow machine learning is and how far away true AGI is. Machines are really, really good and fast at doing specific tasks. Human beings are really, really good at doing lots and lots of things simultaneously (learning to walk, learning to talk, learning to read, learning to socialize, and so on, all at the same time) and at becoming wide-scope, common-sense learners and doers. And we really have no idea at all how the human brain does what it does.

But I'm not an expert, as you know, so take these comments with a grain of salt. I find myself alternately being on the "machines are getting closer and will one day be capable of what humans do" side of the fence and "machine learning and AI in their present form are functionally useful but not ultimately capable of human-level consciousness" on the other side.

1 week 4 hours ago #119014 by Kacchapa
https://www.youtube.com/watch?v=tLchieBLgZ4

An interview with an Iranian-American AI researcher who is also a Buddhist meditation teacher 
