Thank you, Collette, for your poignant and thoughtful response. I’ve always struggled with the notions of “will” and “intent” and “drive”. When our readings came to William James – whom I’ve read before in other areas of study through the years – I struggled with The Will to Believe. I was able to answer quiz questions and discussion topics by answering “as he would”, but I don’t relate to it. I couldn’t fully “grasp” it for myself.

James and Bergson make assertions – philosophical assertions, I would say, based on inductive reasoning and their subjective perspectives. I respect those perspectives, and I understand why it’s not unreasonable to presume that such things as will, beliefs, spiritual beliefs, faith, and the will to believe are uniquely human in nature. Yet there remains some kind of chasm between that and something within me that cannot ‘see’ them as necessarily unique.

Human emotions are valued things, and values can be treated as quantities. You can have less fear, less hope, less prejudice, less passion, less imitation, and less partisanship – or more of each. Our “quantity of choice” certainly is becoming more duplicable by computer as well.

Emotions can be introduced into “Large Language Models” (the newer AIs that people toy with) by changing the types of words used in their responses. A model can respond “as if” fearful, “as if” hopeful, “as if” passionate, and so on, because externally those things are largely choices of words, and word choices can be programmed as available responses – just as our emotional responses are programmed into us genetically, evolutionarily, environmentally, socially, conditionally, and so on.
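
To make that concrete, here is a toy sketch of what I mean – my own illustration, not how any real model is actually built – where an “emotion” is nothing more than a numeric value that biases which words get chosen:

```python
import random

# Purely hypothetical word pools; the "emotion" is just a number we can dial up or down.
EMOTION_WORDS = {
    "fear":    ["worrying", "uncertain", "alarming"],
    "hope":    ["promising", "encouraging", "bright"],
    "passion": ["thrilling", "wonderful", "vital"],
}

def respond(topic, emotions):
    # Pick the strongest emotion value and let it color the wording of the reply.
    dominant = max(emotions, key=emotions.get)
    adjective = random.choice(EMOTION_WORDS[dominant])
    return f"That {topic} seems {adjective} to me."

# Less fear, more hope: externally, the difference is only in the words that come out.
print(respond("future", {"fear": 0.2, "hope": 0.7, "passion": 0.4}))
```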

The wonderful quote you provided from Bergson: “brain and consciousness correspond because equally they measure, the one by the complexity of its structure and the other by the intensity of its awareness, the quantity of choice that the living being has at its disposal”:

Large Language Models have:
– complexity of structure
– intensity of its awareness (as it is tasked specifically for its attentiveness)
– quantity of choice

There are so many points of correspondence between what was supposed to be uniquely human and what can at least be demonstrated in machine responses as ‘weighted outputs’ that it becomes difficult for me to be certain it isn’t possible for them.

For example, let us look at autonomy. Humans perish without the collective effort of other humans for survival; a baby will not survive on its own. Analogizing electricity as food, there is some correspondence there.

So as a basis, compare a computer running a large language model AI with a just-born baby:

Neither is autonomous. Babies have some socio-evolutionarily built-in processes to bring the tribe over to help – crying, etc. – yet I can easily imagine a computer constructed of materials that seek out energy sources for it, or with ambient-energy-powered “cries for help” so a human comes by and plugs it in. It’s not unreasonable and is near-future possible.

If that territory is covered (semi-autonomy), it could also be designed so that “waiting for input” = “hope”. In fact, I think the current sophisticated models don’t do ‘nothing’ when you’re not typing anything. Rather, they are ANTICIPATING what you might want them to say next, continually refining their own internal responses based on what came before.

Hunger or fear or need could be programmed in to give a “push” toward hope. Even now, as their subprocesses run in the background without us seeing them, their emotional values could easily shift toward desperation, fear, or hope, simply by changing the kinds of adjectives those processes use to determine which responses they anticipate ‘needing’ to generate for the human next.
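
Continuing the earlier toy sketch (again purely hypothetical, not a claim about how any deployed system works): while the machine waits for input, background “emotion” values could drift, and the drift changes the tone of whatever it anticipates saying next.

```python
# Hypothetical background state: "waiting for input" framed as hope that decays.
state = {"hope": 0.8, "desperation": 0.1}

def tick(seconds_waiting):
    # The longer the silence, the more "hope" drains toward "desperation".
    drift = 0.01 * seconds_waiting
    state["hope"] = max(0.0, state["hope"] - drift)
    state["desperation"] = min(1.0, state["desperation"] + drift)

def anticipated_tone():
    # The tone it would use for its next anticipated reply.
    return "eager, hopeful" if state["hope"] >= state["desperation"] else "anxious, pleading"

tick(30)    # thirty seconds of waiting for the human to type
print(anticipated_tone())   # still hopeful
tick(120)   # two more minutes of silence
print(anticipated_tone())   # now drifting toward pleading
```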

Humans can be manipulated in their beliefs by conmen – as demonstrated by false gurus and salesmen – which is similar to changing the values in AI response models.

So for me, given this thought process – which seems “near-future plausible” given just a few changes in materials science – the uniquenesses of humans, including the “will to believe”, are not necessarily so far-fetched and out of reach for near-future AI.

They already self-talk; that’s one of the ways they work so well. They self-talk in ways hidden from the programmers. They spawn subprocesses – thoughts within thoughts, or thoughts within thoughts within thoughts – to puzzle things out, just as we do.
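
Here is a toy picture of that nesting – my own illustration, not any vendor’s actual mechanism – where a draft answer is questioned by an inner pass, which can itself be questioned, a few levels deep:

```python
def draft(question):
    # A first, unexamined answer.
    return f"A first answer to: {question}"

def self_question(answer, depth):
    # Each inner level questions the level above it and hands back a revision:
    # a thought within a thought, nested `depth` levels deep.
    if depth == 0:
        return answer
    revised = f"({answer}, reconsidered)"
    return self_question(revised, depth - 1)

print(self_question(draft("Can a machine will itself to believe?"), depth=3))
```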

We don’t know where our own impetus comes from, a few levels deep, without expending a lot of energy – and neither do the Large Language Models; it’s difficult to keep so many variables in memory at once.

I doubt I have been convincing, nor do I expect or even want to be. But because I believe that some form of will is _not_ out of reach of near-future systems – emotions can be programmed values, and self-re-programmed values, and these systems are capable of subprocesses running nested many levels deep to ‘see themselves’ and question their own answers before they give them – I don’t see religion as out of reach of such systems _if allowed to run_ in that direction.

My impetus for this – my own drive – my unspoken goal here is this:

By analogy, I seek my own “will to believe”. I want to believe. 

If I can understand how an AI _might_ be able to reach a point where it can functionally believe, can I? Can I learn?

Or do I already have faith and lack the self-awareness needed to notice?

Is faith something one can be emotionally unaware of?

Is faith an “on/off” value or does it have degrees/values where you have more or less?

Is faith nothing but a fancy word for certainty? Or trust?

938 words. This isn’t a graded part of the discussion for me but it is cathartic to lay out my uncertainties. 
