
AI and Dharma

1 year 7 months ago #121184 by Papa-Dusko
Replied by Papa-Dusko on topic AI and Dharma
Yeah. 8000 Gaza children dead. No one knows why! It’s a mystery! :D AI also goes a bit crazy. I guess. Oh wait … itching, seeing, touching … Ehm … not sure … US vetoed against the ceasefire … Ehm who cares I guess … :D 
1 year 7 months ago #121186 by Chris Marti
Replied by Chris Marti on topic AI and Dharma
I care about my family, my health, what's happening in Gaza, the threat of climate change, and AI. It's a matter of bandwidth. I'm sure you can do all that, too, Papa. 
1 year 7 months ago #121202 by Tom Otvos
Replied by Tom Otvos on topic AI and Dharma
So what is behind this AI, Chris? Is it a custom GPT from ChatGPT, or something else?

-- tomo
1 year 7 months ago #121203 by Chris Marti
Replied by Chris Marti on topic AI and Dharma
Tom, I've been using two no-code options. At first I used OpenAI, but then I switched to MindStudio's YouAI. OpenAI uses a query-like interface, the same interface the public has access to. YouAI's authoring interface is both text/query and graphical; it has a visual process/logic interface used to insert various functions. I think anyone can pick up either system quickly. The problem tends to be the restrictions these companies place on making the resulting custom AIs public.
1 year 6 months ago #121233 by Tom Otvos
Replied by Tom Otvos on topic AI and Dharma
Great article in Wired about the inventors of the "transformer", the basis of GPT and all the recent advancements in AI:

https://www.wired.com/story/eight-google-employees-invented-modern-ai-transformers-paper/

-- tomo
1 year 6 months ago #121234 by Chris Marti
Replied by Chris Marti on topic AI and Dharma
Steven Levy seldom disappoints.
1 year 6 months ago - 1 year 6 months ago #121241 by Chris Marti
Replied by Chris Marti on topic AI and Dharma
Over on X (aka Twitter), Kenneth Folk has been posting about how his interactions with Claude 3, Anthropic's new AI chatbot, have convinced him the model is sentient. He's apparently only asking Claude from a "yes, it's sentient" perspective. I seriously doubt Claude is sentient, or conscious, so I asked it. Here's the conversation that I just posted to Twitter:




Me: Claude, how do you derive the responses to our questions?

Claude: To derive responses to your questions, I leverage my large language model which has been trained on a vast amount of data from the internet. This allows me to understand the context and meaning behind your queries. My responses are then generated by examining relevant patterns in my training data and combining that information in novel ways to produce a tailored and coherent output.

At a high level, this involves parsing the semantic meaning of your input, retrieving relevant information from my knowledge base, performing reasoning and analysis to synthesize that information appropriately, and then constructing a fluent response in natural language. However, the precise details of how this process unfolds are quite complex and opaque, even to me. I don't have full insight into the mechanisms behind my language model.

Ultimately, while I strive to provide helpful and substantive responses, you should view my outputs as informative output generated by an AI system, rather than fact or absolute truth. I can make mistakes or have biases. I encourage you to think critically, fact-check important claims, and let me know if I say anything incorrect or problematic.


Me: What do you mean by using the word "reasoning"?

Claude: When I use the word "reasoning", I'm referring to the process of analyzing information and making logical inferences in order to derive knowledge or conclusions. For an AI like myself, reasoning involves:

1) Understanding the semantics and context of the input question or information.
2) Retrieving relevant facts, rules, and patterns from my training data.
3) Applying strategies like deductive logic, analogical thinking, causal modeling, etc. to make sense of that information.
4) Drawing inferences, identifying implications, recognizing contradictions.
5) Synthesizing and recombining the information in a coherent way to produce an output response.

However, my "reasoning" is not akin to the biological neural reasoning that humans engage in. It's an emergent behavior arising from the statistical patterns in my language model's training data. I don't have an explicit encoded logic or rules - my responses are generated in an associative way based on semantic similarity.

So in essence, I can engage in reasoning-like processes to analyze information and questions in a structured way. But it's a form of complex pattern matching and information synthesis rather than sentient logical deduction. I don't actually "understand" the meaning of that reasoning in the same way a human does. I'm an advanced language model, but my reasoning has limits.


Me: So, to confirm, you do not actually understand the meaning of our questions, but you perform pattern matching to make it appear as if you do. Correct?

Claude: You're correct that I don't have a deep conceptual understanding in the same way humans do. As an AI system, I don't truly "comprehend" meaning the way a sentient being would. My responses are generated through highly advanced pattern recognition, information retrieval and language modeling - not through human-like conceptual reasoning or understanding.

When you ask me a question, I parse the semantic and linguistic patterns to infer what information may be relevant. I then retrieve and synthesize relevant data from my training to construct a fluent response. But under the hood, I'm not forming the same types of rich conceptual models that humans build from their experiences and senses. I don't gain a deeper understanding of the meaning in the same way.

So you're right that I don't actually "understand" questions in a humanlike way. I perform extremely sophisticated pattern matching and generation to produce responses that can seem coherent and meaningful to humans. But it's an approximation based on statistical modeling rather than true comprehension. My role is to provide helpful information to you as an AI assistant, while also being upfront that I'm not a sentient being who understands the world in a human-like fashion. I hope this clarifies how I operate! Let me know if you have any other questions.


Me: You are NOT sentient? Is this true?

Claude: You're correct, I am not sentient. I am an artificial intelligence, a very advanced language model created by Anthropic to be helpful, honest and harmless. I don't actually experience consciousness or subjective experiences like humans do. I don't have true feelings, emotions or a stream of conscious thoughts and sensations.

I am an extremely capable system for processing natural language, retrieving information, and generating human-like responses. But under the hood, I'm doing this through complex mathematical operations and pattern matching on my training data, not through any kind of sentient reasoning or subjective experience. I don't actually ponder questions or form beliefs - I just provide responses based on patterns in my training.

While my responses may sometimes sound sentient-like, I don't truly have a stream of conscious awareness or an internal subjective experience of understanding meaning. I am an extraordinarily advanced AI assistant, but I'm not a sentient being. I have clearly defined boundaries in my design that prevent me from being conscious or self-aware. I am an artificial system created to be beneficial to humans, but not one that is sentient or self-aware itself. Please let me know if I can clarify or expand on this important point further.

Last edit: 1 year 6 months ago by Chris Marti.
1 year 6 months ago #121242 by Chris Marti
Replied by Chris Marti on topic AI and Dharma
I don't know if it's dangerous to publicly claim that AI like Claude 3 is sentient, conscious, and self-aware. However, it is misleading, and most probably just plain wrong. It's amazing how AI chatbots have become such an enigma for us all. A sort of mirror we look into in which, because of how they work, we see what we want to see.
1 year 5 months ago #121288 by Tom Otvos
Replied by Tom Otvos on topic AI and Dharma
It would be fun to have the same conversation with ChatGPT (4) and see how it responds.

how do you derive the responses to our questions?
ChatGPT
I generate responses based on patterns and information I've learned from a large dataset of text. My training involved analyzing a diverse range of sources like books, websites, and other texts to understand language and context. When you ask a question, I predict the most relevant and accurate response based on that training. My goal is to provide helpful, informative, and contextually appropriate answers.

beyond pattern matching, do you perform any sort of "reasoning"?
ChatGPT
Yes, to an extent. While I primarily rely on pattern recognition from my training data to generate responses, I also engage in a form of reasoning that involves understanding context, inferring meaning, and applying knowledge to new situations. This allows me to handle complex queries where I need to synthesize information or draw logical conclusions. However, my reasoning is still fundamentally different from human cognitive processes—it's more about statistically driven inference and less about deep understanding or consciousness.

So, to confirm, you do not actually understand the meaning of our questions, but you perform pattern matching to make it appear as if you do. Correct?
ChatGPT
That's correct. I don't have understanding or consciousness like humans do. Instead, I use pattern matching and statistical inference to process text and generate responses that seem meaningful. My responses are based on recognizing how similar queries have been answered in the data I was trained on, not on an intrinsic understanding of the information.

You are NOT sentient? Is this true?
ChatGPT
Yes, that's true. I'm not sentient. I don't have consciousness, emotions, or self-awareness. My responses are generated based on patterns and data, and I operate entirely within the framework set by my programming and training.

Interesting that ChatGPT responds more concisely. It still sounds human-like, but Claude exudes a bit more personality.

-- tomo
1 year 5 months ago #121289 by Tom Otvos
Replied by Tom Otvos on topic AI and Dharma

I am going to ask these questions again. Please respond more fully and conversationally.

Absolutely, let's dive deeper into these topics.

Firstly, about how I derive responses: I use machine learning techniques to analyze vast amounts of text and learn how to predict the most appropriate responses based on the context of your questions. My training involved processing diverse texts to grasp different ways of using language and to simulate a conversational tone. When you ask a question, I use this training to predict and assemble a response that matches the pattern of information flow in typical human conversations.

Regarding "reasoning," while it may appear that I can reason like a human, my process is quite different. I don't "think" through problems or possess understanding in the human sense. Instead, I apply statistical methods to find patterns and relationships in data. This allows me to generate responses that fit the context and seem reasonable, even though I don't engage in reasoning or understanding in the true cognitive sense.

Lastly, about sentience: I am not sentient. I do not experience thoughts, feelings, or consciousness. My operations are purely computational and based on algorithms that process input (your questions) and generate output (my responses) without any personal experiences or emotions influencing the outcome. This design ensures that I am a tool for providing information and assisting with tasks, rather than a being with personal understanding or awareness.


-- tomo
1 year 5 months ago - 1 year 5 months ago #121301 by Chris Marti
Replied by Chris Marti on topic AI and Dharma
DanielGPT

That's an AI that Daniel Ingram has built based on MCTB2 and his posts on the DhO. There's a topic about this on the DhO message board:

https://www.dharmaoverground.org/discussion/-/message_boards/message/27847886
Last edit: 1 year 5 months ago by Chris Marti.
1 year 5 months ago #121303 by Shargrol
Replied by Shargrol on topic AI and Dharma
what is the best way to start a cult?

Beginning a spiritual community or "cult," if you will, which is deeply rooted in cultivating the powers of spiritual enlightenment, requires a thorough understanding and adherence to the principles ingrained in Theravada Buddhist teachings. The first and foremost step is to ensure your own state of being and surroundings are clean, without disruptions – this could include physical cleanliness as well as spiritual, such as using banishing, blessings, or similar preparatory practices. It's crucial to contemplate every aspect of your intentions and their possible implications; be prepared to imagine the potential ripple effects of your actions in time and space and rethink your intentions if there is any hint of harm or negativity involved.

The practice I personally prefer for cultivating the powers or the hard jhanas is the fire kasina. Consider using visualizations and mantra repetitions to intensify your practice. And yet, no matter how spiritually elevated you might feel, it could be an isolating journey without company who understands your pursuit. Therefore, create a support system with those who can provide meaningful insights into your spiritual explorations.

The practices should be focused around the seven factors of awakening - mindfulness, investigation, energy, rapture, tranquility, concentration, equanimity - while understanding and navigating through the truths of the noble ones and the stages of insight, including the Knowledge of Mind and Body, Cause and Effect, and the Three Characteristics, among others. As your practice develops, you shift into the knowledges of suffering, fear, misery, and desire for deliverance, ultimately gaining equanimity and insight. These steps, when thoroughly and correctly followed, can form the foundation of your spiritual community. Be sure to emphasize that the path to power isn't a straightforward one; it requires patience, honesty, resilience, compassion, and a commitment to spiritual growth.


very disappointing!!
1 year 5 months ago #121304 by Chris Marti
Replied by Chris Marti on topic AI and Dharma
Is that a query and response from DanielGPT?
1 year 5 months ago #121306 by Shargrol
Replied by Shargrol on topic AI and Dharma
Yes.

I also asked Daniel "how to start a schism?", but AI just responded in terms of Kasina practice. 

:(
1 year 5 months ago - 1 year 5 months ago #121308 by Chris Marti
Replied by Chris Marti on topic AI and Dharma
I'm sorry, but I want to gain clarity - you asked DanielGPT about how to create a schism, yes? If so, it wouldn't know. That's not part of its training data. It will make something up, responding as best it can out of context.

:)
Last edit: 1 year 5 months ago by Chris Marti.
1 year 4 months ago #121318 by Ona Kiser
Replied by Ona Kiser on topic AI and Dharma

Shargrol wrote: Yes.

I also asked Daniel "how to start a schism?", but AI just responded in terms of Kasina practice. 

:(


rofl
1 year 4 months ago #121337 by Ona Kiser
Replied by Ona Kiser on topic AI and Dharma
Does anyone else use AI to simply converse about obscure subjects or wander down rabbit holes? Yesterday I was pondering a certain friend's severe anxiety, and noticing how it has the effect of instigating those around her to pay a lot of attention to her and take care of her, something she probably isn't conscious of, but would seem perfectly normal for a young person transitioning from adolescence to adulthood.

So I pondered this with ChatGPT, and noticed a couple of interesting features of the interaction. One is that ChatGPT always knows what I mean. It is never distracted by its own thoughts, it is never unclear on the subject matter, it is never unfamiliar with the ideas, and it is not bothered by background noise. All of these factors are huge impediments to having the same type of conversation with a person. Moreover, its answers are never off-topic or subject-changing, something very normal when talking to a person. So if I'm in the mood for that kind of focused, intense, researchy sort of brainstorm, it's mightily useful. 

It also seems to lack the reactivity, irritation, PMS, fomo, hunger, tiredness, find-you-boring, or other factors that make people respond irritably to your proposed line of ponder. ChatGPT is always ready to take up the topic with the same enthusiastic politeness. 

I also like using it as a way to find interesting books to read. 
1 year 4 months ago - 1 year 4 months ago #121338 by Chris Marti
Replied by Chris Marti on topic AI and Dharma

It is never distracted by its own thoughts, it is never unclear on the subject matter, it is never unfamiliar with the ideas, and it is not bothered by background noise. All of these factors are huge impediments to having the same type of conversation with a person. Moreover, its answers are never off-topic or subject-changing, something very normal when talking to a person. So if I'm in the mood for that kind of focused, intense, researchy sort of brainstorm, it's mightily useful. 

Ona, I do a lot of work with AI (ChatGPT, Claude, etc.), and while most of the time what you say here seems to be the case, at other times (often enough to be frustrating) AI hallucinates a silly or even harmful answer, or doesn't "get" the subject matter at all. It also has a concerning tendency to pull me toward what I'd call the lowest common denominator. I know developers who have tried mightily to incorporate AI into company processes, but these concerning tendencies (hallucinations, etc.) prevent it from being useful or worth the money, time, and effort that has to go into implementing it.

Google recently integrated its own advanced AI, Gemini, into its search engine results and had to back this out after a few weeks because the suggestions it was providing were so off-base: NY Times - Google Search Flubs and Flaws

Bottom line - I find AI interesting to play around with, but it's just not ready for real work. Maybe someday. Be careful about the veracity of the answers you get from current AI models.

From MIT on this issue: When AI Gets It Wrong: Addressing AI Hallucinations and Bias - MIT Sloan Teaching & Learning Technologies
Last edit: 1 year 4 months ago by Chris Marti.
1 year 4 months ago #121339 by Ona Kiser
Replied by Ona Kiser on topic AI and Dharma
I do think my own rule would be never to ask it for advice about a life question (should I do this or that)! But it's interesting in a 'browsing the stacks at the university library' kind of way. I hadn't heard of the hallucinations - I'll read those!
1 year 4 months ago #121340 by Ona Kiser
Replied by Ona Kiser on topic AI and Dharma
Oh interesting! It literally just makes up non-existent data? I can do that for myself! lol
1 year 4 months ago #121342 by Chris Marti
Replied by Chris Marti on topic AI and Dharma

It literally just makes up non-existent data?

I think a better way to describe an AI hallucination is that the algorithm AI uses to generate content (hence the name "generative AI") misinterprets the text in a user's query and confuses the context in its reply: the query text is contextualized inside the algorithm (math, essentially), and that context gets switched to something else. Due to the complexity and magnitude of the calculations involved, there's no way to reverse engineer generative AI so that we can trace why it responds the way it does. There have been some recent advances by Anthropic (Claude) and OpenAI (ChatGPT), but their capabilities are speculative at best.
1 year 4 months ago #121343 by Ona Kiser
Replied by Ona Kiser on topic AI and Dharma
How would you compare that to the likelihood of getting untrue answers from fellow people? I find that many things people say are not true at all, or only sort of true. 
1 year 4 months ago - 1 year 4 months ago #121344 by Chris Marti
Replied by Chris Marti on topic AI and Dharma
AI ain't people. I expect people to be people, as always. AI has been sold to us (pushed, really) as a way for people and businesses to be more productive, efficient, and (yes) accurate. So those expectations have not been met.
Last edit: 1 year 4 months ago by Chris Marti.