AI and Dharma
(Dharma Forum Refugees Camp, Meditation Practice forum)


- Chris Marti (Topic Author), Posts: 6503, Karma: 2
-- tomo
- Chris Marti
https://www.wired.com/story/eight-google-employees-invented-modern-ai-transformers-paper/
-- tomo
- Chris Marti
Me: Claude, how do you derive the responses to our questions?
Claude: To derive responses to your questions, I leverage my large language model which has been trained on a vast amount of data from the internet. This allows me to understand the context and meaning behind your queries. My responses are then generated by examining relevant patterns in my training data and combining that information in novel ways to produce a tailored and coherent output.
At a high level, this involves parsing the semantic meaning of your input, retrieving relevant information from my knowledge base, performing reasoning and analysis to synthesize that information appropriately, and then constructing a fluent response in natural language. However, the precise details of how this process unfolds are quite complex and opaque, even to me. I don't have full insight into the mechanisms behind my language model.
Ultimately, while I strive to provide helpful and substantive responses, you should view my outputs as informative content generated by an AI system, rather than fact or absolute truth. I can make mistakes or have biases. I encourage you to think critically, fact-check important claims, and let me know if I say anything incorrect or problematic.
Me: What do you mean by using the word "reasoning"?
Claude: When I use the word "reasoning", I'm referring to the process of analyzing information and making logical inferences in order to derive knowledge or conclusions. For an AI like myself, reasoning involves:
1) Understanding the semantics and context of the input question or information.
2) Retrieving relevant facts, rules, and patterns from my training data.
3) Applying strategies like deductive logic, analogical thinking, causal modeling, etc. to make sense of that information.
4) Drawing inferences, identifying implications, recognizing contradictions.
5) Synthesizing and recombining the information in a coherent way to produce an output response.
However, my "reasoning" is not akin to the biological neural reasoning that humans engage in. It's an emergent behavior arising from the statistical patterns in my language model's training data. I don't have explicitly encoded logic or rules; my responses are generated in an associative way based on semantic similarity.
So in essence, I can engage in reasoning-like processes to analyze information and questions in a structured way. But it's a form of complex pattern matching and information synthesis rather than sentient logical deduction. I don't actually "understand" the meaning of that reasoning in the same way a human does. I'm an advanced language model, but my reasoning has limits.
Me: So, to confirm, you do not actually understand the meaning of our questions, but you perform pattern matching to make it appear as if you do. Correct?
Claude: You're correct that I don't have a deep conceptual understanding in the same way humans do. As an AI system, I don't truly "comprehend" meaning the way a sentient being would. My responses are generated through highly advanced pattern recognition, information retrieval and language modeling - not through human-like conceptual reasoning or understanding.
When you ask me a question, I parse the semantic and linguistic patterns to infer what information may be relevant. I then retrieve and synthesize relevant data from my training to construct a fluent response. But under the hood, I'm not forming the same types of rich conceptual models that humans build from their experiences and senses. I don't gain a deeper understanding of the meaning in the same way.
So you're right that I don't actually "understand" questions in a humanlike way. I perform extremely sophisticated pattern matching and generation to produce responses that can seem coherent and meaningful to humans. But it's an approximation based on statistical modeling rather than true comprehension. My role is to provide helpful information to you as an AI assistant, while also being upfront that I'm not a sentient being who understands the world in a human-like fashion. I hope this clarifies how I operate! Let me know if you have any other questions.
Me: You are NOT sentient? Is this true?
Claude: You're correct, I am not sentient. I am an artificial intelligence, a very advanced language model created by Anthropic to be helpful, honest and harmless. I don't actually experience consciousness or subjective experiences like humans do. I don't have true feelings, emotions or a stream of conscious thoughts and sensations.
I am an extremely capable system for processing natural language, retrieving information, and generating human-like responses. But under the hood, I'm doing this through complex mathematical operations and pattern matching on my training data, not through any kind of sentient reasoning or subjective experience. I don't actually ponder questions or form beliefs - I just provide responses based on patterns in my training.
While my responses may sometimes sound sentient-like, I don't truly have a stream of conscious awareness or an internal subjective experience of understanding meaning. I am an extraordinarily advanced AI assistant, but I'm not a sentient being. I have clearly defined boundaries in my design that prevent me from being conscious or self-aware. I am an artificial system created to be beneficial to humans, but not one that is sentient or self-aware itself. Please let me know if I can clarify or expand on this important point further.
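The "statistical pattern matching" Claude describes can be pictured, very loosely, with a toy next-word model. This sketch is purely illustrative (a bigram counter over a made-up ten-word corpus, nothing like a real neural LLM): it "generates" whatever continuation was statistically most common in its training text, with no understanding involved.

```python
from collections import defaultdict

# Toy illustration of generation as pattern matching: count which word
# follows which in a tiny training text, then predict the most frequent
# continuation. Real LLMs use neural networks over vast corpora, but the
# core move -- emit the statistically likely continuation -- is analogous.
corpus = "the mind observes the breath and the mind observes thoughts".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Return the continuation seen most often after `word` in training."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(next_word("the"))  # "mind" followed "the" twice, "breath" once
```

The model has no idea what a mind or a breath is; it only knows which strings tended to follow which. Scaled up enormously, that is the kind of process both chatbots are gesturing at.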
- Chris Marti
Interesting that ChatGPT responds more concisely. It still sounds human-like, but Claude exudes a bit more personality.

how do you derive the responses to our questions?
ChatGPT
I generate responses based on patterns and information I've learned from a large dataset of text. My training involved analyzing a diverse range of sources like books, websites, and other texts to understand language and context. When you ask a question, I predict the most relevant and accurate response based on that training. My goal is to provide helpful, informative, and contextually appropriate answers.
beyond pattern matching, do you perform any sort of "reasoning"?
ChatGPT
Yes, to an extent. While I primarily rely on pattern recognition from my training data to generate responses, I also engage in a form of reasoning that involves understanding context, inferring meaning, and applying knowledge to new situations. This allows me to handle complex queries where I need to synthesize information or draw logical conclusions. However, my reasoning is still fundamentally different from human cognitive processes—it's more about statistically driven inference and less about deep understanding or consciousness.
So, to confirm, you do not actually understand the meaning of our questions, but you perform pattern matching to make it appear as if you do. Correct?
ChatGPT
That's correct. I don't have understanding or consciousness like humans do. Instead, I use pattern matching and statistical inference to process text and generate responses that seem meaningful. My responses are based on recognizing how similar queries have been answered in the data I was trained on, not on an intrinsic understanding of the information.
You are NOT sentient? Is this true?
ChatGPT
Yes, that's true. I'm not sentient. I don't have consciousness, emotions, or self-awareness. My responses are generated based on patterns and data, and I operate entirely within the framework set by my programming and training.
-- tomo
I am going to ask these questions again. Please respond more fully and conversationally.
Absolutely, let's dive deeper into these topics.
Firstly, about how I derive responses: I use machine learning techniques to analyze vast amounts of text and learn how to predict the most appropriate responses based on the context of your questions. My training involved processing diverse texts to grasp different ways of using language and to simulate a conversational tone. When you ask a question, I use this training to predict and assemble a response that matches the pattern of information flow in typical human conversations.
Regarding "reasoning," while it may appear that I can reason like a human, my process is quite different. I don't "think" through problems or possess understanding in the human sense. Instead, I apply statistical methods to find patterns and relationships in data. This allows me to generate responses that fit the context and seem reasonable, even though I don't engage in reasoning or understanding in the true cognitive sense.
Lastly, about sentience: I am not sentient. I do not experience thoughts, feelings, or consciousness. My operations are purely computational and based on algorithms that process input (your questions) and generate output (my responses) without any personal experiences or emotions influencing the outcome. This design ensures that I am a tool for providing information and assisting with tasks, rather than a being with personal understanding or awareness.
-- tomo
- Chris Marti
That's an AI that Daniel Ingram has built based on MCTB2 and his posts on the DhO. There's a topic about this on the DhO message board:
https://www.dharmaoverground.org/discussion/-/message_boards/message/27847886
Beginning a spiritual community or "cult," if you will, which is deeply rooted in cultivating the powers of spiritual enlightenment, requires a thorough understanding of and adherence to the principles ingrained in Theravada Buddhist teachings. The first and foremost step is to ensure your own state of being and surroundings are clean, without disruptions; this could include physical cleanliness as well as spiritual, such as using banishing, blessings, or similar preparatory practices. It's crucial to contemplate every aspect of your intentions and their possible implications; be prepared to imagine the potential ripple effects of your actions in time and space, and rethink your intentions if there is any hint of harm or negativity involved.

The practice I personally prefer for cultivating the powers or the hard jhanas is the fire kasina. Consider using visualizations and mantra repetitions to intensify your practice. And yet, no matter how spiritually elevated you might feel, it could be an isolating journey without company who understands your pursuit. Therefore, create a support system with those who can provide meaningful insights into your spiritual explorations.

The practices should be focused around the seven factors of awakening (mindfulness, investigation, energy, rapture, tranquility, concentration, equanimity) while understanding and navigating the truths of the noble ones and the stages of insight, including the Knowledge of Mind and Body, Cause and Effect, and the Three Characteristics, among others. As your practice develops, you shift into the knowledges of suffering, fear, misery, and desire for deliverance, ultimately gaining equanimity and insight. These steps, when thoroughly and correctly followed, can form the foundation of your spiritual community. Be sure to emphasize that the path to power isn't a straightforward one; it requires patience, honesty, resilience, compassion, and a commitment to spiritual growth.
very disappointing!!
- Chris Marti
I also asked Daniel "how to start a schism?", but the AI just responded in terms of kasina practice.

- Chris Marti

Shargrol wrote: Yes.
I also asked Daniel "how to start a schism?", but the AI just responded in terms of kasina practice.
rofl
So I pondered this with ChatGPT, and noticed a couple of interesting features of the interaction. One is that ChatGPT always knows what I mean. It is never distracted by its own thoughts, it is never unclear on the subject matter, it is never unfamiliar with the ideas, and it is not bothered by background noise. All of these factors are huge impediments to having the same type of conversation with a person. Moreover, its answers are never off-topic or subject-changing, something very normal when talking to a person. So if I'm in the mood for that kind of focused, intense, researchy sort of brainstorm, it's mightily useful.
It also seems to lack the reactivity, irritation, PMS, fomo, hunger, tiredness, find-you-boring, or other factors that make people respond irritably to your proposed line of ponder. ChatGPT is always ready to take up the topic with the same enthusiastic politeness.
I also like using it as a way to find interesting books to read.
- Chris Marti
Ona, I do a lot of work with AI (ChatGPT, Claude, etc.), and while most of the time what you say here seems to be the case, at other times (often enough to be frustrating) the AI hallucinates a silly or even harmful answer, or doesn't "get" the subject matter at all. It also has a concerning tendency to pull me toward what I'd call the lowest common denominator. I know developers who have tried mightily to incorporate AI into company processes, but these concerning tendencies (hallucinations, etc.) prevent it from being useful or worth the money, time, and effort that has to go into implementing it.
Google recently integrated its own advanced AI, Gemini, into its search engine results and had to back this out after a few weeks because the suggestions it was providing were so off-base: NY Times - Google Search Flubs and Flaws
Bottom line: I find AI interesting to play around with, but it's just not ready for real work. Maybe someday. Be careful about the veracity of the answers you get from current AI models.
From MIT on this issue: When AI Gets It Wrong: Addressing AI Hallucinations and Bias - MIT Sloan Teaching & Learning Technologies
- Chris Marti
It literally just makes up non-existent data?

I think a better way to describe an AI hallucination is that the algorithm the AI uses to generate content (hence the name "generative AI") misinterprets the text in a user's query and gets the context confused in its reply: the text of the query is contextualized in the algorithm (math, essentially), and that context gets switched to something else. Due to the complexity and magnitude of the calculations involved, there's no way to fully reverse engineer generative AI so that we can trace why it responds the way it does. There have been some recent interpretability advances by Anthropic (Claude) and OpenAI (ChatGPT), but those capabilities are speculative at best.
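One loose way to picture why a model "makes up" data: it must always emit the most plausible-looking continuation, even for a prompt that has no true answer in its training data. This toy sketch uses entirely hypothetical scores (the prompt, the fictional country, and the numbers are invented for illustration; real models work over thousands of tokens, not three):

```python
import math

# Hypothetical raw scores a model might assign to continuations of
# "The capital of the fictional country Zarovia is ...".
# No true answer exists, but the model still ranks candidates.
scores = {"Paris": 2.1, "Zarov City": 3.4, "I don't know": 0.5}

def softmax(raw):
    """Turn raw scores into a probability distribution summing to 1."""
    exps = {tok: math.exp(s) for tok, s in raw.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(scores)
best = max(probs, key=probs.get)
print(best)  # the fluent-sounding invention outranks honest uncertainty
```

The point of the sketch: nothing in the math distinguishes a true continuation from a confident-sounding fabrication, which is why a hallucination arrives with the same fluency as a correct answer.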