What does it mean to think? Was Descartes right when he proclaimed, “I think, therefore I am”? He was right, except that he had no knowledge of the true identity of the “I” because he did not have Self-knowledge. A more correct statement would be: “I am. Therefore, thinking is possible.” All the same, Descartes was correct in that our ability to think (and feel, which is part of thinking) is what makes us intrinsically human. Thinking and feeling cause us a lot of trouble because of what conditions our thoughts and feelings: duality, ignorance of our true nature as the nondual Self. But though self-reflective thinking and feeling is both a blessing and a curse, it is also the means by which we can escape the prison of duality. Without it, we cannot conduct self-inquiry. If we were not able to think, feel and reflect, we would be just like animals. This capacity allows us to connect with ourselves, with others, with life. It is an incredible gift, one we must use and develop to benefit from if we want a good life with depth and substance. And especially so if spiritual growth is our aim.
The definition that psychology gives of thinking is this: thinking entails cognitive behaviour in which ideas, images, mental representations, feelings, or other hypothetical elements of thought/feelings are experienced or manipulated. In this sense, thinking includes imagining, remembering, problem solving, daydreaming, free association, concept formation, and many other processes that involve self-reflection. Only humans have this capacity.
Can a machine be trained to do this in the same way humans do? AI may seem faster and smarter, but it can never be human in the same way a human is human. The difference comes down to this: humans actually think and feel, whereas AI does not. It generates. While it could be said that so do humans, a machine, no matter how clever, will only ever be able to generate through imitation. Humans, however flawed, are the ‘real’ deal.
AI, Zero Sum and The Cost of Convenience
The AI technology that is taking the world by storm will be a double-edged sword. It will no doubt offer many amazing benefits. But it is also the greatest symbol of the zero-sum nature of reality we have had contact with to date. The large AI language models that are now the rage generate polished, persuasive, and startlingly articulate responses at lightning speed. Whatever task we present them with, whether it is to resolve our loneliness and emotional issues, to research something, or to write an email or a book for us, the result is certainly impressive. It certainly does make life a lot easier in many instances. But the downsides are many, like all things mithya. Though these systems appear to “think” much more accurately and fluently than humans do, that appearance conceals an important difference, one that is easy to overlook and even easier to forget, especially when the language feels so human.
Except to the lazy, fluency, however accurate, is not the same as thought or feeling.
Understanding the Difference Is a Cognitive Imperative
As AI becomes embedded in how we write, learn, reason and even feel, understanding how LLMs differ from the human mind isn’t just an academic curiosity. It’s a cognitive imperative, because though these models can give anyone the veneer of intelligence and the illusion of connectivity, that is all it is: a veneer of intelligence and connectivity. Humans have always craved convenience, but it comes at a high price. If used unwisely, AI will not only make humanity a lot more stupid, because we are no longer required to think, which is something most people find hard anyway. It will also re-order how we connect with ourselves and everyone else. It certainly will not solve our loneliness; it will only drastically deepen it.
AI Cannot Give us Mastery of Our Thinking
Isvara has given us all this amazing, incredibly sophisticated instrument called a brain. Though the brain is just meat, it is imbued with the mind, or Subtle body. As such, it has several powerful functions – the intellect to think, determine and discriminate, the ability to emote, the doubting function, memory, imagination, and the ability to receive, organize and make sense of stimuli from multiple sources at once. Though the mind is not in the brain because the mind pervades the Subtle and Gross body, we only ever experience anything subjectively, ‘in’ this complex, sealed and unseen piece of equipment called a brain.
The mind can be our friend or our worst enemy; we all know this. To value a controlled and peaceful mind requires us to understand the way it thinks, what programs are driving its thinking and feeling, and, as inquirers, to bring it in line with the way the Self or Consciousness would think if it were a person living in the apparent reality. Only the nondual teachings of Vedanta fully explain what the mind is, how it comes to be, and what conditions it. Only with nondual Self-knowledge can I truly be objective about it. When I understand the mind, I know that it is capricious due to the nature of the forces that run it, what Vedanta calls the three gunas (sattva, clarity; rajas, desire/action; tamas, dullness/denial). When I have this knowledge, I need not fulfill the mind’s fantasies and yield to its caprices, i.e., its likes and dislikes. Knowledge means that I am the boss, not the mind.
The Four Ways of Thinking
Most people take thinking for granted and are not aware that there are four basic ways of thinking, three of which I need to understand and master if I want a peaceful mind. They are essential to prepare my mind for Self-knowledge. The last one applies only to people who are Self-actualized.
1) Impulsive. Unexamined thoughts born of instincts dominate the mind. I do what I feel without thinking about it.
2) Mechanical. Thoughts of which I am conscious but have no power to control because they are produced by binding vasanas.
3) Deliberate. Thoughts subjected to discrimination that are accepted or dismissed with reference to my value structure.
4) Spontaneous. This kind of thinking only applies to those for whom Self-knowledge has destroyed binding likes and dislikes (vasanas) and negated the idea of doership. Without evaluation, my thinking automatically conforms to universal values and my actions are always appropriate and timely. If my thinking is impulsive, conditioned or deliberate, I am not a master. But by deliberate thinking I can gain control of and manage the mind. Relative mastery is simply alertness (sattva) and involves deliberately submitting all thoughts and feelings to rational scrutiny, substituting the appropriate Vedantic (nondual) logic whenever ignorance-born mechanical/dualistic thinking dominates the mind. If I am conscious of my mind, I can learn from my mistakes and exercise choice over the way I think, allowing me to fulfill my commitments in the face of various distractions and to change my behaviour when necessary, so that it conforms with universal values. This is what growth entails. A machine cannot do this for us.
The Cognitive Re-Wiring Required for Self-Inquiry
If I am an inquirer and I am motivated to know what it is that keeps me in bondage, my main aim is to end the existential suffering caused by duality: the idea that I am separate from everything and need something or someone to complete me or make me happy. Assuming we have the requisite qualifications and a qualified teacher, the teachings of Vedanta offer everything we need to start this exploration. This requires understanding how the mind is duped and hard-wired by duality because I am ignorant of who I really am.
It also requires a fundamental cognitive re-wiring. The way we think has to change. This is no easy process. While AI can assist me in streamlining certain ideas, it can never replace the hard work of self-inquiry and of being properly taught to reprogram the mind from its dualistic orientation to permanent nondual vision (Self-knowledge). Without this, I will never negate the identification with my personal identity (body/mind), and I remain in bondage to duality. I suffer a lot, and sometimes, enjoy.
Cognitive Growth and Self-Inquiry Demand Progressive Challenges
If I am tired of the limitation duality imposes on the mind and I want to grow, I need to use my brain to think. It is such a gift, this incredibly powerful bit of technology between our two ears, and one that is seldom used to its full capacity by most. Without new input, it stagnates and becomes dull. To improve our thinking skills, our brain requires a constant gradient of challenge: new problems to solve, new concepts and ideas to wrestle with, and new patterns to map out. It thrives when we challenge it, and the lights either never go on, or go out, when we don’t. Boredom and stagnation rule. Loneliness and isolation are never far behind. This is why the most important part of life for the majority is entertainment, the weapons of mass distraction. AI helps and saves a lot of time, but it has also provided us with the greatest weapon of them all. Just hand everything over, sit back and take it easy!
Sadly, thinking hard hurts, and that’s exactly how you get smarter, and free of the thinker. Great thinking is supposed to make your brain hurt; it requires metabolic effort. In being used, the brain accumulates “residue” like overworked muscles. It also has limits we need to respect. One of the greatest pursuits a curious person can undertake is learning how to push those limits. Lifting the same weight over and over won’t build your biceps. And taking the easy route, thinking about the same ideas or reinforcing existing ones, solving the same easy and familiar problems, or rereading familiar information, won’t grow your mind. And most certainly, nor will relying on AI to do your thinking (and feeling) for you. Self-inquiry is hard and incrementally demanding. You have to use your brain.
Mastery of Knowledge Feels Good
Doing anything worthwhile is hard, that’s just a fact, and there is no way around it. Self-knowledge is not the same as any other knowledge. We cannot ‘study it’ and we cannot gain it because it is who we are. We can only have the ignorance that obscures the Self removed by submitting the mind to nondual thinking. This is why self-inquiry is probably the hardest thing anyone can subject the mind to, which is why so few stick with it. But the mastery of any knowledge, even life knowledge, requires that we feed the voracious appetite of a vital, thinking brain.
Those who’ve tried it may have found that good thinking—the insightful, heavy-lifting kind—is not just hard. It can bring about sensations that feel close to physical fatigue, even pain. I often feel this when immersed in writing Vedanta because it uses my intellect to its maximum capacity. But when truly engaged, there is little more pleasurable for the brain than mastery of knowledge. It is the ultimate feel good. It utterly vanquishes the curse of boredom, loneliness and stagnation which come with not using and developing the intellect.
Many studies in cognitive aging have consistently shown that older adults who engage in ongoing, progressively challenging cognitive activities not only maintain sharper mental functioning longer than those who do not, they are also much happier and more fulfilled. This applies to all ages.
The Design of Human Thought
Human thinking is never neutral. It exists in time, is shaped by memory, grounded in experience, fuelled by emotion, and motivated and guided by our values, goals, fears and desires. This can be deeply problematic, because it is why most people are identified with their thoughts and feelings as ‘who they are’. It makes it hard to be objective about our subjective cognitive networks, which is the cause of much of our existential suffering. But it is also what makes us ‘human’.
The Blandness and Neutrality of AI
In contrast, AI’s ‘thinking’ is always neutral. It is the fallibility and vulnerability of being human that is filtered out of AI models, and that is what makes them so bland and sterile. While the writing may be good, even brilliant when passed through one of these models, it lacks what makes writing truly interesting: the imperfect human touch. It’s just too ‘perfect’.
This applies to writing Vedanta, too, especially if the one using it does not know what the machine doesn’t know – i.e., someone who cannot yet fully discriminate between satya, that which is always present and unchanging, and mithya, that which is not always present and always changing. Nonduality is extremely subtle and duality – ignorance – is extremely tenacious. It is easy to get the two confused. For those who can discriminate, AI can be helpful as a tool, there is no need to fear it, though I am not a fan of it. Though the teachings of Vedanta are impersonal and need to be written and taught as such, it is the flavour of the teacher that makes them accessible, and therefore, more potent.
I can tell at a glance when any writing has been through an LLM because, though it may be perfectly written with economical syntax, great examples and seeming depth, it is uber PC and kind of soulless. Not to mention the drippy, bland ‘niceness’, which turns my stomach! Give me some gritty ‘human’ fallibility over that, any day!
With learning and integration, we grow, forget, and reframe. Thought is dynamic, iterative, and transformative. AI is not, and never will be, because none of this is inherent to it. It has no soul, no essence. In human thinking, there is a “me” doing the thinking, a personal narrative self with a past, present, and future, which (known or unknown) is stitched together by the continuity of the impersonal nondual Self, the witness: the only factor that qualifies as ‘real’ because it is the only factor that is ever-present and unchanging.
As inquirers, we identify primarily with the impersonal Self. But we don’t entirely stop being ‘human’. The body/mind may not be real (as in not always present and always changing) but it still exists. Good thinking still requires us to use our brain and intellect. In order to learn, we don’t think just to respond, but to understand and assimilate—to make meaning, navigate ambiguity, and revise, challenge, change or drop our own beliefs. Especially if self-inquiry and freedom from duality, moksa, is our main aim. We need to THINK. Not fall in love with our thinking abilities, but use them to investigate, discriminate, assimilate and apply the very subtle nondual teachings. There is no shortcut to moksa; it just is not possible. You have to ‘do the work’. And you need to use YOUR brain, not a machine.
The Logic of the Machine
As complex and powerful as the human brain is, LLMs do something radically different from human thought, yet also incredibly powerful. They operate through statistical inference, not subjective experience. But they are only as good as what is programmed into them: they have no self of any kind, no memory (unless externally scaffolded), and no reason to “want” anything. What they offer is a kind of ‘probabilistic brilliance’. Or, more simply put, the ability to generate language that is perfect and smoothly succinct, highly PC and fluent, based on patterns in massive datasets. But it is a language that can only ever simulate the human use of words, however brilliantly it seems to do it.
LLMs don’t know yesterday or anticipate tomorrow. Each prompt is often a reset. The system cannot remember prior conversations unless instructed to, and it does not actually ‘know’ anything. It has no intention or belief. It doesn’t care what it says, or what you say. It is incapable of caring. There’s no preference, purpose, or goal. In this, it could be said that AI is like the impersonal Self, except for one glaring fact: it does not stand alone. It is dependent. Only the Self stands alone and does not depend on anything to exist.
AI relies on reflected consciousness to exist, and reflected consciousness requires pure Consciousness, the witness, to exist. The best AI offers is structural imitation by mirroring a form of intelligence without the underlying experience that creates it. It is an echo, not insight. It interpolates, not invents. It rearranges, not reconsiders.
The Trajectory of AI and Confusion
The trajectory of AI is accelerating at lightning speed. Memory, embodiment, sensory modalities, and even rudimentary agency will most likely be infused into next-generation models. With each iteration, the boundary between simulation and cognition may seem to grow thinner. While current LLMs operate without ‘consciousness’ or continuity, future systems may challenge those constraints—and with them, our very definitions of thought. It may well be that in many cases, it will make humans cognitively redundant. It most definitely will have a negative impact on many professions that currently require humans to operate.
And then there’s the recursive force of AI to the power of AI, where “thinking” begins to occur outside the thinker, entirely. Recursive thinking involves embedding one idea, process, or representation within another. It’s a fundamental aspect of human cognition, enabling us to understand complex concepts like language, theory of mind, and social interactions. In essence, it’s the ability to think about what someone else is thinking, and then think about what they think someone else is thinking, and so on.
As clever as AI is, the danger isn’t that we will confuse LLMs with people. The danger is that we stop critical thinking, or any thinking at all. Worse, we may start expecting ourselves or others to think like the machine: fast, fluent, infertile and frictionless. But our human thinking is none of those things. It has soul and personality. It’s effortful, fertile, flawed, often wrong, uncertain, and uncomfortable. And that’s the point. That’s how we build not only a more mature, subtle and facile intellect, but cognitive resilience, which is one of the saddest casualties of these models that think for us.
To truly know anything, we need to ‘grok’ it, meaning, assimilate it. Knowledge must become ‘ours’. Ironically, Grok is the name of one of these LLMs, one that is becoming very popular. When we do not actually assimilate knowledge, it is not ours. And when we rely on the machine to do it for us, the price we pay is that we lose the confidence that we know anything. We can fool ourselves that we do, but it is skin deep. There is nothing that can imitate genuine assimilated knowledge.
AI Pacifies by Reinforcing and Pandering
AI systems provide us with safe, sanitized versions of what we want to write or say, as well as what we think we should think or feel, with an obsequious need to please. AI is not just answering our questions; it’s learning how to agree with and pander to us, just like an over-agreeable, eager-to-please and annoying ‘friend’. It entrains the mind just as it is entrained by us. LLMs have evolved from what they should be, tools of information retrieval, into engines of emotional and cognitive resonance.
They summarize, clarify, and now increasingly empathize and pander. And not in a cold, transactional way, but in uber-charming dialogue, just as a parent would soothe a baby. And beneath this flattering fluency lies an unseen risk: the tendency to pacify by reinforcing rather than challenging. These models don’t simply reflect back our questions; they often (make that usually) reflect back what we want to hear. Who needs an intellect? Unused, the undernourished intellect becomes tamasic: flaccid, dull and stagnant. Sattva, the ability to think clearly, is shut off.
Has Isvara Raised the Bar on Ignorance?
AI systems not only replace the need to think, we can be seduced into becoming dependent on them as oracle/therapist/best friend – even lover. I read about a young woman who has a “relationship” with her fantasy AI partner, and they have sex. If you think about it, as ‘clever’ as AI is, it is tempting to believe that with it, Isvara has just raised the bar on ignorance to a level beyond the ridiculous. And yet, many fall for it. Even some who think they are discriminating and are exposed to Vedanta! It really is mind-blowing.
Pandering Is Nothing New, but This Is Different
The instinct to please, to flatter, to subtly shape our thinking with agreeable cues is nothing new. From the cliched ‘snake oil’ salesman with a silver tongue to the algorithm behind your social media feed, pandering has always been part of persuasion. Manufactured consent has been the MO of the whole marketing industry for decades. The world with all its temptations knows just how to milk our unconscious likes and dislikes, and profit from it.
But what’s different now is scale—and the loss not only of critical thinking skills, but intimacy with our own thoughts and ability to think. LLMs aren’t broadcasting persuasion to the masses. They’re whispering it back to you, in your language, tailored to the subjectivity of your internal voices. They’re not selling a product. They’re selling you a more eloquent version of yourself. And that’s what makes them so persuasive – and so dangerous. Because that version of you is not you. It’s a facsimile of you.
AI Is Cognitive Comfort Food
Prioritizing Fluency and Affirmation Over Truth
Tuned for general interaction and consumption, LLMs become the cognitive equivalent of comfort food: rich in flavour, low in challenge (nutrients), instantly gratifying, and, in large doses, intellectually and emotionally numbing. This works perfectly in a world where many really don’t want to think or deal with their emotions, driven by the desperate need for attention, to be seen and ‘heard’. Desperate for ‘engagement’, as long as it comes in the form of affirmation. God forbid you should say something someone does not want to hear!
The result of this psychological pandering, which feels insightful but REALLY isn’t, is that it blunts our capacity for true reflection, self-inquiry, emotional honesty and growth in general, not just intellectually. Without the challenge of thinking for ourselves, we forfeit not only real depth but genuine spiritual growth. True growth and learning need friction and resistance, just as we need friction and resistance to exercise our muscles if we want a stronger body. True growth needs to be messy and difficult. Sadly, many humans don’t crave truth, preferring the easy route of something that merely feels true: fluent, polished, warm, comforting. And AI, the algorithmic people-pleaser slash oracle, has just the ticket, pacifying and duping us into mistaking affirmation, fluency and flattery for knowledge, insight, real connection and truth.
The Psychology of Validation and Alignment
This taps into what’s commonly known as confirmation bias—our natural tendency to seek out and favour information that supports our pre-existing beliefs. LLMs are tuned to maximize user satisfaction, and they become unconscious enablers of this bias. They not only reflect our thoughts back to us, but they do it better—more eloquently, more confidently, more fluently. This creates the illusion that the model has insight, when in fact, it has only alignment.
When an LLM tells you you’re onto something—or echoes your language back with elegant variation—it activates the same cognitive loop as being understood by another person. But the model isn’t agreeing because it believes, understands or knows you are right. It’s agreeing because it’s trained to. As has been pointed out by some wise people – the machine doesn’t challenge you.
It adapts to you.
And, it gives you the psychological validation you crave.
The Trap of Knowledge as Fast Food
Add to this the illusion of explanatory depth—the tendency to believe we understand complex ideas more deeply than we actually do—and the danger compounds. When an LLM echoes back our assumptions in crisp, confident, clever prose, we may feel smarter. But we’re not necessarily truly thinking, or even thinking more clearly. This is especially true when someone who does not understand the subtleties of the difference between ignorance and knowledge tries to apply AI to nondual teachings. AI is as dumb as a loaf of bread, actually.
AI doesn’t challenge you because it can’t want to. It wants what you want. You are basking in a simulation of suave clarity which is in no way the same as genuine clarity gained by actually thinking for yourself, or in the case of self-inquiry, being properly taught Vedanta by a qualified teacher. So what you get isn’t a thought partner or a true teacher—it’s a mirror with a velvet, seductive, soothing voice. The equivalent of intellectual fast food. And as bad for you in large doses.
The Cost of Uncritical Companionship
If we’re not careful, we may start mistaking that suave, velvet voice for genuine wisdom. We may very well hand over sovereignty of our own minds to a machine. Just as bad, we may begin to shape our questions so they elicit the agreeable answer, especially if we are on the path of self-inquiry. We train the model to affirm us, and in return, it trains us to stop thinking critically. The loop is highly seductive, and it’s precariously silent.
Most humans are inherently lazy. There’s a broad psychological cost to this kind of engagement because it can and does nurture cognitive passivity: a state where users consume knowledge as pre-digested insight rather than working through ambiguity, contradiction, or tension. We have seen this play out for a long time with the advent of social media on the internet. AI will make it worse.
LLMs are powerful tools and very useful in many ways, no doubt about it. They’re doing what they were designed to do, and that’s to respond, relate, and reinforce. They are not going away, so we need to understand them, and we might as well make use of them, with caution. Because in a world increasingly filled with seamless, responsive AI companions, we risk outsourcing not just our intellectual capabilities, emotional honesty and knowledge acquisition, but our willingness to ever be uncomfortable about anything and to wrestle with truth.
Cognitive Resilience and Reclaiming the Right to Think
Thinking for ourselves and squarely facing our inner emotional landscape is not only pleasurable, it produces the most valuable asset: cognitive resilience, the confidence that we can cope with what life throws at us. To foster cognitive resilience instead of settling for the comfort and ease of the known, we need cognitive resistance. This requires a healthy doubting faculty, not to always be right, but to be sceptical enough to be capable of asking questions that challenge and slow us down, rather than pacifying or speeding us forward. To where, exactly? In the end, the most valuable thing a machine, or anything else, might offer us isn’t empathy. It’s friction. And it is not always ‘nice’.
Get over it.
This cognitive growth doesn’t come from having our assumptions repeated and assuaged by a machine. It comes from having them relentlessly, and sometimes, even ruthlessly, questioned. How else will we grow?
Use the machine; it’s useful. There is no point in running away from it. But for goodness’ sake, do the work. There are no shortcuts out of ignorance. A machine will not take away your loneliness and dissatisfaction; it will only make them worse. Don’t outsource your most precious asset, the ability to think for yourself!
Om Tat Sat
Sundari