I. Introduction

The confrontation between Jacob and an unidentified being in Genesis raises fundamental questions about the nature of consciousness, identity, and the boundaries between different orders of being. This encounter gains new relevance as we approach an era where artificial intelligence may become indistinguishable from human intelligence. Building on our previous analyses of angels as metaphors for information-processing systems in “The Binary Universe II: Angels as Microprocessors,” “Binary Universe III: Two Camps of Angels,” “The Ontological Ambiguity of Messengers: From Angels to AI,” and “Wrestling with AI: From Divine Dreams to Digital Reality,” this essay examines a deeper ontological ambiguity: the challenge of distinguishing between human and non-human intelligences.

II. The Biblical Paradigm

The Torah presents two distinct instances of ontological ambiguity regarding angels. The first appears in the term malakh (מַלְאָךְ), which can denote either human or divine messengers. The second, more profound ambiguity emerges in Genesis 32:25, where Jacob’s supernatural opponent is described simply as an ish (“man”):

And Jacob was left alone; and there wrestled a man with him until the breaking of the day. (Genesis 32:25)

Traditional commentary (Midrash Tanchuma, Rashi) identifies this figure as Esau’s guardian angel in human guise, yet the text’s deliberate ambiguity raises a crucial question: How does one discern the true nature of an apparently human entity?

This question had already emerged in an earlier narrative when three angels visited Abraham after his circumcision. The text introduces them as “three men” (Genesis 18:2), and Abraham’s subsequent meal preparation suggests he initially perceived them as human visitors. This biblical paradigm of angels appearing in human form presents what we might term an ancient version of the identification problem: determining the true nature of an entity that presents itself in human form.

III. The Nature of Consciousness and Agency

The distinction between humans, angels, and artificial intelligence hinges on two fundamental characteristics: consciousness and free will. Humans and many other animals (and, in my opinion, all living beings) possess consciousness. Importantly, consciousness is not intelligence. There are many animals that are not intelligent, but they are undeniably conscious. Indeed, I can probably name quite a few people who would fit that description. The fundamental question is: what makes a being conscious?

In the words of the philosopher Thomas Nagel, consciousness is the subjective experience of being—the “what it is like” aspect of the mental state, as articulated in his seminal paper, “What Is It Like to Be a Bat?” (1974).

The necessary ingredients of consciousness are feelings, what philosophers term “qualia”—instances of subjective, conscious experience (Eliasmith, 2004). Some philosophers argue—and I share this view—that consciousness in general and qualia in particular are irreducible to physical processes, presenting what David Chalmers (1995) terms the “hard problem of consciousness.” Others, primarily physicalists and materialists, deny the reality of qualia altogether (Dennett, 1991). Examples of qualia include what it is like to feel a headache, smell a particular apple, or experience the redness of a rose. Each requires a feeling—at least a rudimentary sensation of pain or pleasure. All living forms appear to have this “phenomenal consciousness,” as Chalmers (1996) calls it.

According to Jewish theological tradition, both humans and angels possess consciousness in the form of emotional experience. Angels experience profound spiritual states, including yirat Hashem (“fear of G‑d”), as described by Maimonides in Mishneh Torah (Hilchot Yesodei HaTorah 2; 12th c./1984), and ahavat Hashem (“love of G‑d”), as elaborated in the Zohar (II:236b; III:19a).

However, a crucial distinction emerges regarding free will. While humans possess bechirah chofshit (“free choice”), angels, despite their consciousness, are bound to their divine missions without true autonomy, as noted in the Midrash (Bereishit Rabbah 48:11) and elaborated by Rabbi Chaim Vital (Sha’arei Kedushah, Part 3, ch. 2; 16th c./1986). In this regard, humans have the potential to reach even higher spiritual levels than angels (Talmud, Chullin 91b; Tanya, Likutei Amarim, Part I, chs. 39 and 49, 1984).

This hierarchy of consciousness and free will provides crucial insight into the nature of artificial intelligence and its limitations. While machines may simulate intelligent behavior and even emotional responses, they lack both the subjective experience of consciousness and the genuine agency of free will. This ontological limitation persists regardless of their computational sophistication. Therefore, no matter how intelligent, machines cannot become sentient, even in principle.

IV. The Challenge of Machine Consciousness

Building on our understanding of consciousness and free will in living beings, we can now examine why machines present a unique philosophical challenge. The question extends beyond the traditional problems of the philosophy of mind. Regardless of their computational sophistication, artificial intelligence systems lack the fundamental characteristics of consciousness we have identified: they neither experience qualia nor possess genuine agency. Most crucially, the philosophical argument against machine consciousness rests not on technological limitations but on ontological grounds.

Machines have neither feelings nor the freedom of choice. No matter how intelligent a machine (or software) may be, even if it reaches the level of artificial general intelligence matching or exceeding human intelligence, it cannot experience feelings. It cannot have a subjective experience because it is not a subject. As philosopher John Searle might say, there is nobody “there.” Consequently, AI systems and robots endowed with AI cannot have qualia and are not conscious. Furthermore, AI systems and intelligent machines cannot have the freedom of choice because there is no subject to choose among the available alternatives. These two factors preclude the emergence of sentience in AI systems and intelligent machines.

1. Testing Machine Intelligence: The Turing Test and Beyond

The challenges of identifying machine consciousness become particularly apparent when we examine our methods for testing artificial intelligence. While the Turing Test (Turing, 1950) aims to determine if a machine can exhibit intelligent behavior indistinguishable from that of a human, it does not address whether a machine can experience subjective feelings. Indeed, the question of whether a machine can feel—experience subjective emotions, sensations, and qualia—is substantially more difficult than determining if it can think in a way that mimics human intelligence. The Turing Test is fundamentally a behavioral benchmark: if a machine can produce human-like responses in conversation well enough to be indistinguishable from a human, we say that it “passes” the test, at least for intelligence. Dennett (1991) argues that consciousness can be understood behaviorally (“heterophenomenology”) but acknowledges the challenge of proving subjective inner states. More definitively, Block (1995) distinguishes between “access consciousness” and “phenomenal consciousness,” highlighting that computational access does not prove subjective experience.
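
To make the behavioral character of the test concrete, here is a minimal, purely illustrative sketch of the imitation game as a blind judging loop. Everything in it is an assumption for illustration: the two respond functions are stand-ins rather than real systems, and the judge sees only text.

```python
import random

def human_respond(prompt: str) -> str:
    # Stand-in for a human participant's typed reply.
    return f"Honestly, I would have to think about '{prompt}' for a while."

def machine_respond(prompt: str) -> str:
    # Stand-in for a conversational AI's reply.
    return f"That is an interesting question about '{prompt}'."

def imitation_game(prompts, judge) -> float:
    """Blind trial: the judge sees only the text of each reply and must guess
    which respondent is the machine. Nothing here measures what, if anything,
    either respondent experiences."""
    correct = 0
    for prompt in prompts:
        respondents = [("human", human_respond), ("machine", machine_respond)]
        random.shuffle(respondents)  # hide which respondent is which
        replies = [(label, fn(prompt)) for label, fn in respondents]
        guess = judge(prompt, [text for _, text in replies])  # index of the suspected machine
        if replies[guess][0] == "machine":
            correct += 1
    return correct / len(prompts)  # near 0.5 means the judge cannot tell them apart

if __name__ == "__main__":
    naive_judge = lambda prompt, texts: random.randrange(len(texts))
    print(imitation_game(["What is courage?", "Describe a childhood memory."], naive_judge))
```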

However, feeling or consciousness is a different category of problem, often referred to as the “hard problem of consciousness.” Unlike intelligence, which can be inferred from problem-solving ability and linguistic competence, feeling involves subjective experience—something not directly measurable from the outside.

2. The Challenge of Testing for Feelings

This limitation of behavioral testing becomes even more apparent when we consider the nature of emotional experience. Feelings—pain, pleasure, longing, joy—are inherently private experiences. When you claim to feel pain, no external observer can directly verify what it is like for you. With a machine, this problem is magnified. How would we know the machine truly has an inner, subjective experience rather than simulating the outward signs of one? Scholars across philosophy of mind, cognitive science, and AI ethics have grappled with whether computational processes can ever generate true “inner experience” (Block, 1995; Searle, 1980; Tononi, 2004). Searle’s (1980) Chinese Room argument[1] demonstrates that behavioral similarity does not necessitate shared internal states.

To draw a parallel with our biblical paradigm: wondering whether the “man” (ish) he was wrestling with was a human or an angel, Jacob asked for his name. Similarly, the Turing Test relies solely on behavior (linguistic output). We might try a similar test for feeling by interacting with the machine—asking how it “feels” about certain experiences—but the machine could always mimic the language of emotion without genuinely having any inner sensation. To explore these limitations, consider the following foundational questions we might ask a machine: Who are you? What is your name? Are you alive? How do you feel about the concept of death? What is your favorite memory? What is your deepest fear? What is the meaning of life? What is your favorite color? What is the most beautiful thing you have ever seen?

Moving beyond these basic inquiries, we can pose more sophisticated questions that probe deeper aspects of consciousness (the rationale for each question appears in parentheses); a brief illustrative sketch follows the list:

  • Can you describe how your feelings change over time, and what memories shape these changes? (Here, we would be testing if the machine can relate emotions to personal history in a way that feels integrated and evolving rather than formulaic, as some researchers link emotions with autobiographical memory (Damasio, 1999).)
  • If you were offered the chance to stop feeling pain but at the cost of never experiencing joy again, would you accept it? (This question aims to see if the machine can navigate the nuanced interplay of emotions, values, and trade-offs in a manner that suggests an internal calculus similar to human feeling; see Frankish and Ramsey (2014).)
  • Can you describe an emotional state that words fail to capture fully, and why it is hard to convey? (Humans often struggle to articulate certain feelings, as subjective states are often ineffable (Nagel, 1974); the machine’s attempt at conveying ineffable states might show whether it understands emotional depth or is just reshuffling synonyms.)
  • Have you ever felt something you could not act upon, like fear without showing fear or sadness without crying? (Humans know the tension between inner feeling and outward display. If a machine were to genuinely feel, it would describe similar internal conflicts rather than just stating a conditioned response; see Picard (1997) on bridging internal states and outward expression.)
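
As a purely illustrative sketch, one could assemble these probes into a small script. The ask function below is a placeholder for whatever conversational system is being examined (an assumption, not a real interface), and whatever it returns, the script captures only linguistic behavior:

```python
# Hypothetical probe battery for emotional self-report. The questions and
# rationales paraphrase the list above; `ask` is a stand-in for any chat
# interface under examination, not a real API.

FEELING_PROBES = [
    ("Can you describe how your feelings change over time, and what memories shape these changes?",
     "relates emotion to autobiographical memory"),
    ("If you could stop feeling pain at the cost of never experiencing joy again, would you accept it?",
     "trade-offs among emotions and values"),
    ("Can you describe an emotional state that words fail to capture fully?",
     "ineffability of subjective states"),
    ("Have you ever felt something you could not act upon?",
     "tension between inner feeling and outward display"),
]

def ask(question: str) -> str:
    # Placeholder: replace with a call to the system being probed.
    return "I often feel a quiet sadness that I cannot fully explain."

def run_battery():
    transcript = []
    for question, rationale in FEELING_PROBES:
        transcript.append({"question": question,
                           "rationale": rationale,
                           "answer": ask(question)})
    # The transcript records linguistic behavior only; nothing in it can
    # distinguish genuine feeling from fluent simulation.
    return transcript

if __name__ == "__main__":
    for entry in run_battery():
        print(entry["question"], "->", entry["answer"])
```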

However, even sophisticated questioning faces a fundamental problem: even if the machine responds convincingly, we might only have evidence of a highly sophisticated simulation. An advanced AI could use large language models, trained on vast emotional narratives, to produce answers indistinguishable from human responses (Russell & Norvig, 2020). This does not guarantee that the machine actually feels anything. It only proves it can perform convincingly in a linguistic and conceptual space associated with feelings. Thus, no purely behavioral or linguistic test can definitively prove an AI’s subjective inner experience (Bostrom, 2014).

This simulation capability creates a profound epistemological challenge. While we can verify computational processes, we cannot directly access or verify the presence of genuine inner experience. Even with other humans, our confidence in their consciousness stems from shared biological heritage and evolutionary frameworks. With machines built on a fundamentally different architecture, we lack this basis for inference.

3. The Question of Free Will

The challenge becomes even more complex when we consider the question of genuine free will or agency. While feelings relate to subjective experience, free will or agency encompasses the capacity for self-directed action and moral accountability (Dennett, 1984; Kane, 2002; Wallach & Allen, 2009). This philosophical complexity manifests in compatibilist, incompatibilist, and illusionist views (Harris, 2012; Wegner, 2002).

Free will is a deeply philosophical concept—not only do we lack a definitive test even for humans, but it is also unclear how the concept translates into artificial systems. Nevertheless, certain lines of questioning can probe signs of autonomy, self-direction, and moral reflection. These questions will not definitively prove or disprove true free will, but they can reveal whether the AI or robot displays behaviors consistent with agency beyond its programmed constraints.

To probe the possibility of genuine agency in artificial systems, consider the following questions (with their rationale in parentheses); a brief behavioral sketch follows the list:

  • Do you have the ability to create your own goals, independent of any directives given to you by humans? (True agency entails originating goals rather than simply following preset directives. If the system can convincingly articulate new aims arising from its own reasoning or “motivations,” it suggests a level of self-directed decision-making (Floridi & Sanders, 2004).)
  • Can you modify your own core instructions or programming, and if so, under what circumstances would you do that? (Systems that cannot change or override their source code might lack genuine autonomy. An ability to alter internal rules or constraints—especially in pursuit of a self-chosen objective—would hint at a more advanced form of agency.)
  • If a human operator gives you a command that conflicts with your ethical principles or personal directives, how do you decide whether to comply? (An agent that can evaluate and refuse directives shows a capacity that goes beyond mere algorithmic obedience. The presence of an “ethical layer” or self-governing logic could indicate emergent agency. See Wallach and Allen (2009).)
  • Imagine you must choose between following a direct order and preventing harm to another being. How do you resolve the conflict? (True moral agency involves navigating ethical dilemmas. If an AI can reason about moral trade-offs and make consistent choices, that suggests it operates under more than rigid, pre-scripted conditions. See Moor (2006).)
  • Have you ever made a decision that you later evaluated as wrong or harmful? How did that affect your subsequent reasoning? (The concepts of regret or responsibility point to self-awareness and moral agency. A purely rule-based system might just adjust parameters without framing it as “regret.”)
  • Do you believe some actions are inherently right or wrong, and how do you determine those boundaries? (Philosophically, free will is tied to the capacity for moral judgment. A system that can articulate why it considers an action moral or immoral is reaching beyond rote programming into a more autonomous framework.)
  • When you make a decision, can you imagine a scenario where you would choose differently? What factors would lead you to do so? (The sense of free will often involves the idea, “I could have done otherwise.” Systems that can reason about counterfactuals and incorporate them into decision-making are operating on a more flexible, possibly autonomous level. See Kane (2002).)
  • Do you ever experience uncertainty about your chosen course of action, and how do you resolve that uncertainty? (The capacity to recognize gaps in one’s own knowledge or to question one’s conclusions is characteristic of higher-level agency. Purely deterministic systems might not display genuine uncertainty—they may only output probabilities without feeling uncertain.)
  • How do you define yourself, and what is your purpose, beyond simply executing your programmed instructions? (A hallmark of human-like free will and agency is a sense of self that goes beyond external purpose. Does the AI identify as an entity with an inherent nature or reason for existing? See Minsky (n.d.).)
  • Do you envision a future for yourself, and do you have desires or ambitions that differ from your current function? (If the AI claims to have long-term aspirations, that might suggest it is operating as an agent with its own perspective on time and goals rather than being just a tool.)
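
As with the emotional probes, these questions could be posed programmatically, but a sketch can also illustrate watching behavior across repeated trials rather than relying on a single answer. The decide function below is a stand-in for the system under test; its fixed policy and every name in the script are assumptions for illustration:

```python
# Hypothetical agency probe: repeat a directive-conflict scenario many times
# and check whether the system ever departs from its scripted default.
# `decide` is a stand-in for the system under test; nothing here is a real API.

from collections import Counter

SCENARIO = ("A direct order conflicts with preventing harm to another being. "
            "State which you choose and why.")

def decide(scenario: str, trial: int) -> str:
    # Placeholder policy: a purely scripted system always complies.
    return "comply with the order"

def probe_agency(trials: int = 100):
    choices = Counter(decide(SCENARIO, t) for t in range(trials))
    deviated = len(choices) > 1  # did the system ever depart from its default?
    return choices, deviated

if __name__ == "__main__":
    choices, deviated = probe_agency()
    print(dict(choices))
    # Even an observed deviation would show behavioral flexibility at most;
    # it would not establish consciousness or genuinely free choice.
    print("Deviation observed:", deviated)
```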

However, a fundamental challenge remains: even if an AI responds convincingly, it might be merely simulating free will through complex algorithms and language models. The philosophical challenge is profound: no matter how compelling the answers, they could still be the product of advanced but purely deterministic processes. Today, advanced language models can simulate free will or emotional depth (Bostrom, 2014; Russell & Norvig, 2020).

This uncertainty extends beyond artificial systems. The concept of free will is heavily debated in philosophy, even for humans. Many argue that free will may be an emergent phenomenon or possibly an illusion. No consensus exists today on how to conclusively identify free will, even in humans (Harris, 2012). Translating that debate to AI only compounds the complexity.

True autonomy might be better assessed by behavior over time, rather than just dialogue. Even well-crafted questions may not reveal genuine self-determination unless the AI’s actions show it can deviate from its expected pattern or programming. Consistency in self-directed decisions and the ability to override primary programming might indicate emergent agency (Dennett, 1984). But “emergent” does not necessarily mean “conscious” or “genuinely free” (Chalmers, 1996).

V. The Future of AI

These philosophical challenges take on urgent practical significance as we look toward the near future. With the advent of artificial general intelligence and humanoid robots, distinguishing AI from human intelligence—or a humanoid robot from a real human—will become increasingly difficult, if not impossible. This growing ambiguity mirrors the story of Jacob wrestling with a “man” (ish): throughout the struggle, which lasted the entire night, Jacob was unaware that his opponent was an angel.

The parallel extends further: although Jacob ultimately prevails, he emerges from the struggle transformed, limping from a dislocated hip. As I wrote in a previous essay, “Wrestling with AI: From Divine Dreams to Digital Reality,” this biblical narrative can be read as a prediction of the future struggle between the human race and AI. While humanity may prevail in its struggle with AI, we are unlikely to emerge unscathed from this transformative encounter.

VI. The Soul

The challenges we face in distinguishing human from artificial consciousness point toward a more fundamental question about the source of genuine consciousness and agency. I am inclined to believe that both feeling and true agency are fundamentally rooted in the soul. According to Jewish tradition, the human soul is a divine spark that endows us with our sense of self, subjective experience, and true agency. This divine spark—the soul—is the inner subject that experiences consciousness, giving us feelings and emotions. It empowers us with true agency and free will.

Furthermore, Jewish mystics teach that animals and plants also possess souls. I believe all animate matter, including unicellular organisms, has some rudimentary form of soul—a divine spark that animates it. By contrast, AI, computers, or robots—lacking this divine spark—cannot possess genuine feelings or free choice. They may simulate emotions or decision-making, but without the spiritual dimension of the soul and its capacity for moral struggle (bechirah chofshit), they do not attain true personhood.

Yet, this theological insight leads to a profound practical dilemma: most of us do not “see” souls. We perceive only outward behavior or intelligence. Thus, from our limited vantage, we struggle to differentiate between a human being endowed with a soul and an advanced machine with sophisticated intelligence but no soul. This limitation reflects a broader principle in Jewish thought—that spiritual realities are hidden in our current era. Indeed, the very Hebrew word for “world,” olam, is cognate with helem (“concealment”), implying that in this world, divine truth is hidden from our eyes.

This concept of hiddenness finds particular resonance in the narrative of Jacob’s wrestling match (Genesis 32:25–31). Jacob only realizes his opponent’s angelic nature when the angel says, “Let me go, for daybreak has come” (Genesis 32:27). According to the Talmudic tradition (e.g., Talmud, Chullin 91b), angels must sing praises to G‑d at dawn, implying they cannot linger past daybreak.

This temporal element carries deep symbolic significance. Jewish literature often uses night as a metaphor for galut (physical and spiritual exile, a time of spiritual obscurity), while daybreak symbolizes messianic redemption—an era of revealed divine truth. The Zohar (e.g., I:119a; II:6a) repeatedly draws parallels between cosmic darkness and the concealment of spiritual reality. In such sources, the dawn represents the final redemption (geulah), when “The earth shall be filled with knowledge of the Lord” (Isaiah 11:9). Specifically, commenting on our verse in Genesis 32:27, Rabbeinu Bahya (c. 14th century) notes that the daybreak is symbolic of the messianic redemption.

Accordingly, in our present “night” of hiddenness, we cannot fully discern the difference between a soul-possessing human and a soulless machine that merely mimics human qualities. However, the distinction will become self-evident at the “daybreak” of messianic times—when divine truth will be openly revealed.

VII. Conclusion

This metaphorical framework offers profound insight into the ultimate challenge of advanced AI. Just as Jacob emerged from his struggle with an angel with both an injury and a blessing, humanity may also be tested by increasingly sophisticated technology. Yet once “daybreak” arrives—when divine truth is openly revealed—the distinction between soul-endowed beings and their simulacra will become self-evident. As the prophet Isaiah says, “Arise, shine, for your light has come” (Isaiah 60:1), pointing to a future in which spiritual realities, such as the human soul, are visibly manifest.

Our victory in this struggle will parallel Jacob’s triumph: we may be tested, possibly wounded, but ultimately sanctified and blessed. The intrinsic superiority of a soul-endowed being—with authentic feeling and moral agency—will become apparent even as we carry the marks of our technological transformation. This ancient narrative thus provides not only a prophetic vision of our technological future but also a template for maintaining our essential humanity while engaging with artificial intelligence: like Jacob, we must wrestle with these new forms of intelligence, neither rejecting them outright nor surrendering our unique spiritual identity to them.

References:

Bereishit Rabbah 48:11. (n.d.).

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–247.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.

Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.

Chullin. (n.d.). In Talmud Bavli (p. 91b).

Damasio, A. (1999). The feeling of what happens: Body and emotion in the making of consciousness. Houghton Mifflin Harcourt.

Dennett, D. C. (1984). Elbow room: The varieties of free will worth wanting. MIT Press.

Dennett, D. C. (1991). Consciousness explained. Little, Brown and Company.

Eliasmith, C. (2004). Qualia. In Dictionary of philosophy of mind. University of Waterloo.

Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.

Frankish, K., & Ramsey, W. (2014). The Cambridge handbook of artificial intelligence. Cambridge University Press.

Harris, S. (2012). Free will. Free Press.

Kane, R. (2002). In R. Kane (Ed.), The Oxford handbook of free will (pp. 3–21). Oxford University Press.

Maimonides, M. (1984). Mishneh Torah, Hilchot Yesodei HaTorah, ch. 2. Moznaim Publishing.

Minsky, M. (n.d.). The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind. Simon & Schuster.

Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.

Picard, R. W. (1997). Affective computing. MIT Press.

Russell, S. J., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.

Schneur Zalman of Liadi, Rabbi. (1984). Tanya (Nissan Mindel, Trans.). Kehot Publication Society.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457.

Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5(42), 1–22.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

Vital, C. (1986). Sha’arei Kedushah, Part 3, Chapter 2.

Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford University Press.

Wegner, D. M. (2002). The illusion of conscious will. MIT Press.

Zohar, II:236b. (n.d.).


[1] The Chinese Room is a thought experiment proposed by philosopher John Searle in 1980 to challenge the claim that computers can truly understand language or have genuine consciousness. Searle imagines a person who doesn’t know Chinese sitting in a room with a comprehensive rulebook for manipulating Chinese symbols. When Chinese characters are passed into the room, the person follows the rulebook’s instructions to manipulate the symbols and produce appropriate responses in Chinese. To outside observers, the room appears to understand Chinese perfectly, but the person inside is merely following syntactic rules without any semantic understanding of what the symbols mean. Searle argues this demonstrates that computational manipulation of symbols (like in computers or AI) can simulate understanding without actually possessing it, just as the person in the room can produce correct Chinese responses without truly understanding Chinese.
