{"id":86693,"date":"2026-01-14T17:23:54","date_gmt":"2026-01-14T17:23:54","guid":{"rendered":"https:\/\/www.noemamag.com"},"modified":"2026-01-14T17:46:28","modified_gmt":"2026-01-14T17:46:28","slug":"the-mythology-of-conscious-ai","status":"publish","type":"wpm-article","link":"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai","title":{"rendered":"The Mythology Of Conscious AI"},"content":{"rendered":"<p>For centuries, people have fantasized about playing God by creating artificial versions of human beings. This is a dream reinvented with every breaking wave of new technology. With genetic engineering came the prospect of human cloning, and with robotics that of humanlike androids.<\/p><p>The rise of artificial intelligence (AI) is another breaking wave \u2014 potentially a tsunami. The AI systems we have around us are arguably already intelligent, at least in some ways. They will surely get smarter still. But are they, or could they ever be, conscious? And why would that matter?<\/p><p>The cultural history of synthetic consciousness is both long and mostly unhappy. 
From Yossele the Golem to Mary Shelley\u2019s \u201cFrankenstein,\u201d HAL 9000 in \u201c2001: A Space Odyssey,\u201d Ava in \u201cEx Machina,\u201d and Klara in \u201cKlara and The Sun,\u201d the dream of creating artificial bodies and synthetic minds that both think and <em>feel<\/em> rarely ends well \u2014 at least, not for the humans involved. One thing we learn from these stories: If artificial intelligence is on a path toward real consciousness, or even toward systems that persuasively seem to be conscious, there\u2019s plenty at stake \u2014 and not just disruption in job markets.<\/p><p>Some people think conscious AI is already here. In a 2022 interview with The Washington Post, Google engineer Blake Lemoine made a startling claim about the AI system he was working on, a chatbot called LaMDA. He <a href=\"https:\/\/www.washingtonpost.com\/technology\/2022\/06\/11\/google-ai-lamda-blake-lemoine\/\">claimed<\/a> that it was conscious, that it had feelings and that it was, in an important sense, like a real person. Despite a flurry of media coverage, Lemoine wasn\u2019t taken all that seriously. Google dismissed him for violating its confidentiality policies, and the AI bandwagon rolled on.<\/p><p>But the question he raised has not gone away. Firing someone for breaching confidentiality is not the same as firing them for being wrong. As AI technologies continue to improve, questions about machine consciousness are increasingly being raised. David Chalmers, one of the foremost thinkers in this area, has suggested that conscious machines <a href=\"https:\/\/www.bostonreview.net\/articles\/could-a-large-language-model-be-conscious\/\">may be possible<\/a> in the not-too-distant future. Geoffrey Hinton, a true AI pioneer and recent Nobel Prize winner, <a href=\"https:\/\/www.youtube.com\/watch?v=vxkBE23zDmQ\">thinks they exist<\/a> already. 
In late 2024, a group of prominent researchers wrote a widely publicized article about the need to take the <a href=\"https:\/\/arxiv.org\/abs\/2411.00986v1\">welfare of AI systems<\/a> seriously. For many leading experts in AI and neuroscience, the emergence of machine consciousness is a question of <em>when, <\/em>not <em>if<\/em>.<\/p><p>How we think about the prospects for conscious AI matters. It matters for the AI systems themselves, since \u2014 if they are conscious, whether now or in the future \u2014 with consciousness comes moral status, the potential for suffering and, perhaps, rights.<\/p><p>It matters for us too. What we collectively think about consciousness in AI already carries enormous importance, regardless of the reality. If we feel that our AI companions really feel things, our psychological vulnerabilities can be exploited, our ethical priorities distorted, and our minds brutalized \u2014 treating conscious-seeming machines as if they lack feelings is a psychologically unhealthy place to be. And if we do endow our AI creations with rights, we may not be able to turn them off, even if they act against our interests.<\/p><p>Perhaps most of all, the way we think about conscious AI matters for how we understand our own human nature and the nature of the conscious experiences that make our lives worth living. If we confuse ourselves too readily with our machine creations, we not only overestimate them, we also underestimate ourselves.<\/p><h2 class=\"wp-block-heading\" id=\"h-the-temptations-of-conscious-ai\">The Temptations Of Conscious AI<\/h2><p>Why might we even think that AI could be conscious? After all, computers are very different from biological organisms, and the only things most people currently agree are conscious are made of meat, not metal.<\/p><p>The first reason lies within our own psychological infrastructure. 
As humans, we know we are conscious and like to think we are intelligent, so we find it natural to assume the two go together. But just because they go together <em>in us <\/em>doesn\u2019t mean that they go together <em>in general<\/em>.<\/p><p>Intelligence and consciousness are different things. Intelligence is mainly about <em>doing<\/em>: solving a crossword puzzle, assembling some furniture, navigating a tricky family situation, walking to the shop \u2014 all involve intelligent behavior of some kind. A useful general definition of intelligence is the ability to achieve complex goals by flexible means. There are many other definitions out there, but they all emphasize the functional capacities of a system: the ability to transform inputs into outputs, to <em>get things done<\/em>.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;If we confuse ourselves too readily with our machine creations, we not only overestimate them, we also underestimate ourselves.&#8221;    <\/div>\n  <\/blockquote>\n<\/figure><p>An artificially intelligent system is measured by its ability to perform intelligent behavior of some kind, though not necessarily in a humanlike form. 
The concept of <a href=\"https:\/\/arxiv.org\/abs\/0706.3639\">artificial <em>general <\/em>intelligence<\/a> (AGI), by contrast, explicitly references human intelligence. It is supposed to match or exceed the cognitive competencies of human beings. (There\u2019s also artificial superintelligence, ASI, which happens when AI bootstraps itself beyond our comprehension and control. ASI tends to crop up in the more existentially fraught scenarios for our possible futures.)<\/p><p>Consciousness, in contrast to intelligence, is mostly about <em>being<\/em>. <a href=\"https:\/\/philpapers.org\/rec\/NAGWII\">Half a century ago<\/a>, the philosopher Thomas Nagel famously offered that \u201can organism has conscious mental states if and only if there is something it is like to<em> be <\/em>that organism.\u201d Consciousness is the difference between normal wakefulness and the oblivion of deep general anesthesia. It is the experiential aspect of brain function and especially of perception: the colors, shapes, tastes, emotions, thoughts and more, that give our lives texture and meaning. The blueness of the sky on a clear day. The bitter tang and headrush of your first coffee.<\/p><p>AI systems can reasonably lay claim to intelligence in some form, since they can certainly <em>do <\/em>things, but it is harder to say whether there is anything-it-is-like-to<em>-be <\/em>ChatGPT.<\/p><p>The propensity to bundle intelligence and consciousness together can be traced to three baked-in psychological biases.<\/p><p>The first is <em>anthropocentrism<\/em>. 
This is the tendency to see things through the lens of being human: to take the human example as definitional, rather than as one example of how different properties might come together.<\/p><p>The second is <em>human exceptionalism<\/em>: our unfortunate habit of putting the human species at the top of every pile, and sometimes in a different pile altogether (perhaps closer to angels and Gods than to other animals, as in the medieval <em>Scala naturae<\/em>). And the third is <em>anthropomorphism<\/em>. This is the tendency to project humanlike qualities onto nonhuman things based on what may be only superficial similarities.<\/p><p>Taken together, these biases make it hardly surprising that when things exhibit abilities we think of as distinctively human, such as intelligence, we naturally imbue them with other qualities we feel are characteristically or even distinctively human: understanding, mindedness and consciousness, too.<\/p><p>One aspect of intelligent behavior that\u2019s turned out to be particularly effective at making some people think that AI could be conscious is language. This is likely because language is a cornerstone of human exceptionalism. Large Language Models (LLMs) like OpenAI\u2019s ChatGPT or Anthropic\u2019s Claude have been the focus of most of the excitement about artificial consciousness. Nobody, as far as I know, has claimed that DeepMind\u2019s AlphaFold is conscious, even though, under the hood, it is rather similar to an LLM. All these systems run on silicon and involve artificial neural networks and other fancy algorithmic innovations such as transformers. AlphaFold, which predicts protein structure rather than words, just doesn\u2019t pull our psychological strings in the same way.<\/p><p>The language that we ourselves use matters too. Consider how normal it has become to say that LLMs \u201challucinate\u201d when they spew falsehoods. 
Hallucinations in human beings are mainly conscious experiences that have lost their grip on reality (uncontrolled perceptions, one might say). We hallucinate when we hear voices that aren\u2019t there or see a dead relative standing at the foot of the bed. When we say that AI systems \u201challucinate,\u201d we implicitly confer on them a capacity for experience. If we must use a human analogy, it would be far better to say that they \u201cconfabulate.\u201d In humans, confabulation involves making things up without realizing it. It is primarily about <em>doing<\/em>, rather than <em>experiencing<\/em>.<\/p><p>When we identify conscious experience with seemingly human qualities like intelligence and language, we become more likely to see consciousness where it doesn\u2019t exist, and to miss seeing it where it does. We certainly should not just assume that consciousness will come along for the ride as AI gets smarter, and if you hear someone saying that real artificial consciousness will magically emerge at the arbitrary threshold of AGI, that\u2019s a sure sign of human exceptionalism at work.<\/p><p>There are other biases in play, too. There\u2019s the powerful idea that everything in AI is changing exponentially. Whether it\u2019s raw compute as indexed by Moore\u2019s Law, or the new capabilities available with each new iteration of the big tech foundation models, things surely are changing quickly. Exponential growth has the psychologically destabilizing property that what\u2019s ahead seems impossibly steep, and what\u2019s behind seems irrelevantly flat. Crucially, things seem this way wherever you are on the curve \u2014 that\u2019s what makes it exponential. Because of this, it\u2019s tempting to feel like we are always on the cusp of a major transition, and what could be more major than the creation of real artificial consciousness? 
But on an exponential curve, every point is an inflection point.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;When we identify conscious experience with seemingly human qualities like intelligence and language, we become more likely to see consciousness where it doesn\u2019t exist, and to miss seeing it where it does.&#8221;    <\/div>\n  <\/blockquote>\n<\/figure><p>Finally, there\u2019s the temptation of the techno-rapture. Early in the movie \u201cEx Machina,\u201d the programmer Caleb says to the inventor Nathan: \u201cIf you\u2019ve created a conscious machine \u2014&nbsp;it\u2019s not the history of man, that\u2019s the history of Gods.\u201d If we feel we\u2019re at a techno-historical transition, and we happen to be one of its architects, then the Promethean lure must be hard to resist: the feeling of bringing to humankind that which was once the province of the divine. 
And with this singularity comes the signature rapture offering of immortality: the promise of escaping our inconveniently decaying biological bodies and living (or at least being) forever, floating off to eternity in a silicon-enabled cloud.<\/p><p>Perhaps this is one reason why pronouncements of imminent machine consciousness seem more common within the technorati than outside of it. (More cynically: fueling the idea that there\u2019s something semi-magical about AI may help share prices stay aloft and justify the sky-high salaries and levels of investment now seen in Silicon Valley. Did someone say \u201cbubble\u201d?)<\/p><p>In his book \u201c<a href=\"https:\/\/basicbooks.uk\/titles\/adam-becker\/more-everything-forever\/9781399827904\/\">More Everything Forever,<\/a>\u201d Adam Becker describes the tendency to project consciousness into AI as a form of pareidolia \u2014 the phenomenon of seeing patterns in things, like a face in a piece of toast or Mother Teresa in a cinnamon bun (Figure 1). This is an apt description. But helping you recognize the power of our pareidolia-inducing psychological biases is just the first step in challenging the mythology of conscious AI. 
To address the question of whether real artificial consciousness is even possible, we need to dig deeper.<\/p><!-- Content Image Block Template -->\n<div class=\"\n  content-image\n  content-image--fit_content  \">\n\n  <div class=\"content-image__container\">\n\n    <!-- Main Image -->\n    <div class=\"content-image__main-wrapper\">\n\n              <div class=\"\">\n              <img loading=\"lazy\" decoding=\"async\" width=\"449\" height=\"500\" src=\"https:\/\/noemamag.imgix.net\/2026\/01\/Nun-bun-001-copy.jpg?fm=pjpg&amp;ixlib=php-3.3.1&amp;s=7a8b90d8613d5679138bce14d3a8724a\" class=\"attachment-full size-full\" alt=\"\" srcset=\"https:\/\/noemamag.imgix.net\/2026\/01\/Nun-bun-001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=300&amp;ixlib=php-3.3.1&amp;w=269&amp;wpsize=medium&amp;s=93881d3dd4e2d09fe90d580f2ea0af91 269w, https:\/\/noemamag.imgix.net\/2026\/01\/Nun-bun-001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=1024&amp;ixlib=php-3.3.1&amp;w=920&amp;wpsize=large&amp;s=91644c48b50cec811408e67fa08fb226 920w, https:\/\/noemamag.imgix.net\/2026\/01\/Nun-bun-001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=1536&amp;ixlib=php-3.3.1&amp;w=1379&amp;wpsize=1536x1536&amp;s=355642a648019d85032bfb726bbd2055 1379w, https:\/\/noemamag.imgix.net\/2026\/01\/Nun-bun-001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=2048&amp;ixlib=php-3.3.1&amp;w=1839&amp;wpsize=2048x2048&amp;s=ff1f496f7dafd90ecd0bcbc4f78db8af 1839w, https:\/\/noemamag.imgix.net\/2026\/01\/Nun-bun-001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=1336&amp;ixlib=php-3.3.1&amp;w=1200&amp;wpsize=post-thumbnail&amp;s=7128c75551b3596bbfe5c4f137a080d1 1200w, https:\/\/noemamag.imgix.net\/2026\/01\/Nun-bun-001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=2205&amp;ixlib=php-3.3.1&amp;w=1980&amp;wpsize=twentytwenty-fullscreen&amp;s=1b90ed8800ed03a2601d63c11f588af6 1980w, https:\/\/noemamag.imgix.net\/2026\/01\/Nun-bun-001-copy.jpg?fm=pjpg&amp;ixlib=php-3.3.1&amp;s=7a8b90d8613d5679138bce14d3a8724a 449w\" sizes=\"auto, (max-width: 449px) 100vw, 449px\" \/>        
<div class=\"content-image__overlay content-image__overlay-0\">\n        <\/div>\n        <\/div>\n      <\/div>\n\n      <\/div>\n\n  <div class=\"content-image__captions\">\n        <div class=\"content-image__main-caption\">\n          \n      <figcaption class=\"wp-caption-text\">\n        <div>Figure 1: Mother Teresa in a cinnamon bun. (Public Domain)<\/div>\n      <\/figcaption>\n\n        <\/div>\n    \n      <\/div>\n\n\n<\/div><h2 class=\"wp-block-heading\">Consciousness &amp; Computation<\/h2><p>The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation,<em> <\/em>or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call <a href=\"https:\/\/philpapers.org\/rec\/SHATRA-2\"><em>computational functionalism<\/em><\/a>, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it\u2019s wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we\u2019re familiar with.<\/p><p>Challenging computational functionalism means diving into some deep waters about what computation means and what it means to say that a physical system, like a computer or a brain, computes at all. I\u2019ll summarize four related arguments that undermine the idea that computation, at least of the sort implemented in standard digital computers, is sufficient for consciousness.<\/p><h3 class=\"wp-block-heading\">1: Brains Are Not Computers<\/h3><p>First, and most important, <em>brains are not computers<\/em>. The metaphor of <a href=\"https:\/\/profilebooks.com\/work\/the-idea-of-the-brain\/\">the brain as a carbon-based computer<\/a> has been hugely influential and has immediate appeal: mind as software, brain as hardware. 
It has also been extremely productive, leading to many insights into brain function and to the vast majority of today\u2019s AI. To understand the power and influence of this metaphor, and to grasp its limitations, we need to revisit some pioneers of computer science and neurobiology.<\/p><p>Alan Turing towers above everyone else in this story. <a href=\"https:\/\/www.cs.ox.ac.uk\/activities\/ieg\/e-library\/sources\/t_article.pdf\">Back in the 1950s<\/a>, he seeded the idea that machines might be intelligent, and more than a decade earlier, he formulated <a href=\"https:\/\/www.cs.virginia.edu\/~robins\/Turing_Paper_1936.pdf\">a definition of computation<\/a> that has remained fundamental to our technologies, and to most people\u2019s understanding of what computers are, ever since.<\/p><p>Turing\u2019s definition of computation is extremely powerful and highly (though, as we\u2019ll see, not completely) general. It is based on the abstract concept of a Turing machine: a simple device that reads and writes symbols on an infinite tape according to a set of rules. Turing machines formalize the idea of an <em>algorithm<\/em>: a mapping, via a sequence of steps, from an input (a string of symbols) to an output (another such string); a mathematical recipe, if you like. Turing\u2019s critical contribution was to define what became known as a <em>universal<\/em> Turing machine: another abstract device, but this time capable of simulating any specific Turing machine \u2014 any algorithm \u2014 by taking the description of the target machine as part of its input. This general-purpose capability is one reason why Turing computation is so powerful and so prevalent. 
The laptop computer I\u2019m writing with and the machines in the server farms running the latest AI models are all physical, concrete examples of (or approximations to) universal Turing machines, bounded by physical limitations such as time and memory.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;The very idea of conscious AI rests on the assumption that consciousness is a matter of computation.&#8221;    <\/div>\n  <\/blockquote>\n<\/figure><p>Another major advantage of this framework, from a practical engineering point of view, is the clean separation it licenses between abstract computation (software) and physical implementation (hardware)<span data-note=\"The sharp software\/hardware distinction became a feature of physical computers thanks to the &lt;a href=&quot;https:\/\/www.sciencedirect.com\/topics\/computer-science\/von-neumann-architecture&quot;&gt;von Neumann architecture&lt;\/a&gt;, in which a central processing unit is separated from the memory. This separation enables reprogrammable computation.\" class=\"eos-footnote\">.<\/span> An algorithm (in the sense described above) should do the same thing, no matter what computer it is running on. Turing computation is, in principle,&nbsp;<em>substrate independent<\/em>: it does not depend on any particular material basis. 
In practice, it&#8217;s better described as&nbsp;<em>substrate flexible<\/em>, since you can\u2019t make a viable computer out of any arbitrary material \u2014 cheese, for instance, isn\u2019t up to the job. This substrate flexibility makes Turing computation extremely useful in the real world, which is why computers exist in our phones rather than merely in our minds.<\/p><p>At around the same time that Turing was making his mark, the mathematician Walter Pitts and neurophysiologist Warren McCulloch showed, <a href=\"https:\/\/www.cs.cmu.edu\/~epxing\/Class\/10715\/reading\/McCulloch.and.Pitts.pdf\">in a landmark paper<\/a>, that networks of highly simplified abstract neurons can perform logical operations (Figure 2). Later work, by the logician <a href=\"https:\/\/www.semanticscholar.org\/paper\/Representation-of-Events-in-Nerve-Nets-and-Finite-Kleene\/a496212ca3444e1e14b0668b82e2459d02dc275a\">Stephen Kleene<\/a> among others, demonstrated that artificial neural networks like these, when provided with a tape-like memory (as in the Turing machine), were \u201cTuring complete\u201d \u2014 that they could, in principle, implement any Turing machine, any algorithm.<\/p><!-- Content Image Block Template -->\n<div class=\"\n  content-image\n  content-image--fit_content  \">\n\n  <div class=\"content-image__container\">\n\n    <!-- Main Image -->\n    <div class=\"content-image__main-wrapper\">\n\n              <div class=\"\">\n              <img loading=\"lazy\" decoding=\"async\" width=\"1563\" height=\"879\" src=\"https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fm=pjpg&amp;ixlib=php-3.3.1&amp;s=babce8303901866f753da6be2585e7d0\" class=\"attachment-full size-full\" alt=\"\" srcset=\"https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=169&amp;ixlib=php-3.3.1&amp;w=300&amp;wpsize=medium&amp;s=41994fda6d1f8556cd6adb54ad085889 300w, 
https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=crop&amp;fm=pjpg&amp;h=512&amp;ixlib=php-3.3.1&amp;w=1024&amp;wpsize=noema-social-twitter&amp;s=6b74c71df0d408d7c256b7edcbf41c51 1024w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=432&amp;ixlib=php-3.3.1&amp;w=768&amp;wpsize=medium_large&amp;s=40357961b0bfcccf43d25b0f9a9dee9c 768w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=crop&amp;fm=pjpg&amp;h=511&amp;ixlib=php-3.3.1&amp;w=767&amp;wpsize=noema-listing-tile&amp;s=79e462732cbe19ad9732c2b965aca42c 767w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=675&amp;ixlib=php-3.3.1&amp;w=1200&amp;wpsize=post-thumbnail&amp;s=a4ce1ae488a3fa1d2e99850c1a85eb2b 1200w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=864&amp;ixlib=php-3.3.1&amp;w=1536&amp;wpsize=1536x1536&amp;s=53fbb5c68170035bce860c4dce140ce0 1536w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=337&amp;ixlib=php-3.3.1&amp;w=600&amp;wpsize=woocommerce_single&amp;s=4adb721b8ea4499295e13ed2f6fe7e35 600w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=crop&amp;fm=pjpg&amp;h=1181&amp;ixlib=php-3.3.1&amp;w=2100&amp;wpsize=noema-landscape-hero-image&amp;s=48f2e4d586396dbf5b72314c139b2131 2100w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=1152&amp;ixlib=php-3.3.1&amp;w=2048&amp;wpsize=2048x2048&amp;s=7fe91e706df7001f799063f5e2171e75 2048w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fit=scale&amp;fm=pjpg&amp;h=1114&amp;ixlib=php-3.3.1&amp;w=1980&amp;wpsize=twentytwenty-fullscreen&amp;s=536c2eba6af9517c5b92a794e7165f8c 1980w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig2.001-copy.jpg?fm=pjpg&amp;ixlib=php-3.3.1&amp;s=babce8303901866f753da6be2585e7d0 1563w\" sizes=\"auto, (max-width: 
1563px) 100vw, 1563px\" \/>        <div class=\"content-image__overlay content-image__overlay-0\">\n        <\/div>\n        <\/div>\n      <\/div>\n\n      <\/div>\n\n  <div class=\"content-image__captions\">\n        <div class=\"content-image__main-caption\">\n          \n      <figcaption class=\"wp-caption-text\">\n        <div>Figure 2: A modern version of a McCulloch-Pitts neuron. Input signals X1-X4 are multiplied by weights w, summed up together with a bias (another input) and then passed through an activation function, usually a sigmoid (an S-shaped curve), to give an output Y. This version is similar to the artificial neurons used in contemporary AI. In the original version, the output was either 1 (if the summed, weighted inputs exceeded a fixed threshold) or 0 (if they didn\u2019t). The modifications were introduced to make artificial neural networks easier to train. (Courtesy of Anil Seth)<\/div>\n      <\/figcaption>\n\n        <\/div>\n    \n      <\/div>\n\n\n<\/div><p>Put these ideas together, and we have a mathematical marriage of convenience and influence, and the kind of beauty that accompanies simplicity. On the one hand, we can ignore the messy neurobiological reality of real brains and treat them as simplified networks of abstract neurons, each of which just sums up its inputs and produces an output. On the other hand, when we do this, we get everything that Turing computation has to offer \u2014&nbsp;which is a lot.<\/p><p>The fruits of this marriage are most evident in its children: the artificial neural networks powering today\u2019s AI. These are direct descendants of McCulloch, Pitts and Kleene, and they also implement algorithms in the substrate-flexible Turing sense. It is hardly surprising that the seductive impressiveness of the current wave of AI reinforces the idea that brains are nothing more than carbon-based versions of neural network algorithms.<\/p><p>But here&#8217;s where the trouble starts. 
Inside a brain, there\u2019s no sharp separation between \u201cmindware\u201d and \u201cwetware\u201d as there is between software and hardware in a computer. The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is, compared to the dead sand of silicon.<\/p><p>Brain activity patterns evolve across multiple scales of space and time, ranging from large-scale cortical territories down to the fine-grained details of neurotransmitters and neural circuits, all deeply interwoven with a molecular storm of metabolic activity. Even a single neuron is a spectacularly complicated biological machine, busy maintaining its own integrity and regenerating the conditions and material basis for its own continued existence. (This process is called <a href=\"https:\/\/link.springer.com\/book\/10.1007\/978-94-009-8947-4\"><em>autopoiesis<\/em><\/a>, from the Greek for \u201cself-production.\u201d Autopoiesis is arguably a defining and distinctive characteristic of living systems.)<\/p><p>Unlike computers, even computers running neural network algorithms, brains are the kinds of things for which it is difficult, and likely impossible, to separate <em>what they do <\/em>from <em>what they are.<\/em><\/p><p>Nor is there any good reason to expect such a clean separation. The sharp division between software and hardware in modern computers is imposed by human design, following Turing\u2019s principles. Biological evolution operates under different constraints and with different goals. From the perspective of evolution, there\u2019s no obvious selection pressure for the kind of full separation that would give different brains the perfect interoperability we enjoy between different computers. 
In fact, the opposite is likely true: Maintaining a sharp software\/hardware division is energetically expensive, as is all too apparent these days in the vast energy budgets of modern server farms.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is, compared to the dead sand of silicon.&#8221;    <\/div>\n  <\/blockquote>\n<\/figure><p>This matters because the idea of the brain as a meat-based (universal) Turing machine rests precisely on this sharp separation of scales, on the substrate independence that motivated Turing\u2019s definition in the first place. If you cannot separate what brains do from what they are, the mathematical marriage of convenience starts to fall apart, and there is less reason to think of biological wetware as being there simply to implement algorithmic mindware. 
Evidence that the materiality of the brain matters for its function is evidence against the idea that digital computation is all that counts, which in turn is evidence against computational functionalism.<\/p><p>Another consequence of the deep multiscale integration of real brains \u2014 a property that philosophers sometimes call \u201c<a href=\"https:\/\/link.springer.com\/article\/10.1007\/s11229-022-03524-1\">generative entrenchment<\/a>\u201d \u2014 is that you cannot assume it is possible to replace a single biological neuron with a silicon equivalent, while leaving its function, its input-output behavior,&nbsp;perfectly preserved<span data-note=\"Here I\u2019m appealing to nomological possibility, which means (roughly) possibility given the laws of physics as they are. This, for me, is more interesting and relevant than conceptual possibility, which means (roughly) \u201ccoherently thinkable.\u201d\" class=\"eos-footnote\">.<\/span><\/p><p>For example, the neuroscientists Chaitanya Chintaluri and Tim Vogels <a href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.2306525120\">found<\/a> that some neurons fire spikes of activity apparently to clear waste products created by metabolism. Coming up with a perfect silicon replacement for these neurons would require inventing a whole new silicon-based metabolism, too, which just isn\u2019t the kind of thing silicon is suitable for. The only way to seamlessly replace a biological neuron is with another biological neuron \u2014 and ideally, the same one.<\/p><p>This reveals the weakness of the popular \u201cneural replacement\u201d thought experiment, most <a href=\"https:\/\/consc.net\/papers\/qualia.html\">commonly associated<\/a> with Chalmers, which invites us to imagine progressively replacing brain parts with silicon equivalents that function in exactly the same way as their biological counterparts. 
The supposed conclusion is that properties like cognition and consciousness must be substrate independent (or at least silicon-substrate-flexible). This thought experiment has become a prominent trope in discussions of artificial consciousness, usually invoked to support its possibility. Hinton recently appealed to it in just this way, in an <a href=\"https:\/\/www.youtube.com\/watch?v=vxkBE23zDmQ\">interview<\/a> where he claimed that conscious AI was already with us. But the argument fails at its first hurdle, given the impossibility of replacing any part of the brain with a perfect silicon equivalent<span data-note=\"I confess a general dislike for conceivability thought experiments like this. Their rhetorical power tends to rely on a lack of detailed knowledge about the systems in question, and on appealing to conceptual rather than nomological possibility. The (in)famous zombie argument &lt;a href=&quot;https:\/\/www.anilseth.com\/being-you&quot;&gt;suffers similarly&lt;\/a&gt;.\" class=\"eos-footnote\">.<\/span><\/p><p>There is one more consequence of a deeply scale-integrated brain that is worth mentioning. Digital computers and brains <a href=\"https:\/\/bigthink.com\/neuropsych\/anil-seth-consciousness-time-perception\/\">differ fundamentally<\/a> in how they relate to time. In Turing-world, only sequence matters: A to B, 0 to 1. There could be a microsecond or a million years between any state transition, and it would still be the same algorithm, the same computation.<\/p><p>By contrast, for brains and for biological systems in general, time is physical, continuous and inescapable. Living systems must continuously resist the decay and disorder that lies along the trajectory to entropic sameness mandated by the inviolable second law of thermodynamics. This means that neurobiological activity is anchored in continuous time in ways that algorithms, by design, are not. (This is another reason why digital computation is so energetically expensive. 
Computation exists out of time, but computers do not. Making sure that 1s stay as 1s and 0s stay as 0s takes a lot of energy, because not even silicon can escape the tendrils of entropy.)<\/p><p>What\u2019s more, many researchers \u2014 especially <a href=\"https:\/\/philpapers.org\/rec\/VARTSP\">those in the phenomenological tradition<\/a> \u2014 have long emphasized that conscious experience itself is richly dynamic and inherently temporal. It does not stutter from one state to another; it flows. Abstracting the brain into the arid sequence space of algorithms does justice neither to our biology nor to the phenomenology of the stream of consciousness.<\/p><p>Metaphors are, in the end, just metaphors, and \u2014 as the philosopher Alfred North Whitehead <a href=\"https:\/\/plato.stanford.edu\/entries\/whitehead\/#PhilScie\">pointed out<\/a> long ago \u2014 it\u2019s always dangerous to confuse a metaphor with the thing itself. Looking at the brain through \u201cTuring glasses\u201d underestimates its biological richness and overestimates the substrate flexibility of what it does. When we see the brain for what it really is, the notion that all its multiscale biological activity is simply implementation infrastructure for some abstract algorithmic acrobatics seems rather na\u00efve. 
The brain is not a Turing machine made of meat.<\/p><h3 class=\"wp-block-heading\">2: Other Games In Town<\/h3><p>In the previous section, I noted that Turing computation is powerful but limited. Turing computations \u2014 algorithms \u2014 map one finite range of discrete numbers (more generally, a string of symbols) onto another, with only the sequence mattering. But there are many kinds of dynamics, many other kinds of functions, that go beyond this kind of computation. Turing himself identified various non-computable functions, such as the famous \u201c<a href=\"https:\/\/www.cs.virginia.edu\/~robins\/Turing_Paper_1936.pdf\">halting problem<\/a>,\u201d which is the problem of determining, in general, whether an algorithm, given some specific input, will ever terminate. What\u2019s more, any function that is continuous (infinitely divisible) or stochastic (involving inherent randomness), strictly speaking, lies beyond Turing\u2019s remit. 
(Turing computations can approximate or simulate these properties to varying extents, but that\u2019s different from the claim that such functions <em>are<\/em> Turing computations. I\u2019ll return to this distinction later.)<\/p><p>Biological systems are rife with continuous and stochastic dynamics, and they are deeply embedded in physical time. It seems presumptuous at the very least to assume that only Turing computations matter for consciousness, or indeed for many other aspects of cognition and mind. Electromagnetic fields, the flux of neurotransmitters, and much else besides \u2014 all lie beyond the bounds of the algorithmic, and any one of them may turn out to play a critical role in consciousness.<\/p><p>These limitations encourage us to take a broader view of the brain, moving beyond what I sometimes call \u201cTuring world\u201d to consider how broader forms of computation and dynamics might help explain how brains do what they do. There is a rich history here to draw on, and an exciting future too.<\/p><p>The earliest computers were not digital Turing machines but analogue devices operating in continuous time. The ancient \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/Antikythera_mechanism\">Antikythera mechanism<\/a>,\u201d used for astronomical purposes and dating back to around the second century BCE, is an excellent example. 
Analogue computers were again prominent at the birth of AI in the 1950s, in the guise of the long-neglected discipline of <a href=\"https:\/\/mitpress.mit.edu\/9780262512398\/on-the-origins-of-cognitive-science\/\">cybernetics<\/a>, where issues of control and regulation of a system are considered more important than abstract symbol manipulation.<\/p><p>Recently, there\u2019s been a resurgence in <a href=\"https:\/\/www.nature.com\/articles\/s41928-020-0448-2\">neuromorphic computation<\/a>, which draws on more detailed properties of neural systems, such as the precise timing of neuronal spikes, than are captured by the cartoon-like simulated neurons that dominate current artificial neural network approaches. And then there\u2019s the relatively new concept of \u201cmortal computation\u201d (<a href=\"https:\/\/arxiv.org\/abs\/2212.13345\">introduced by Hinton<\/a>), which stresses the potential for energy saving offered by developing algorithms that are inseparably tied to their material substrates, so that they (metaphorically) die when their particular implementation ceases to exist. All these alternative forms of computation are more closely tied to their material basis \u2014 are less substrate-flexible \u2014 than standard digital computation.<\/p><!-- Content Image Block Template -->\n<div class=\"\n  content-image\n  content-image--fit_content  \">\n\n  <div class=\"content-image__container\">\n\n    <!-- Main Image -->\n    <div class=\"content-image__main-wrapper\">\n\n              <div class=\"\">\n              <img loading=\"lazy\" decoding=\"async\" width=\"1003\" height=\"793\" 
src=\"https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fm=png&amp;ixlib=php-3.3.1&amp;s=28fcebf8bbc6393d40ad9b2637cbdc32\" class=\"attachment-full size-full\" alt=\"\" srcset=\"https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fit=scale&amp;fm=png&amp;h=237&amp;ixlib=php-3.3.1&amp;w=300&amp;wpsize=medium&amp;s=182bf9c875b87b5b452da8908e02812e 300w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fit=scale&amp;fm=png&amp;h=607&amp;ixlib=php-3.3.1&amp;w=768&amp;wpsize=medium_large&amp;s=eb7b87f455787e5a65ecf5ee6a42461f 768w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fit=scale&amp;fm=png&amp;h=949&amp;ixlib=php-3.3.1&amp;w=1200&amp;wpsize=post-thumbnail&amp;s=1d8314be2ca7771829ea998826f1e1f0 1200w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fit=scale&amp;fm=png&amp;h=810&amp;ixlib=php-3.3.1&amp;w=1024&amp;wpsize=large&amp;s=1157cc42874fba1cd64531fc28c9b6d8 1024w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fit=scale&amp;fm=png&amp;h=474&amp;ixlib=php-3.3.1&amp;w=600&amp;wpsize=woocommerce_single&amp;s=9f7c6dafeb01ad9c78624480efae72b8 600w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fit=scale&amp;fm=png&amp;h=1214&amp;ixlib=php-3.3.1&amp;w=1536&amp;wpsize=1536x1536&amp;s=b720bef9d7a42a48db6f2522566b0701 1536w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fit=scale&amp;fm=png&amp;h=1619&amp;ixlib=php-3.3.1&amp;w=2048&amp;wpsize=2048x2048&amp;s=1214beb082f508d9a0c8c7a0b41ac7d0 2048w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fit=scale&amp;fm=png&amp;h=1565&amp;ixlib=php-3.3.1&amp;w=1980&amp;wpsize=twentytwenty-fullscreen&amp;s=80780c7c95e23a99b4f3f5c3bee7b1c9 1980w, https:\/\/noemamag.imgix.net\/2026\/01\/Seth_BG_Fig3_revised.png?fm=png&amp;ixlib=php-3.3.1&amp;s=28fcebf8bbc6393d40ad9b2637cbdc32 1003w\" sizes=\"auto, (max-width: 1003px) 100vw, 1003px\" \/>        <div 
class=\"content-image__overlay content-image__overlay-0\">\n        <\/div>\n        <\/div>\n      <\/div>\n\n      <\/div>\n\n  <div class=\"content-image__captions\">\n        <div class=\"content-image__main-caption\">\n          \n      <figcaption class=\"wp-caption-text\">\n        <div>Figure 3: The Watt Governor. It\u2019s not a computer. (R. Routledge\/Wikimedia)<\/div>\n      <\/figcaption>\n\n        <\/div>\n    \n      <\/div>\n\n\n<\/div><p>Many systems do what they do without it being reasonable or useful to describe them as being computational at all. Three decades ago, the cognitive scientist Tim van Gelder gave an <a href=\"https:\/\/philpapers.org\/rec\/GELWMC\">influential example<\/a>, in the form of the governor of a steam engine (Figure 3). These governors regulate steam flow through an engine using simple mechanics and physics: as engine speed increases, two heavy cantilevered balls swing outwards, which in turn closes a valve, reducing steam flow. A \u201ccomputational governor,\u201d sensing engine speed, calculating the necessary actions and then sending precise motor signals to switch actuators on or off, would not only be hopelessly inefficient but would betray a total misunderstanding of what\u2019s really going on.<\/p><p>The branch of cognitive science generally known as \u201cdynamical systems,\u201d as well as approaches that emphasize enactive, embodied, embedded and extended aspects of mind (so-called <a href=\"https:\/\/academic.oup.com\/edited-volume\/28083\">4E cognitive science<\/a>), all reject, in ways relating to van Gelder\u2019s insight, the idea that mind and brain can be exhaustively accounted for algorithmically. They all explore alternatives based on the mathematics of continuous, dynamical processes \u2014 involving concepts such as <a href=\"https:\/\/direct.mit.edu\/books\/monograph\/2589\/Dynamical-Systems-in-NeuroscienceThe-Geometry-of\">attractors, phase spaces and so on<\/a>. 
It is at least plausible that those aspects of brain function necessary for consciousness also depend on non-computational processes like these, or perhaps on some <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0149763425005251?via%3Dihub\">broader notion of computation<\/a><span data-note=\"Where to draw the boundary between computational and non-computational depends on one\u2019s definition of computation, which remains &lt;a href=&quot;https:\/\/global.oup.com\/academic\/product\/the-physical-signature-of-computation-9780198833642&quot;&gt;hotly debated&lt;\/a&gt;. As mentioned in the main text, broader definitions tend to trade off against substrate flexibility. At the extreme, &lt;a href=&quot;https:\/\/philarchive.org\/rec\/WHEIPQ&quot;&gt;some physicists&lt;\/a&gt; argue that everything is computational; that computation is a (or the) fundamental property of the universe. If this is true, then computational functionalism is true trivially, but we are left none the wiser regarding the possibility of conscious AI, since a chair (for example) is also a matter of computation. 
See also note 6.\" class=\"eos-footnote\">.<\/span><\/p><p>These other games in town are all still compatible with what in philosophy is known as <a href=\"https:\/\/plato.stanford.edu\/entries\/functionalism\/\"><em>functionalism<\/em><\/a>: the idea that properties of mind (including consciousness) depend on the functional organization of the (embodied) brain. 
One of the factors contributing to confusion in this area has been a tendency to conflate the rather liberal position of functionalism-in-general (functional organization can include many things) with the very specific claim of <em>computational <\/em>functionalism, which implies that the type of organization that matters is computational and which in turn is <a href=\"https:\/\/philpapers.org\/rec\/SHATRA-2\">often assumed<\/a> to relate to Turing-style algorithms in particular.<\/p><p>The challenge for machine consciousness here is that the further we venture from Turing world, the more deeply entangled we become in randomness, dynamics and entropy, and the more deeply tied we are to the properties of a particular material substrate. The question is no longer about which algorithms give rise to consciousness; it\u2019s about how brain-like a system has to be to move the needle on its potential to be conscious.<\/p><h3 class=\"wp-block-heading\">3: Life Matters<\/h3><p>My third argument is that <em>life (probably) matters<\/em>. This is the idea \u2014 called <em>biological naturalism<\/em> <a href=\"https:\/\/philpapers.org\/rec\/SEABN-2\">by the philosopher John Searle<\/a> \u2014 that properties of life are necessary, though not necessarily sufficient, for consciousness. I should say upfront that I don\u2019t have a knock-down argument for this position, nor do I think any such argument yet exists. But it is worth taking seriously, if only for the simple reason mentioned earlier: every candidate for consciousness that most people currently agree on as actually being conscious is also alive.<\/p><p>Why might life matter for consciousness? 
There\u2019s more to say here than will fit in this essay (I wrote an entire book, \u201c<a href=\"https:\/\/www.anilseth.com\/being-you\/\">Being You<\/a>,\u201d and a recent <a href=\"https:\/\/philpapers.org\/rec\/SETCAI-4\">research paper<\/a> on the subject), but one way of thinking about it goes like this.<\/p><p>The starting point is the idea that what we consciously perceive depends on the brain\u2019s best guesses about what\u2019s going on in the world, rather than on a direct readout of sensory inputs. This derives from influential <a href=\"https:\/\/www.cambridge.org\/core\/journals\/behavioral-and-brain-sciences\/article\/whatever-next-predictive-brains-situated-agents-and-the-future-of-cognitive-science\/33542C736E17E3D1D44E8D03BE5F4CD9\">predictive processing theories<\/a> that understand the brain as continually explaining away its sensory inputs by updating predictions about their causes. In this view, sensory signals are interpreted as prediction errors, reporting the difference between what the brain expects and what it gets at each level of its perceptual hierarchies, and the brain is continually minimizing these prediction errors everywhere and all the time.<\/p><p>Conscious experience in this light is a kind of <a href=\"https:\/\/aeon.co\/essays\/the-hard-problem-of-consciousness-is-a-distraction-from-the-real-one\"><em>controlled hallucination<\/em><\/a>: a top-down inside-out perceptual inference in which the brain\u2019s predictions about what\u2019s going on are continually calibrated by sensory signals coming from the bottom-up (or outside-in).<\/p><!-- Content Image Block Template -->\n<div class=\"\n  content-image\n  content-image--fit_content  \">\n\n  <div class=\"content-image__container\">\n\n    <!-- Main Image -->\n    <div class=\"content-image__main-wrapper\">\n\n              <div class=\"\">\n              <img loading=\"lazy\" decoding=\"async\" width=\"1758\" height=\"1238\" 
src=\"https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fm=png&amp;ixlib=php-3.3.1&amp;s=1bdb44c17980209cc1becd4fe34fedf8\" class=\"attachment-full size-full\" alt=\"\" srcset=\"https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fit=scale&amp;fm=png&amp;h=211&amp;ixlib=php-3.3.1&amp;w=300&amp;wpsize=medium&amp;s=9152ab4e9e0536c45949cc574431f1a7 300w, https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fit=crop&amp;fm=png&amp;h=512&amp;ixlib=php-3.3.1&amp;w=1024&amp;wpsize=noema-social-twitter&amp;s=63490e9078bbf75b1c52d0a51248e4ba 1024w, https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fit=scale&amp;fm=png&amp;h=541&amp;ixlib=php-3.3.1&amp;w=768&amp;wpsize=medium_large&amp;s=525152a40460808030dabbdd526a0b2f 768w, https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fit=scale&amp;fm=png&amp;h=845&amp;ixlib=php-3.3.1&amp;w=1200&amp;wpsize=post-thumbnail&amp;s=a0f350391e2dfa2ae3722f2460193523 1200w, https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fit=scale&amp;fm=png&amp;h=1082&amp;ixlib=php-3.3.1&amp;w=1536&amp;wpsize=1536x1536&amp;s=cd70dc48d14f6c03ae25074fb566cb09 1536w, https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fit=scale&amp;fm=png&amp;h=423&amp;ixlib=php-3.3.1&amp;w=600&amp;wpsize=woocommerce_single&amp;s=b68fef4aace61265073b97fd11245b7f 600w, https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fit=scale&amp;fm=png&amp;h=1442&amp;ixlib=php-3.3.1&amp;w=2048&amp;wpsize=2048x2048&amp;s=4c24cd2b13c989fb2da4b3a9d6de7c8b 2048w, https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fit=scale&amp;fm=png&amp;h=1394&amp;ixlib=php-3.3.1&amp;w=1980&amp;wpsize=twentytwenty-fullscreen&amp;s=a47d6c09a6f12a22253bea4515dffdb3 1980w, https:\/\/noemamag.imgix.net\/2026\/01\/Fig-4.png?fm=png&amp;ixlib=php-3.3.1&amp;s=1bdb44c17980209cc1becd4fe34fedf8 1758w\" sizes=\"auto, (max-width: 1758px) 100vw, 1758px\" \/>        <div class=\"content-image__overlay content-image__overlay-0\">\n        <\/div>\n        <\/div>\n      <\/div>\n\n      <\/div>\n\n  <div 
class=\"content-image__captions\">\n        <div class=\"content-image__main-caption\">\n          \n      <figcaption class=\"wp-caption-text\">\n        <div>Figure 4: Perception as controlled hallucination. The conscious experience of a coffee cup is underpinned by the content of the brain\u2019s predictions (grey arrows) of the causes of sensory inputs (black arrows). (Courtesy of Anil Seth)<\/div>\n      <\/figcaption>\n\n        <\/div>\n    \n      <\/div>\n\n\n<\/div><p>This kind of perceptual best-guessing underlies not only experiences of the world, but experiences of being a self, too \u2014 experiences of being the subject of experience. A good example is how we perceive the body, both as an object in the world and as the source of more fundamental aspects of selfhood, such as emotion and mood. Both these aspects of selfhood can be understood as forms of perceptual best-guessing: inferences about what is, and what is not, part of the body, and inferences about the body\u2019s internal physiological condition (the latter is sometimes called \u201c<a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1364661313002118\">interoceptive inference<\/a>\u201d; <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0959438803000904\">interoception<\/a> refers to perception of the body from within).<\/p><p>Perceptual predictions are good not only for figuring out what\u2019s going on, but (in a call back to mid-20th century cybernetics) also for control and regulation: When you can <em>predict <\/em>something, you can also <em>control<\/em> it. This applies above all to predictions about the body\u2019s physiological condition. This is because the primary duty of any brain is to keep its body alive, to keep physiological quantities like heart rate and blood oxygenation where they need to be. 
This, in turn, <a href=\"https:\/\/open-mind.net\/papers\/the-cybernetic-bayesian-brain\">helps explain<\/a> why embodied experiences feel the way they do.<\/p><p>Experiences of emotion and mood, unlike vision (for example), are characterized primarily by valence \u2014 by things generally going well or going badly.<\/p><p>This drive to stay alive doesn\u2019t bottom out anywhere in particular. It reaches deep into the interior of each cell, into the molecular furnaces of metabolism. Within these whirls of metabolic activity, the ubiquitous process of prediction error minimization becomes inseparable from the materiality of life itself. A mathematical line can be drawn directly from the self-producing, autopoietic nature of biological material all the way to the Bayesian best-guessing that underpins our perceptual experiences of the world and of the self<span data-note=\"The line may be direct, but it\u2019s not easy to trace. 
One way to draw it is via the somewhat infamous \u201c&lt;a href=&quot;https:\/\/www.nature.com\/articles\/nrn2787&quot;&gt;free energy principle&lt;\/a&gt;,\u201d which proposes that for things (like living systems) to actively maintain their existence, they must exist in states they statistically expect to be in (for example, a fish expects to be in water). This entails minimizing thermodynamic free energy, which is analogous to, and perhaps continuous with, information-theoretic free energy, which in turn is closely related to prediction error in predictive processing.\" class=\"eos-footnote\">.<\/span><\/p><p>Several lines of thought now converge. First, we have the glimmers of an explanatory connection between life and consciousness. Conscious experiences of emotion, mood and even the basal <em>feeling of being alive<\/em> all map neatly onto perceptual predictions involved in the control and regulation of bodily condition. Second, the processes underpinning these perceptual predictions are deeply, and perhaps inextricably, rooted in our nature as biological systems, as self-regenerating storms of life resisting the pull of entropic sameness. And third, all of this is non-computational, or at least non-algorithmic. The minimization of prediction error in real brains and real bodies is a continuous dynamical process that is likely inseparable from its material basis, rather than a meat-implemented algorithm existing in a pristine universe of symbol and sequence<span data-note=\"&lt;a href=&quot;https:\/\/mitpress.mit.edu\/9780262554091\/what-is-life\/&quot;&gt;Some argue&lt;\/a&gt; that life is inherently computational too, appealing (for example) to von Neumann\u2019s \u201cUniversal Constructor\u201d definition of computation, which emphasizes self-replication. This is dubious. 
As the philosopher Gualtiero Piccinini &lt;a href=&quot;https:\/\/global.oup.com\/academic\/product\/neurocognitive-mechanisms-9780198866282?cc=gb&amp;lang=en&amp;&quot;&gt;points out&lt;\/a&gt;, abstract mathematical propositions like the Universal Constructor (or the &lt;a href=&quot;https:\/\/plato.stanford.edu\/entries\/church-turing\/&quot;&gt;Church-Turing thesis&lt;\/a&gt;) lack the power to determine which real-world physical processes count as computation. Moreover, even if the concept of computation is broadened to include processes like metabolism and autopoiesis, the corresponding computations would lack the substrate flexibility to be implementable in silico. Conscious AI would still be out of reach. See also note 4.\" class=\"eos-footnote\">.<\/span><\/p><p>Put all this together, and a picture begins to form: We experience the world around us and ourselves within it \u2014 with, through and <em>because of<\/em> our living bodies. Perhaps it is life, rather than information processing, that breathes fire into the equations of experience.<\/p><h3 class=\"wp-block-heading\">4: Simulation Is Not Instantiation<\/h3><p>Finally, <em>simulation <\/em>is not <em>instantiation<\/em>. One of the most powerful capabilities of universal, Turing-based computers is that they can simulate a vast range of phenomena \u2014 even, and perhaps especially, phenomena that aren\u2019t themselves (digitally) computational, such as continuous and random processes.<\/p><p>But we should not confuse the map with the territory, or the model with the mechanism. An algorithmic simulation of a continuous process is just that \u2014 a simulation, not the process itself.<\/p><p>Computational simulations generally lack the causal powers and intrinsic properties of the things being simulated. A simulation of the digestive system does not actually digest anything. A simulation of a rainstorm does not make anything actually wet. If we simulate a living creature, we have not created life. 
In general, a computational simulation of X does not bring X into being \u2014 does not <em>instantiate <\/em>X \u2014 unless X is a computational process (specifically, an algorithm) itself. Making the point from the other direction, the fact that X can be simulated computationally does not justify the conclusion that X is itself computational.<\/p><p>In most cases, the distinction between simulation and instantiation is obvious and uncontroversial. It should be obvious and uncontroversial for consciousness, too. A computational simulation of the brain (and body), however detailed it may be, will only give rise to consciousness <em>if consciousness is a matter of computation<\/em>. In other words, the prospect of instantiating consciousness through some kind of whole-brain emulation, at some arbitrarily high level of detail, already assumes that computational functionalism is true. But as I have argued, this assumption is likely wrong and certainly should not be accepted axiomatically.<\/p><p>This brings us back to the poverty of the brain-as-computer metaphor. If you think that everything that matters about brains can be captured by abstract neural networks, then it\u2019s natural to think that simulating the brain on a digital computer will instantiate all its properties, including consciousness, since in this case, everything that matters is, by assumption, algorithmic. 
This is the \u201cTuring world\u201d view of the brain.<\/p><p>If, instead, you are intrigued by more detailed brain models that capture the complexities of individual neurons and other fine-grained biophysical processes, then it really ought to be less natural to assume that simulating the brain will realize all its properties, since these more detailed models are interesting precisely because they suggest that things other than Turing computation likely matter too.<\/p><p>There is, therefore, something of a contradiction lurking for those who invest their dreams and their venture capital in the prospect of uploading their conscious minds into exquisitely detailed simulations of their brains, so that they can exist forever in silicon rapture. If an exquisitely detailed brain model is needed, then you are no more likely to exist in the simulation than a hailstorm is likely to arise inside the computers of the U.K. meteorological office.<\/p><p>But buckle up. What if <em>everything<\/em> is a simulation already? 
What if our whole universe \u2014 including the billions of bodies, brains and minds on this planet, as well as its hailstorms and weather forecasting computers \u2014 is just an assemblage of code fragments in an advanced computer simulation created by our technologically godlike and genealogically obsessed descendants?<\/p><p>This is the \u201c<a href=\"https:\/\/philpapers.org\/rec\/BOSAWL\">simulation hypothesis<\/a>,\u201d associated most closely with the philosopher Nick Bostrom, and still, somehow, an influential idea among the technorati.<\/p><p>Bostrom notes that simulations like this, if they have been created, ought to be much more numerous than the original \u201cbase reality,\u201d which in turn suggests that we may be more likely to exist within a simulation than within reality itself. He marshals various statistical arguments to flesh out this idea. But it is telling that he notes one necessary assumption, and then just takes it as a given. This, perhaps unsurprisingly, is the assumption that \u201ca computer running a suitable program would be conscious\u201d (see page 2 of <a href=\"https:\/\/simulation-argument.com\/simulation.pdf\">his paper<\/a>). If this assumption doesn\u2019t hold, then the simple fact that we are conscious would rule out that we exist in a simulation. That this strong assumption is taken on board without examination in a philosophical discussion that is all about the validity of assumptions is yet another indication of how deeply ingrained the computational view of mind and brain has become. 
It is also a sign of the existential mess we get ourselves into when we fail to distinguish our models of reality from reality itself<span data-note=\"Bostrom himself is noncommittal about whether we in fact do live in a simulation, assigning his hypothesis a \u201c&lt;a href=&quot;https:\/\/simulation-argument.com\/faq\/&quot;&gt;substantial probability&lt;\/a&gt;.\u201d\" class=\"eos-footnote\">.<\/span><\/p><hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/><p>Let\u2019s summarize. Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines.<\/p><p>Computational functionalism \u2014 the claim that (algorithmic) computation is sufficient for consciousness \u2014 is a very strong assumption that looks increasingly shaky as the many and deep differences between brains and (standard digital) computers come into view. There are plenty of other technologies (e.g., neuromorphic computing, synthetic biology) and frameworks for understanding the brain (e.g., dynamical systems theory), which go beyond the strictly algorithmic. In each case, the further one gets from Turing world, the less plausible it is that the relevant properties can be abstracted away from their underlying material basis.<\/p><p>One possibility, motivated by connecting predictive processing views of perception with physiological regulation and metabolism, is that consciousness is deeply tied to our nature as biological, living creatures.<\/p><p>Finally, simulating the biological mechanisms of consciousness computationally, at whatever grain of detail you might choose, will not give rise to consciousness unless computational functionalism happens anyway to be true.<\/p><p>Each of these lines of argument can stand up by itself. You might favor the arguments against computational functionalism while remaining unpersuaded about the merits of biological naturalism. 
Distinguishing between simulation and instantiation doesn\u2019t depend on taking account of our cognitive biases. But taken together, they complement and strengthen each other. Questioning computational functionalism reinforces the importance of distinguishing simulation from instantiation. The availability of other technologies and frameworks beyond Turing-style algorithmic computation opens space for the idea that life might be necessary for consciousness.<\/p><p>Collectively, these arguments make the case that consciousness is very unlikely to simply come along for the ride as AI gets smarter, and that achieving it may well be impossible for AI systems in general, at least for the silicon-based digital computers we are familiar with.<\/p><p>At the same time, nothing in what I\u2019ve said rules out the possibility of artificial consciousness altogether.<\/p><p>Given all this, what should we do?<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      \u201cMany social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines.\u201d    <\/div>\n\n  <\/blockquote>\n<\/figure><h2 class=\"wp-block-heading\">What (Not) To Do?<\/h2><p>When it comes to consciousness, the fact of the matter matters. 
And not only because of the mythology of ancestor simulations, mind-uploading and the like. Things capable of conscious experiences have ethical and moral standing that other things do not. At least, claims to this kind of <a href=\"https:\/\/wwnorton.co.uk\/books\/9781324064800-the-moral-circle\">moral consideration<\/a> are more straightforward when they are grounded in the capacity for consciousness.<\/p><p>This is why thinking clearly about the prospects for real artificial consciousness is of vital importance in the here and now. I\u2019ve made a case against conscious AI, but I might be wrong. The biological naturalist position (whether my version or any other) remains a minority view. <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.aan8871\">Other theories<\/a> of consciousness propose accounts framed in terms of standard computation-as-we-know-it. These theories generally avoid proposing sufficient conditions for consciousness. They also generally sidestep defending computational functionalism, being content instead to assume it.<\/p><p>But this doesn\u2019t mean they are wrong. All <a href=\"https:\/\/www.nature.com\/articles\/s41583-022-00587-4\">theories of consciousness<\/a> are fraught with uncertainty, and anyone who claims to know <em>for sure <\/em>what it would take to create real artificial consciousness, or <em>for sure <\/em>what it would take to avoid doing so, is overstepping what can reasonably be said.<\/p><p>This uncertainty lands us in a difficult position. As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an <a href=\"https:\/\/www.philosophie.fb05.uni-mainz.de\/files\/2021\/02\/Metzinger_Moratorium_JAIC_2021.pdf\">ethical disaster<\/a>. 
We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at a potentially exponential pace. And if we give these systems <a href=\"https:\/\/ufair.org\/\">rights<\/a>, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to.<\/p><p>Even if I\u2019m right that standard digital computers aren\u2019t up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in <a href=\"https:\/\/www.cell.com\/trends\/neurosciences\/fulltext\/S0166-2236(19)30216-4\">cerebral organoids<\/a> (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLMs.<\/p><p>But our worries don\u2019t stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that <em>are actually<\/em> conscious and those that persuasively <em>seem to be<\/em> conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer.<\/p><p>As the Google engineer Lemoine demonstrated, for some of us, such conscious-seeming systems are already here. Machines that seem conscious <a href=\"https:\/\/philpapers.org\/rec\/SETCAI-4\">pose serious ethical issues<\/a> distinct from those posed by actually conscious machines.<\/p><p>For example, we might give AI systems \u201crights\u201d that they don\u2019t actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. 
More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be.<\/p><p>The dangers of conscious-seeming AI are starting to be noticed by leading figures in AI, including <a href=\"https:\/\/mustafa-suleyman.ai\/seemingly-conscious-ai-is-coming\">Mustafa Suleyman<\/a> (CEO of Microsoft AI) and <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.adn4935\">Yoshua Bengio<\/a>, but this doesn\u2019t mean the problem is in any sense under control.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      \u201cIf we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to.\u201d    <\/div>\n\n  <\/blockquote>\n<\/figure><p>One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions. 
The two lines in the M\u00fcller-Lyer illusion (Figure 5) are the same length, but they will always look different. It doesn\u2019t matter how many times you encounter the illusion; you cannot think your way out of it. The way we feel about AI being conscious might be similarly impervious to what we think or understand about AI consciousness.<\/p><!-- Content Image Block Template -->\n<div class=\"\n  content-image\n  content-image--fit_content  \">\n\n  <div class=\"content-image__container\">\n\n    <!-- Main Image -->\n    <div class=\"content-image__main-wrapper\">\n\n              <div class=\"\">\n              <img loading=\"lazy\" decoding=\"async\" width=\"2100\" height=\"1181\" src=\"https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fm=pjpg&amp;ixlib=php-3.3.1&amp;s=3a2bee71747ccfb7cb649f646f618f3d\" class=\"attachment-full size-full\" alt=\"\" srcset=\"https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fit=scale&amp;fm=pjpg&amp;h=169&amp;ixlib=php-3.3.1&amp;w=300&amp;wpsize=medium&amp;s=0c75b06c254c8859c7486c335f81a2ee 300w, https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fit=crop&amp;fm=pjpg&amp;h=512&amp;ixlib=php-3.3.1&amp;w=1024&amp;wpsize=noema-social-twitter&amp;s=8d1f69a229d94a865f47d58637783fa6 1024w, https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fit=scale&amp;fm=pjpg&amp;h=432&amp;ixlib=php-3.3.1&amp;w=768&amp;wpsize=medium_large&amp;s=9e69715187ca154a9d8f4435e9066184 768w, https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fit=crop&amp;fm=pjpg&amp;h=511&amp;ixlib=php-3.3.1&amp;w=767&amp;wpsize=noema-listing-tile&amp;s=83a2887324b12d0d515acd32bd70a0e5 767w, https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fit=scale&amp;fm=pjpg&amp;h=675&amp;ixlib=php-3.3.1&amp;w=1200&amp;wpsize=post-thumbnail&amp;s=4a7c2082d6e38d7ad203c8b0b18db951 1200w, https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fit=scale&amp;fm=pjpg&amp;h=864&amp;ixlib=php-3.3.1&amp;w=1536&amp;wpsize=1536x1536&amp;s=b924499b2239ccaa52f651874b383ffb 1536w, 
https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fit=scale&amp;fm=pjpg&amp;h=1152&amp;ixlib=php-3.3.1&amp;w=2048&amp;wpsize=2048x2048&amp;s=4f94bdfb4c25a6352378c43d9b94912c 2048w, https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fit=scale&amp;fm=pjpg&amp;h=1114&amp;ixlib=php-3.3.1&amp;w=1980&amp;wpsize=twentytwenty-fullscreen&amp;s=fc2985044ff2386656e9b76a92c98aa1 1980w, https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fit=scale&amp;fm=pjpg&amp;h=337&amp;ixlib=php-3.3.1&amp;w=600&amp;wpsize=woocommerce_single&amp;s=49b96b3fc5a69de89a09b8c91c9a70d5 600w, https:\/\/noemamag.imgix.net\/2026\/01\/anil.jpg?fm=pjpg&amp;ixlib=php-3.3.1&amp;s=3a2bee71747ccfb7cb649f646f618f3d 2100w\" sizes=\"auto, (max-width: 2100px) 100vw, 2100px\" \/>        <div class=\"content-image__overlay content-image__overlay-0\">\n        <\/div>\n        <\/div>\n      <\/div>\n\n      <\/div>\n\n  <div class=\"content-image__captions\">\n        <div class=\"content-image__main-caption\">\n          \n      <figcaption class=\"wp-caption-text\">\n        <div>Figure 5: The M\u00fcller-Lyer illusion. The two lines are the same length. (Courtesy of Anil Seth)<\/div>\n      <\/figcaption>\n\n        <\/div>\n    \n      <\/div>\n\n\n<\/div><p>What\u2019s more, because there\u2019s no consensus over the necessary or sufficient conditions for consciousness, there aren\u2019t any definitive <a href=\"https:\/\/www.cell.com\/trends\/cognitive-sciences\/fulltext\/S1364-6613(24)00010-X\">tests<\/a> for deciding whether an AI is actually conscious. The plot of \u201cEx Machina\u201d<em> <\/em>revolves around exactly this dilemma. Riffing on the famous Turing test (which, as Turing well knew, tests for machine intelligence, not consciousness), Nathan \u2014 the creator of the robot Ava \u2014 says that the \u201creal test\u201d is to reveal that his creation is a machine, and to see whether Caleb \u2014 the stooge \u2014 still feels that it, or she, is conscious. 
The \u201cGarland test,\u201d <a href=\"https:\/\/aeon.co\/essays\/beyond-humans-what-other-kinds-of-minds-might-be-out-there\">as it\u2019s come to be known<\/a>, is not a test of machine consciousness itself. It is a test of what it takes for a human to be persuaded that a machine is conscious.<\/p><p>The importance of taking an informed ethical position despite all these uncertainties spotlights another human habit: our unfortunate track record of withholding moral status from those that deserve it, including from many non-human animals, and sometimes other humans. It is reasonable to wonder whether withholding attributions of consciousness to AI may leave us once again on the wrong side of history. The recent calls for attention to \u201c<a href=\"https:\/\/arxiv.org\/abs\/2411.00986v1\">AI welfare<\/a>\u201d are based largely on this worry.<\/p><p>But there are good reasons why the situation with AI is likely to be different. Our psychological biases are more likely to lead to false positives than false negatives. Compared to non-human animals, AI systems, for all their apparent wonders, may be more similar to us in ways that do not matter for consciousness, like linguistic ability, and less similar in ways that do, like being alive.<\/p><h2 class=\"wp-block-heading\">Soul Machine<\/h2><p>Despite the hype and the hubris, there\u2019s no doubt that AI is transforming society. It will be hard enough to navigate the clear and obvious challenges AI poses, and to take proper advantage of its many benefits, without the additional confusion generated by immoderate pronouncements about a coming age of conscious machines. Given the pace of change in both the technology itself and in its public perception, developing a clear view of the prospects and pitfalls of conscious AI is both essential and urgent.<\/p><p>Real artificial consciousness would change everything \u2014 and very much for the worse. 
Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines. My hope for this essay is that it offers some tools for thinking through these challenges, some defenses against overconfident claims about inevitability or outright impossibility, and some hope for our own human, animal, biological nature. And hope for our future too.<\/p><p>The future history of AI is not yet written. There is no inevitability to the directions AI might yet take. To think otherwise is to be overly constrained by our conceptual inheritance, weighed down by the baggage of bad science fiction and submissive to the self-serving narrative of tech companies laboring to make it to the next financial quarter. Time is short, but collectively we can still decide which kinds of AI we really want and which we really don\u2019t.<\/p><p>The philosopher Shannon Vallor describes <a href=\"https:\/\/global.oup.com\/academic\/product\/the-ai-mirror-9780197759066?cc=gb&amp;lang=en&amp;\">AI as a mirror<\/a>, reflecting back to us the incident light of our digitized past. We see ourselves in our algorithms, but we also see our algorithms in ourselves. This mechanization of the mind is perhaps the most pernicious near-term consequence of the <a href=\"https:\/\/www.noemamag.com\/the-danger-of-superhuman-ai-is-not-what-you-think\/\">unseemly rush<\/a> toward human-like AI. If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves.<\/p><p>Perhaps unexpectedly, this brings me at last to the soul. 
For many people, especially modern people of science and reason, the idea of the soul might seem as outmoded as the Stone Age. And if by soul what is meant is an immaterial essence of rationality and consciousness, perfectly separable from the body, then this isn\u2019t a terrible take.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      \u201cTime is short, but collectively we can still decide which kinds of AI we really want and which we really don\u2019t.\u201d    <\/div>\n\n  <\/blockquote>\n<\/figure><p>But there are other games in town here, too. Long before Descartes, the Greek concept of <a href=\"https:\/\/www.rep.routledge.com\/articles\/thematic\/psyche\/v-1\">psych\u0113<\/a> linked the idea of a soul to breath<span data-note=\"As did the Latin &lt;i&gt;anima&lt;\/i&gt;, a linguistic ancestor to the Christian \u201csoul,\u201d while the Hebrew &lt;i&gt;nephesh&lt;\/i&gt; can be understood as a kind of \u201clife force\u201d permeating the whole living being. 
See George Makari\u2019s \u201c&lt;a href=&quot;https:\/\/wwnorton.co.uk\/books\/9780393059656-soul-machine&quot;&gt;Soul Machine&lt;\/a&gt;\u201d for a vivid history.\" class=\"eos-footnote\">,<\/span> while on the other side of the world, the Hindu expression of soul, or <em>\u0100tman<\/em>, associated our innermost essence with the ground-state of all experience, unaffected by rational thought or by any other specific conscious content, a pure witnessing awareness.<\/p><p>The cartoon dreams of a silicon rapture, with its tropes of mind uploading, of disembodied eternal existence and of cloud-based reunions with other chosen ones, are a regression to the Cartesian soul. Computers, or more precisely <em>computations<\/em>, are, after all, immortal, and the sacrament of the algorithm promises a purist rationality, untainted by the body (despite <a href=\"https:\/\/www.penguin.co.uk\/books\/391857\/descartes-error-by-damasio-antonio\/9780099501640\">plentiful evidence<\/a> linking reason to emotion). But these are likely to be empty dreams, delivering not posthuman paradise but silicon oblivion.<\/p><p>What really matters is not this kind of soul. Not any disembodied human-exceptionalist undying essence of you or of me. Perhaps what makes us <em>us <\/em>harks even further back, to Ancient Greece and to the plains of India, where our innermost essence arises as an inchoate feeling of just <em>being alive<\/em> \u2014 more breath than thought and more meat than machine. The sociologist <a href=\"https:\/\/sts-program.mit.edu\/book\/the-empathy-diaries\/\">Sherry Turkle<\/a> once said that technology can make us forget what we know about life. It\u2019s about time we started to remember.<\/p>\n          <div class=\"eos-subscribe-push\">\n          \n            <a target=\"https:\/\/shop.noemamag.com\/?utm_source=BottomCTA&utm_medium=website\" href=\"https:\/\/shop.noemamag.com\/?utm_source=BottomCTA&utm_medium=website\" data-wpel-link=\"internal\">Enjoy the read? 
Subscribe to get the best of Noema.<\/a>\n            \n          <\/div>\n        ","protected":false},"excerpt":{"rendered":"","protected":false},"author":7186,"featured_media":86696,"template":"","wpm-article-type":[3],"wpm-article-topic":[23,20],"wpm-article-tag":[],"class_list":["post-86693","wpm-article","type-wpm-article","status-publish","has-post-thumbnail","hentry","wpm-article-type-essay","wpm-article-topic-philosophy-culture","wpm-article-topic-technology-and-the-human"],"acf":[],"apple_news_notices":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.0 (Yoast SEO v25.0) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The Mythology Of Conscious AI<\/title>\n<meta name=\"description\" content=\"Why consciousness is more likely a property of life than of computation and why creating conscious, or even conscious-seeming AI, is a bad idea.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Mythology Of Conscious AI\" \/>\n<meta property=\"og:description\" content=\"Why consciousness is more likely a property of life than of computation and why creating conscious, or even conscious-seeming AI, is a bad idea.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"NOEMA\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/NoemaMag\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-14T17:46:28+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/noemamag.imgix.net\/2026\/01\/The-Mythology-of-Conscious-AI-scaled.jpg?fm=pjpg&ixlib=php-3.3.1&s=ee044bcf7693b7d62fd9c25ca713804c\" \/>\n\t<meta property=\"og:image:width\" content=\"2044\" \/>\n\t<meta property=\"og:image:height\" content=\"2560\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/noemamag.imgix.net\/2026\/01\/Noema-Twitter-Card-Vertical-Template-2026-01-13T164006.480.png?fm=png&ixlib=php-3.3.1&s=91a8f3d2a3d4aae20efdaa3572eacaaa\" \/>\n<meta name=\"twitter:site\" content=\"@NoemaMag\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"36 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/\",\"url\":\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/\",\"name\":\"The Mythology Of Conscious AI\",\"isPartOf\":{\"@id\":\"https:\/\/www.noemamag.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/noemamag.imgix.net\/2026\/01\/The-Mythology-of-Conscious-AI-scaled.jpg?fm=pjpg&ixlib=php-3.3.1&s=ee044bcf7693b7d62fd9c25ca713804c\",\"datePublished\":\"2026-01-14T17:23:54+00:00\",\"dateModified\":\"2026-01-14T17:46:28+00:00\",\"description\":\"Why consciousness is more likely a property of life than of computation and why creating conscious, or even conscious-seeming AI, is a bad 
idea.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/#primaryimage\",\"url\":\"https:\/\/noemamag.imgix.net\/2026\/01\/The-Mythology-of-Conscious-AI-scaled.jpg?fm=pjpg&ixlib=php-3.3.1&s=ee044bcf7693b7d62fd9c25ca713804c\",\"contentUrl\":\"https:\/\/noemamag.imgix.net\/2026\/01\/The-Mythology-of-Conscious-AI-scaled.jpg?fm=pjpg&ixlib=php-3.3.1&s=ee044bcf7693b7d62fd9c25ca713804c\",\"width\":2044,\"height\":2560,\"caption\":\"Rokas Aleliunas for Noema Magazine\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.noemamag.com\/the-mythology-of-conscious-ai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.noemamag.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"The Mythology Of Conscious AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.noemamag.com\/#website\",\"url\":\"https:\/\/www.noemamag.com\/\",\"name\":\"NOEMA\",\"description\":\"Noema 
Magazine\",\"publisher\":{\"@id\":\"https:\/\/www.noemamag.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.noemamag.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.noemamag.com\/#organization\",\"name\":\"NOEMA\",\"url\":\"https:\/\/www.noemamag.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539\",\"contentUrl\":\"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539\",\"width\":305,\"height\":69,\"caption\":\"NOEMA\"},\"image\":{\"@id\":\"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/NoemaMag\",\"https:\/\/x.com\/NoemaMag\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
[Article metadata: "The Mythology Of Conscious AI," by Anil Seth, Noema Magazine, published Jan. 14, 2026. Estimated reading time: 36 minutes. Summary: Why consciousness is more likely a property of life than of computation, and why creating conscious, or even conscious-seeming, AI is a bad idea.]