{"id":49665,"date":"2023-10-10T13:41:56","date_gmt":"2023-10-10T13:41:56","guid":{"rendered":"https:\/\/www.noemamag.com"},"modified":"2024-11-14T19:09:01","modified_gmt":"2024-11-14T19:09:01","slug":"artificial-general-intelligence-is-already-here","status":"publish","type":"wpm-article","link":"https:\/\/www.noemamag.com\/artificial-general-intelligence-is-already-here","title":{"rendered":"Artificial General Intelligence Is Already Here"},"content":{"rendered":"<p>Artificial General Intelligence (AGI) means many different things to different people, but the most important parts of it have already been achieved by the <a href=\"https:\/\/arxiv.org\/abs\/2303.12712\">current generation<\/a> of advanced AI large language models such as ChatGPT, Bard, LLaMA and Claude. These \u201cfrontier models\u201d have many flaws: They hallucinate scholarly citations and court cases, perpetuate biases from their training data and make simple arithmetic mistakes. Fixing every flaw (including those often exhibited by humans) would involve building an artificial superintelligence, which is a whole other project.<\/p><p>Nevertheless, today\u2019s frontier models perform competently even on novel tasks they were not trained for, crossing a threshold that previous generations of AI and supervised deep learning systems never managed. 
Decades from now, they will be recognized as the first true examples of AGI, just as the 1945 <a href=\"https:\/\/www.britannica.com\/technology\/ENIAC\">ENIAC<\/a> is now recognized as the first true general-purpose electronic computer.<\/p><p>The ENIAC could be programmed with sequential, looping and conditional instructions, giving it a general-purpose applicability that its predecessors, such as the <a href=\"https:\/\/www.mit.edu\/~klund\/analyzer\/\">Differential Analyzer<\/a>, lacked. Today\u2019s computers far exceed ENIAC&#8217;s speed, memory, reliability and ease of use, and in the same way, tomorrow\u2019s frontier AI will improve on today\u2019s. <\/p><p>But the key property of generality? It has already been achieved.<\/p><h2 class=\"wp-block-heading\" id=\"h-what-is-general-intelligence\">What Is General Intelligence?<\/h2><p>Early AI systems exhibited artificial narrow intelligence, concentrating on a single task and sometimes performing it at near or above human level. <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/0010480975900099\">MYCIN<\/a>, a program developed by Ted Shortliffe at Stanford in the 1970s, only diagnosed and recommended treatment for bacterial infections. <a href=\"https:\/\/www.mt-archive.net\/70\/CEC-1977-Toma.pdf\">SYSTRAN<\/a> only did machine translation. IBM\u2019s Deep Blue only played chess.<\/p><p>Later deep neural network models trained with supervised learning such as <a href=\"https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2012\/file\/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf\">AlexNet<\/a> and <a href=\"https:\/\/www.deepmind.com\/research\/highlighted-research\/alphago\">AlphaGo<\/a> successfully took on a number of tasks in machine perception and judgment that had long eluded earlier heuristic, rule-based or knowledge-based systems.<\/p><p>Most recently, we have seen frontier models that can perform a wide variety of tasks without being explicitly trained on each one. 
These models have achieved artificial general intelligence in five important ways:<\/p><ol class=\"wp-block-list\"><li><strong>Topics<\/strong>: Frontier models are trained on hundreds of gigabytes of text from a wide variety of internet sources, covering any topic that has been written about online. Some are also trained on large and varied collections of audio, video and other media.<\/li>\n\n<li><strong>Tasks<\/strong>: These models can perform a variety of tasks, including answering questions, generating stories, summarizing, transcribing speech, translating language, explaining, making decisions, doing customer support, calling out to other services to take actions, and combining words and images.<\/li>\n\n<li><strong>Modalities<\/strong>: The most popular models operate on images and text, but some systems also process audio and video, and some are connected to robotic sensors and actuators. By <a href=\"https:\/\/ieeexplore.ieee.org\/document\/10158503\">using<\/a> modality-specific tokenizers or <a href=\"https:\/\/arxiv.org\/abs\/2305.07185\">processing<\/a> raw data streams, frontier models can, in principle, handle any known sensory or motor modality.<\/li>\n\n<li><strong>Languages<\/strong>: English is over-represented in the training data of most systems, but large models can converse in dozens of languages and translate between them, even for language pairs that have no example translations in the training data. If code is included in the training data, these models even support increasingly effective \u201ctranslation\u201d between natural languages and computer languages (i.e., general programming and reverse engineering).<\/li>\n\n<li><strong>Instructability<\/strong>: These models are capable of \u201c<a href=\"https:\/\/arxiv.org\/abs\/2212.07677\">in-context learning<\/a>,\u201d where they learn from a prompt rather than from the training data. 
In \u201c<a href=\"https:\/\/blog.paperspace.com\/few-shot-learning\/\">few-shot learning<\/a>,\u201d a new task is demonstrated with several example input\/output pairs, and the system then gives outputs for novel inputs. In \u201czero-shot learning,\u201d a novel task is described but <em>no<\/em> examples are given (for instance, \u201cWrite a poem about cats in the style of Hemingway\u201d or \u201c&#8216;Equiantonyms&#8217; are pairs of words that are opposite of each other and have the same number of letters. What are some &#8216;equiantonyms&#8217;?\u201d).<\/li><\/ol><p>\u201cGeneral intelligence\u201d must be thought of in terms of a multidimensional scorecard, not a single yes\/no proposition. Nonetheless, there is a meaningful discontinuity between narrow and general intelligence: Narrowly intelligent systems typically perform a single or predetermined set of tasks, for which they are explicitly trained. Even multitask learning yields only narrow intelligence because the models still operate within the confines of tasks envisioned by the engineers. 
Indeed, much of the hard engineering work involved in developing narrow AI amounts to curating and labeling task-specific datasets.<\/p><p>By contrast, frontier language models can perform competently at pretty much any information task that can be done by humans, can be posed and answered using natural language, and has quantifiable performance. <\/p><p>The ability to do in-context learning is an especially meaningful meta-task for general AI. In-context learning extends the range of tasks from anything observed in the training corpus to anything that can be described, which is a big upgrade. A general AI model can perform tasks the designers never <a href=\"https:\/\/www.quantamagazine.org\/the-unpredictable-abilities-emerging-from-large-ai-models-20230316\/\">envisioned<\/a>.<\/p><p>So: Why the reluctance to acknowledge AGI?<\/p><p>Frontier models have achieved a significant level of general intelligence, according to the everyday meanings of those two words. And yet most commenters have been reluctant to say so for, it seems to us, four main reasons:<\/p><ol class=\"wp-block-list\"><li>A healthy skepticism about metrics for AGI<\/li>\n\n<li>An ideological commitment to alternative AI theories or techniques<\/li>\n\n<li>A devotion to human (or biological) exceptionalism<\/li>\n\n<li>A concern about the economic implications of AGI<\/li><\/ol><h2 class=\"wp-block-heading\" id=\"h-metrics\"><strong>Metrics<\/strong><\/h2><p>There is a great deal of disagreement on where the threshold to AGI lies. Some people try to avoid the term altogether; <a href=\"https:\/\/www.penguinrandomhouse.com\/books\/722674\/the-coming-wave-by-mustafa-suleyman-with-michael-bhaskar\/\">Mustafa Suleyman<\/a> has suggested a switch to \u201cArtificial Capable Intelligence,\u201d which he proposes be measured by a \u201cmodern Turing Test\u201d: the ability to quickly make a million dollars online (from an initial $100,000 investment). 
AI systems able to directly generate wealth will certainly have an effect on the world, though equating \u201ccapable\u201d with \u201ccapitalist\u201d seems dubious.<\/p><p>There is good reason to be skeptical of some of the metrics. When a human passes a well-constructed law, business or medical exam, we assume the human is not only competent at the specific questions on the exam, but also at a range of related questions and tasks \u2014 not to mention the broad competencies that humans possess in general. But when a frontier model is trained to <a href=\"https:\/\/cloud.google.com\/blog\/topics\/healthcare-life-sciences\/sharing-google-med-palm-2-medical-large-language-model\">pass<\/a> such an exam, the training is often narrowly tuned to the exact types of questions on the test. Today\u2019s frontier models are of course not fully qualified to be lawyers or doctors, even though they can pass those qualifying exams. As Goodhart\u2019s law states: \u201cWhen a measure becomes a target, it ceases to be a good measure.\u201d Better tests are needed, and there is much ongoing work, such as Stanford\u2019s test suite HELM (Holistic Evaluation of Language Models).<\/p><p>It is also important not to confuse linguistic fluency with intelligence. Previous generations of chatbots such as Mitsuku (now known as <a href=\"https:\/\/chat.kuki.ai\/\">Kuki<\/a>) could occasionally fool human judges by abruptly changing the subject and echoing a coherent passage of text. Current frontier models generate responses on the fly rather than relying on canned text, and they are better at sticking to the subject. But they still benefit from a human\u2019s natural assumption that a fluent, grammatical response most likely comes from an intelligent entity. 
We call this the \u201cChauncey Gardiner effect,\u201d after the hero in \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/Being_There_(novel)\">Being There<\/a>\u201d \u2014 Chauncey is taken very seriously solely because he <em>looks<\/em> like someone who should be taken seriously.<\/p><p>The researchers Rylan Schaeffer, Brando Miranda and Sanmi Koyejo have <a href=\"https:\/\/arxiv.org\/abs\/2304.15004\">pointed out<\/a> another issue with common AI performance metrics: They are nonlinear. Consider a test consisting of a series of arithmetic problems with five-digit numbers. Small models will answer all these problems wrong, but as the size of the model is scaled up, there will be a critical threshold after which the model will get most of the problems right. This has led commenters to say that arithmetic skill is an emergent property in frontier models of sufficient size. But if instead the test included arithmetic problems with one- to four-digit numbers as well, and if partial credit were given for getting some of the digits correct, then we would see that performance increases gradually as the model size increases; there is no sharp threshold.<\/p><p>This finding casts <a href=\"https:\/\/hai.stanford.edu\/news\/ais-ostensible-emergent-abilities-are-mirage\">doubt<\/a> on the idea that super-intelligent abilities and properties, possibly including consciousness, could suddenly and mysteriously \u201cemerge,\u201d a fear among some citizens and policymakers. (Sometimes, the same narrative is used to \u201cexplain\u201d why humans are intelligent while the other great apes are supposedly not; in reality, this discontinuity may be equally illusory.) 
Better metrics reveal that general intelligence is continuous: \u201cMore is more,\u201d as opposed to \u201c<a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.177.4047.393\">more is different<\/a>.\u201d<\/p><h2 class=\"wp-block-heading\" id=\"h-alternative-theories\"><strong>Alternative Theories<\/strong><\/h2><p>The prehistory of AGI includes many competing theories of intelligence, some of which succeeded in narrower domains. Computer science itself, which is based on programming languages with precisely defined formal grammars, was in the beginning closely allied with \u201cGood Old-Fashioned AI\u201d (GOFAI). 
The GOFAI credo, drawing from a line going back at least to Gottfried Wilhelm Leibniz, the 17th-century German mathematician, is exemplified by Allen Newell and Herbert Simon\u2019s \u201c<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/360018.360022\">physical symbol system hypothesis<\/a>,\u201d which holds that intelligence can be expressed in terms of a calculus wherein symbols represent ideas and thinking consists of symbol manipulation according to the rules of logic. <\/p><p>At first, natural languages like English appear to be such systems, with symbols like the words \u201cchair\u201d and \u201cred\u201d representing ideas like \u201cchair-ness\u201d and \u201cred-ness.\u201d Symbolic systems allow statements to be made \u2014 \u201cThe chair is red\u201d \u2014 and logical inferences to follow: \u201cIf the chair is red then the chair is not blue.\u201d<\/p><p>While this seems reasonable, systems built with this approach were always brittle and limited in the capabilities and generality they could achieve. There are two main problems: First, terms like \u201cblue,\u201d \u201cred\u201d and \u201cchair\u201d are only approximately defined, and the implications of these ambiguities become more serious as the complexity of the tasks being performed with them grows.<\/p><p>Second, there are very few logical inferences that are universally valid; a chair may be blue <em>and<\/em> red. More fundamentally, a great deal of thinking is not reducible to the manipulation of logical propositions. 
That\u2019s why, for decades, concerted efforts to bring together computer programming and linguistics failed to produce anything resembling AGI.<\/p><p>However, some researchers with ideological commitments to symbolic systems or linguistics have continued to insist that their particular theory is a requirement for general intelligence, and that neural nets or, more broadly, machine learning, are theoretically incapable of general intelligence \u2014 especially if they are trained <a href=\"https:\/\/aclanthology.org\/2020.acl-main.463\/\">purely on language<\/a>. These critics have been increasingly vocal in the wake of ChatGPT.<\/p><p>For example, <a href=\"https:\/\/www.nytimes.com\/2023\/03\/08\/opinion\/noam-chomsky-chatgpt-ai.html\">Noam Chomsky<\/a>, widely regarded as the father of modern linguistics, wrote of large language models: \u201cWe know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. 
These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.\u201d<\/p><p>Gary Marcus, a cognitive scientist and critic of contemporary AI, <a href=\"https:\/\/www.nytimes.com\/2023\/01\/06\/podcasts\/transcript-ezra-klein-interviews-gary-marcus.html\">says<\/a> that frontier models \u201care learning how to sound and seem human. But they have no actual idea what they are saying or doing.\u201d Marcus allows that neural networks may be <em>part<\/em> of a solution to AGI, but <a href=\"https:\/\/arxiv.org\/abs\/2002.06177\">believes<\/a> that \u201cto build a robust, knowledge-driven approach to AI, we must have the machinery of symbol manipulation in our toolkit.\u201d Marcus and many others have focused on finding gaps in the capabilities of frontier models, especially large language models, and often claim that these gaps reflect fundamental flaws in the approach.<\/p><p>Without explicit symbols, according to these critics, a merely learned, \u201cstatistical\u201d approach cannot produce true understanding. Relatedly, they claim that without symbolic concepts, no logical reasoning can occur, and that \u201creal\u201d intelligence requires such reasoning.<\/p><p>Setting aside the question of whether intelligence is always reliant on symbols and logic, there are reasons to question this claim about the inadequacy of neural nets and machine learning, because neural nets are so powerful at doing anything a computer can do. 
For example:<\/p><ul class=\"wp-block-list\"><li>Discrete or symbolic representations can readily <a href=\"https:\/\/royalsocietypublishing.org\/doi\/10.1098\/rsta.2022.0041\">be learned<\/a> by neural networks and emerge naturally during training.<\/li>\n\n<li>Advanced neural net models can <a href=\"https:\/\/arxiv.org\/abs\/2306.04637\">apply<\/a> sophisticated statistical techniques to data, allowing them to make near-optimal predictions from the given data. The models learn how to apply these techniques and to choose the best technique for a given problem, without being explicitly told.<\/li>\n\n<li>Stacking several neural nets together in the right way yields a model that can <a href=\"https:\/\/proceedings.mlr.press\/v202\/giannou23a.html\">perform<\/a> the same calculations as any given computer program.<\/li>\n\n<li>Given example inputs and outputs of any function that can be computed by any computer, a neural net can <a href=\"https:\/\/arxiv.org\/abs\/2309.06979\">learn<\/a> to approximate that function. (Here \u201capproximate\u201d means that, in theory, the neural net can exceed any level of accuracy \u2014 99.9% correct for example \u2014 that you care to state.)<\/li><\/ul><p>For each criticism, we should ask whether it is prescriptive or empirical. A prescriptive criticism would argue: \u201cIn order to be considered as AGI, a system not only has to pass this test, it also has to be constructed in this way.\u201d We would push back against prescriptive criticisms on the grounds that the test itself should be sufficient \u2014 and if it is not, the test should be amended.<\/p><p>An empirical criticism, on the other hand, would argue: \u201cI don\u2019t think you can make AI work that way \u2014 I think it would be better to do it another way.\u201d Such criticism can help set research directions, but the proof is in the pudding. 
If a system can pass a well-constructed test, it automatically defeats the criticism.<\/p><p>In recent years, a great many tests have been devised for cognitive tasks associated with \u201cintelligence,\u201d \u201cknowledge,\u201d \u201ccommon sense\u201d and \u201creasoning.\u201d These include novel questions that can\u2019t be answered through memorization of training data but require generalization \u2014 the same proof of understanding we require of students when we test their understanding or reasoning using questions they haven\u2019t encountered during study. Sophisticated tests can introduce novel concepts or tasks, probing a test-taker\u2019s cognitive flexibility: the ability to learn and apply new ideas on the fly. (This is the essence of in-context learning.)<\/p><p>As AI critics work to devise new tests on which current models still perform poorly, they are doing useful work \u2014 although given the increasing speed with which newer, larger models are surmounting these hurdles, it might be wise to hold off for a few weeks before (once again) rushing to claim that AI is \u201chype.\u201d<\/p><h2 class=\"wp-block-heading\" id=\"h-human-or-biological-exceptionalism\"><strong>Human (Or Biological) Exceptionalism<\/strong><\/h2><p>Insofar as skeptics remain unmoved by metrics, they may be unwilling to accept <em>any<\/em> empirical evidence of AGI. Such reluctance can be driven by a desire to maintain something special about the human spirit, just as humanity has been reluctant to accept that the Earth is not the center of the universe and that Homo sapiens are not the pinnacle of a \u201cgreat chain of being.\u201d It\u2019s true that there is something special about humanity, and we should celebrate that, but we should not conflate it with general intelligence.<\/p><p>It is sometimes argued that anything that could count as an AGI must be conscious, have agency, experience subjective perceptions or feel feelings. 
One line of reasoning goes like this: A simple tool, such as a screwdriver, clearly has a purpose (to drive screws), but it cannot be said to have agency of its own; rather, any agency clearly belongs to either the toolmaker or tool user. The screwdriver itself is \u201cjust a tool.\u201d The same reasoning applies to an AI system trained to perform a specific task, such as optical character recognition or speech synthesis.<\/p><p>A system with artificial general intelligence, though, is harder to classify as a mere tool. The skills of a frontier model exceed those imagined by its programmers or users. Furthermore, since LLMs can be prompted to perform arbitrary tasks using language, can generate new prompts with language and indeed can prompt themselves (\u201c<a href=\"https:\/\/arxiv.org\/abs\/2201.11903\">chain of thought prompting<\/a>\u201d), the issue of whether and when a frontier model has \u201cagency\u201d requires more careful consideration.<\/p><p>Consider the many actions Suleyman\u2019s \u201c<a href=\"https:\/\/www.the-coming-wave.com\/\">artificial capable intelligence<\/a>\u201d might carry out in order to make a million dollars online:<\/p><div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\"><p>It might research the web to look at what\u2019s trending, finding what\u2019s hot and what\u2019s not on Amazon Marketplace; generate a range of images and blueprints of possible products; send them to a drop-ship manufacturer it found on Alibaba; email back and forth to refine the requirements and agree on the contract; design a seller\u2019s listing; and continually update marketing materials and product designs based on buyer feedback.<\/p><\/div><\/div><p>As Suleyman notes, frontier models are already capable of doing all of these things in principle, and models that can reliably plan and carry out the whole operation are likely imminent. 
Such an AI no longer seems much like a screwdriver.<\/p><p>Now that there are systems that can perform arbitrary general intelligence tasks, the claim that exhibiting agency amounts to being conscious seems problematic \u2014 it would mean that either frontier models <em>are<\/em> conscious or that agency doesn\u2019t necessarily entail consciousness after all.<\/p><p>We have no idea how to measure, verify or falsify the presence of consciousness in an intelligent system. We could just ask it, but we may or may not believe its response. In fact, \u201cjust asking\u201d appears to be something of a Rorschach test: Believers in AI sentience will accept a positive response, while nonbelievers will claim that any affirmative response is either mere \u201cparroting\u201d or that current AI systems are \u201cphilosophical zombies,\u201d capable of behaving like us but lacking any phenomenal consciousness or experience \u201con the inside.\u201d Worse, the Rorschach test applies to LLMs themselves: They may answer either way depending on how they are tuned or prompted. 
(ChatGPT and Bard are both trained to respond that they are not conscious.)<\/p><p>Hinging as it does on unverifiable beliefs (both human and AI), the consciousness or sentience debate isn\u2019t currently resolvable. Some researchers have proposed measures of consciousness, but these are either based on unfalsifiable theories or rely on correlates specific to our own brains, and are thus either prescriptive or can\u2019t assess consciousness in a system that doesn\u2019t share our biological inheritance.<\/p><p>To claim a priori that nonbiological systems simply <em>can\u2019t<\/em> be intelligent or conscious (because they are \u201cjust algorithms,\u201d for example) seems arbitrary, rooted in untestable spiritual beliefs. Similarly, the idea that feeling pain (for example) requires nociceptors may allow us to hazard informed guesses about the experience of pain among our close biological relatives, but it\u2019s not clear how such an idea could be applied to other neural architectures or kinds of intelligence.<\/p><p>\u201cWhat is it like to be a bat?\u201d Thomas Nagel famously wondered in 1974. We don\u2019t know, and don\u2019t know if we <em>could<\/em> know, what being a bat is like \u2014 or what being an AI is like. But we do have a growing wealth of tests assessing many dimensions of intelligence.<\/p><p>While the quest to seek more general and rigorous characterizations of consciousness or sentience may be worthwhile, no such characterization would alter measured competence at any task. 
It isn\u2019t clear, then, how such concerns could meaningfully figure into a definition of AGI.<\/p><p>It would be wiser to separate \u201cintelligence\u201d from \u201cconsciousness\u201d and \u201csentience.\u201d<\/p><h2 class=\"wp-block-heading\" id=\"h-economic-implications\"><strong>Economic Implications<\/strong><\/h2><p>Arguments about intelligence and agency readily shade into questions about rights, status, power and class relations \u2014 in short, political economy. Since the Industrial Revolution, tasks deemed \u201crote\u201d or \u201crepetitive\u201d have often been performed by low-paid workers, while programming \u2014 in the beginning considered \u201cwomen\u2019s work\u201d \u2014 rose in intellectual and financial status only when it became male-dominated in the 1970s. Yet ironically, while playing chess and solving problems in integral calculus turn out to be easy even for GOFAI, manual labor remains a major challenge even for today\u2019s most sophisticated AIs.<\/p><p>What would the public reaction have been had AGI somehow been achieved \u201con schedule,\u201d when a group of researchers convened at Dartmouth over the summer of 1956 to figure out \u201chow to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves\u201d? At the time, most Americans were optimistic about technological progress. The \u201cGreat Compression\u201d was underway, an era in which the economic gains achieved by rapidly advancing technology were redistributed broadly (albeit certainly not equitably, especially with regard to race and gender). Despite the looming threat of the Cold War, for the majority of people, the future looked brighter than the past.<\/p><p>Today, that redistributive pump has been thrown into reverse: The poor are getting poorer and the rich are getting richer (especially in the Global North). 
When AI is characterized as \u201c<a href=\"https:\/\/www.jstor.org\/stable\/j.ctv1ghv45t\">neither artificial nor intelligent<\/a>,\u201d but merely a repackaging of human intelligence, it is hard not to read this critique through the lens of economic threat and insecurity.<\/p><p>In conflating debates about what AGI <em>should<\/em> be with what it <em>is<\/em>, we violate David Hume\u2019s injunction to do our best to separate \u201cis\u201d from \u201cought\u201d questions. This is unfortunate, as the much-needed \u201cought\u201d debates are best carried out honestly. <\/p><p>AGI promises to generate great value in the years ahead, yet it also poses significant risks. The natural questions we should be asking in 2023 include: \u201cWho benefits?\u201d \u201cWho is harmed?\u201d \u201cHow can we maximize benefits and minimize harms?\u201d and \u201cHow can we do this fairly and equitably?\u201d These are pressing questions that should be discussed directly instead of denying the reality of AGI.<\/p>\n        ","protected":false},"excerpt":{"rendered":"","protected":false},"author":3610,"featured_media":49666,"template":"","wpm-article-type":[3],"wpm-article-topic":[20],"wpm-article-tag":[],"class_list":["post-49665","wpm-article","type-wpm-article","status-publish","has-post-thumbnail","hentry","wpm-article-type-essay","wpm-article-topic-technology-and-the-human"],"acf":[],"apple_news_notices":[],"parsely":{"version":"1.1.0","canonical_url":"https:\/\/noemamag.com\/artificial-general-intelligence-is-already-here","smart_links":{"inbound":0,"outbound":0},"traffic_boost_suggestions_count":0,"meta":{"@context":"https:\/\/schema.org","@type":"NewsArticle","headline":"Artificial General Intelligence Is Already 
Here","url":"http:\/\/www.noemamag.com\/artificial-general-intelligence-is-already-here","mainEntityOfPage":{"@type":"WebPage","@id":"http:\/\/www.noemamag.com\/artificial-general-intelligence-is-already-here"},"thumbnailUrl":"https:\/\/noemamag.imgix.net\/2023\/10\/Noema_Card-display-2000x1000-0-00-04-04.jpg?fit=crop&fm=pjpg&h=150&ixlib=php-3.3.1&w=150&wpsize=thumbnail&s=c0755dd4f7d690112c744173f86e5f46","image":{"@type":"ImageObject","url":"https:\/\/noemamag.imgix.net\/2023\/10\/Noema_Card-display-2000x1000-0-00-04-04.jpg?fm=pjpg&ixlib=php-3.3.1&s=afc743f02923a9cb64e721edb26d4d8e"},"articleSection":"Uncategorized","author":[{"@type":"Person","name":"Blaise Ag\u00fcera y Arcas"}],"creator":["Blaise Ag\u00fcera y Arcas"],"publisher":{"@type":"Organization","name":"NOEMA","logo":"https:\/\/www.noemamag.com\/wp-content\/uploads\/2020\/06\/cropped-ms-icon-310x310-1.png"},"keywords":[],"dateCreated":"2023-10-10T13:41:56Z","datePublished":"2023-10-10T13:41:56Z","dateModified":"2024-11-14T19:09:01Z"},"tracker_url":"https:\/\/cdn.parsely.com\/keys\/noemamag.com\/p.js"},"_links":{"self":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/49665","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article"}],"about":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/types\/wpm-article"}],"author":[{"embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/users\/3610"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/media\/49666"}],"wp:attachment":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/media?parent=49665"}],"wp:term":[{"taxonomy":"wpm-article-type","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-type?post=49665"},{"taxonomy":"wpm-article-topic","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-topic?post=49665"},{"taxonomy":"wpm-article-tag","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-tag?post=49665"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}