Self-conscious A.I.?

Above: (Self)-Portrait of the artist seated before an easel, Christoffel van der Laemen, 17th century.  [Copyright information at https://commons.wikimedia.org/wiki/File:Christoffel_van_der_Laemen_-_Portrait_of_the_artist_seated_before_an_easel.jpg]

 

There are many things that A.I. cannot do and cannot be for the simple reason that it is a purely material entity and does not require us to postulate immaterial properties in order to explain what it is and does.  There are features of minds, however — or at least of some minds — that do imply immateriality, such as “qualia,” unified fields of consciousness, and intentionality.  These can be found even in animals (unless we think of them as Descartes did), but not in machines.1  If even nonhuman animals are more than machines, then this should be said all the more of rational animals capable of universal thoughts.  The argument for the immateriality of universal thought has been made often, but there is more to be said.  In this contribution, I would like to suggest that universality comes in degrees and that on the most universal level a new feature emerges that likewise is incapable of being instantiated in purely material entities.

The most universal thought we can think is that of “being”; it is so universal that literally nothing falls outside of it, for whatever does not fall under this concept is nothing.  Or, as the philosopher W.V.O. Quine put it, the proper answer to the question “what is it that is?” is simply “everything.”2  Now this thought of “everything” has a peculiar property: it includes itself.  For this particular thought is itself something, and in thinking “everything,” the thought also thinks itself.

It is of such a reflexive self-inclusion or self-reference that we want to ask: can a machine, a purely physical entity, have such a thought (if it can have thoughts at all)?  In other words: can a material entity contain itself?  In order to imagine such a case we would have to imagine something like putting a briefcase into itself. This would seem to be impossible.  Yet it is a completely normal thing for us to contain ourselves in the simple thought “I,” which is a thought that contains itself in so far as the thought is part of who I am – just as the thought of “everything” contains itself.  In the thought “I” we each refer to ourselves; we think ourselves, i.e., we contain ourselves in thought.3
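
By way of a toy illustration (a sketch of our own, not anything in the literature): a programming language can appear to let a data structure “contain itself,” but on inspection the containment is only a stored reference, a physical address pointing back at the same object.

    # A Python list can appear to "contain itself":
    box = []
    box.append(box)
    print(box[0] is box)  # True: the element is the very same list object

    # Yet nothing material has been placed inside itself.  The list merely
    # stores a reference (a memory address) to one and the same object; the
    # "self-containment" is a pointer that we interpret as self-reference,
    # not a whole physically contained in itself.

The briefcase, in other words, is never actually inside the briefcase; the machine’s “self-containment” exists in our interpretation, not in the hardware.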

No purely material entity can do this.  It is certainly possible for a printer to flash a message like “printer out of ink.”  In this way it appears to refer to itself (we can even make the message say “I am out of ink”).  But that is because we have programmed a material process by which low ink levels generate a message indicating that state of affairs.  Nothing in this scenario implies that our printer has self-consciousness (and nobody would ever suggest that).  Whatever self-referentiality the message has, it exists only in our minds.
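
A minimal sketch of such a programmed process (the threshold and names are invented for illustration) makes plain how little the first-person wording amounts to:

    INK_THRESHOLD = 0.05  # assumed cutoff, purely illustrative

    def ink_message(ink_level):
        # A numeric comparison triggers a canned string; the "I" in the
        # message is just a character, carrying no awareness of a self.
        if ink_level < INK_THRESHOLD:
            return "I am out of ink"
        return None

The message refers to the printer only because we wrote it to do so; the comparison itself refers to nothing.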

Other machines may instantiate processes that loop back onto themselves or operate recursively.  Even a simple cybernetic system like a thermostat does that.  This, too, is not the same as self-consciousness or self-referentiality.  It is the same mechanism that reacts and relates to different stimuli in the environment at different times, but not to itself.  It manipulates its own procedures in so far as its outputs generate future inputs that affect the same mechanism in a feedback loop; but it is the same mechanism at a different time, and the mechanism does not relate to itself in this process.  In other cases, a mechanism may apply the same operation recursively to the products of previous operations, but never to itself.
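
The feedback loop is easy to exhibit in code (a toy model with invented values): one fixed rule is applied to a stream of new inputs, and the loop closes through the environment, never through the rule itself.

    setpoint = 20.0  # the setting the device cannot revise on its own

    def heater_should_run(temperature):
        # The one fixed rule: compare the current reading to the setpoint.
        return temperature < setpoint

    temperature = 18.0
    for _ in range(10):
        heating = heater_should_run(temperature)
        # The output changes the room, and the room supplies the next input:
        temperature += 0.5 if heating else -0.3

At every pass the mechanism confronts a new temperature, never itself; the rule and the setpoint are used, not revisited.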

Being aware of oneself in the relevant sense means to relate to oneself here and now.  We can certainly also recall our own states from yesterday, but that is a case of memory rather than self-consciousness.  Even memory, however, presupposes self-consciousness, for it is self-consciousness that allows us, here and now, to identify those past states as my memories.  Likewise spatially: self-consciousness cannot be produced by looking into a mirror, but is presupposed in mirror recognition.4

To relate reflectively to myself here and now implies that I take a step away from myself, that I objectify myself in order to look at myself.  Yet at the same time I must recognize myself as the subject in the object.  In other words, I split myself into a subject and an object (a knower and a known), yet I must recognize a more primordial identity underlying both sides.  Since the philosopher Fichte, much thought has been given to this underlying dialectic of identity and difference.5  It is associated with puzzles such as the Liar Paradox or Russell’s Paradox.6

What concerns us here is that the distance implied in this split into subject and object is not a distance in space and time; in other words, it is an immaterial distance.  A distance that exists now is by definition not a temporal distance; and a distance that exists here is by definition not a spatial distance.  It is certainly conceivable that one part of a machine manipulates another part of the same machine; that happens all the time.  But a machine cannot manipulate all of its states at once, just as it cannot contain all of its own parts in itself.  It always needs to use one part of itself to manipulate the other parts, and this part cannot manipulate itself.  Relating one material part to all the other parts can be imagined as folding a piece of paper onto itself: one half then touches the other half.  But for one part to touch all of the other parts at once, we would have to fold the paper infinitely many times, down to a geometrical point in which all parts coincide.  Yet a geometrical point is without extension, and that means: immaterial.

A human mind, on the other hand, can reflect on itself as a whole, and not only in part – which, as Thomas Aquinas observes, is why our minds cannot be material.7  And whatever else is contained in the mind, the mind contains it together with itself as a whole, not part matched to part by quantitative commensuration, as bodies contain things.8  Or, as Therese Cory explains it: “Each part of a body can turn back upon another part (as when I touch my head), but not upon itself, since matter is extended and has parts outside of parts.  Only an indivisible and incorporeal being can be made wholly present to itself since it has no parts that get in the way of each other.  What is immaterial can be placed in contact, so to speak, with the whole of itself.”9

None of this implies that self-consciousness gives us exhaustive knowledge of ourselves.  It merely means that I can refer to all that I am, whatever that is.  Similarly, the thought of “everything” refers unproblematically to everything without implying that I know everything like an omniscient God.  And this leads us to a further feature of such thoughts and their reflexivity: they are simple.  The whole contains its parts in a simple way – which means that I can be acquainted with myself without knowing every aspect of myself.  Even patients with Alzheimer’s disease or dementia never lose this simple sense of self; and if we were ever to lose it, nobody could restore it to us by presenting to us our (spatial or temporal, or even mental) parts.  For this simple sense of identity is precisely what makes these parts to be parts of the whole of our self-consciousness.  And with this feature we arrive at a further reason why this sense of self cannot be something material.  For nothing material can be simple: everything material is extended and therefore by definition not simple.  That is why Leibniz thought that the simplest parts of the universe must be minds (his “monads”).  Whatever we make of Leibniz’s much larger claims, this much is true of myself: the referent of the first-person singular pronoun, “I,” is simple and therefore immaterial.  Therefore, I cannot be a computer.

Put in a different way: the reason why computers cannot be self-conscious is not that they are “not yet complex enough” and that making them more sophisticated would do the job.  The problem is rather that they are not simple enough.  And they will never be simple enough for the simple reason that they are material.

Nor could any good reason be given why making them more complex should suddenly give rise to something simple.  Simplicity can explain the unity of the parts, but putting parts together will not unify them.  As Aquinas points out, the parts are only in potentia with regard to the whole that they can form, and therefore their unity needs to be actualized by something that is already more unified – more one – than they are.10

Such unifying causes may themselves in turn need to be unified by something even simpler than they are.  But this sequence of unifiers needs a starting point; it cannot be an infinite regress.  And – in imitation of St. Thomas Aquinas – we may say that the ultimate unifier is what we all call God.  For he is the one who needs no unifier, because he is utterly simple by his nature (which is even one with his existence).11  God is the simplest being – and the simpler a thing is, the greater its power, Aquinas argues.12  That is why Richard Dawkins has it exactly backwards when he suggests that God must be the most complex being of all, so that he can make all the complexities in the world.13

God also, therefore, cannot be material or a computer.  But would anyone even suggest that?  Yes, indeed: there are fantasies of a computer “Singularity” that now drive whole educational institutions (such as Ray Kurzweil’s Singularity University).  Their proponents prepare for this “coming god” that will take over our lives on the model of the Jesuit Teilhard de Chardin’s “noosphere,” a technologically mediated eschaton.14  Or perhaps on the model of the Protestant “rapture,” as Jaron Lanier suggests.15  Some even argue that we should worship this entity (namely, to get it on our side).  That would literally be “worshipping the work of our hands” – and hence an idol in the Old Testament sense, even if a “monotheist” idol.  But much as one may call it a “singularity,” there is nothing singular about it: as a material thing, it contains nothing simple that could unify it as a mind is unified.

Nor do we need to fear that such a computer singularity would gain something that could make it self-conscious and then take over the world – as is sometimes imagined, perhaps with the internet developing a global self-consciousness.  For, if what we have said is true, this is not a possibility for anything purely material; nor can purely material things be the cause of such simplicity arising as an emergent property.

Not only is it sometimes suggested that the singularity would be self-conscious as a result; the very path that is supposed to lead to this result already presupposes reflexivity and self-consciousness.  For it is imagined that computers would start to upgrade themselves, thus outpacing even human inventiveness.16  This is not typically noticed, and hence it is worth pointing out: “upgrading oneself” is yet another form of reflexivity.  Consider our own case: much of our creativity is the result of our self-consciousness.  We can reflect on how we are, take that step back from ourselves or from what we have become, be dissatisfied, and change our minds.  We can upgrade our opinions, our view of ourselves, or our view of our products.  That is how we can improve.  But no computer has ever built a better computer than itself.  As machines, they cannot reflect on themselves as a whole, be dissatisfied with themselves, or re-envision what they are.17  In this they are no different from something as basic as a thermostat: a thermostat has a setting which it cannot change by itself.  Only we can do that.  It may nowadays have sophisticated ways of adjusting its settings based on other parameters, but all of that relies on yet another, more basic algorithm or source code.  Even in machine learning with deep neural networks, there is an ultimate algorithm by which the machine “learns,” or a setting that distinguishes success from failure; the machine cannot learn anything that would be new with regard to that algorithm or setting.  This last level can never be changed by the machine itself, for the machine has to use this level in order to change anything.  For, as we have said, nothing material can reflect on itself or contain itself as a whole.18
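
The point about machine learning can be made concrete with a deliberately small sketch (toy data and invented names; no particular system is described).  However adaptive the updates, the criterion of success and the update rule sit at a level the learner uses but never touches:

    # Toy gradient-descent learner: fit weight so that prediction = weight * x.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # invented data; target = 2*x

    def loss(prediction, target):
        # The fixed criterion of success: the loop optimizes against this
        # function, but no operation in the loop rewrites it.
        return (prediction - target) ** 2

    weight = 0.0
    learning_rate = 0.01  # a setting the learner does not choose

    for _ in range(1000):
        for x, target in data:
            prediction = weight * x
            gradient = 2 * (prediction - target) * x  # d(loss)/d(weight)
            weight -= learning_rate * gradient        # the fixed update rule

    print(weight, loss(weight * 1.0, 2.0))  # weight ends up close to 2.0

Everything that “learns” here is the weight; the loss function, the learning rate, and the update rule are the last level, employed at every step and for that very reason never themselves up for revision by the loop.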

For us, too, there are, of course, limitations.  We certainly cannot know everything about ourselves, nor can we change ourselves simply by an act of will.  But we may arrive at an insight (or a conversion) that makes us want to change our ways and opinions, whether or not we succeed.  We can simply come to dislike ourselves.  No computer can become depressed about itself or puzzled by the meaning of its existence.  We may not know the answers to these questions either, but unlike the computer we can and do ask them.  Nor will a computer ever commit suicide out of self-loathing; for that, too, is an act of reflexivity, of turning against oneself as a whole.  Suicide is quite different from having an auto-destruct mechanism, which is just one part of the machine disabling another part, so as in effect to destroy the whole machine.

That we, on the other hand, can sadly take our own lives is just the negative side of the positive ability to transcend ourselves in reflection, to change, to become creative, and indeed to take our life into our hands – not so as to cast it away, but to promise it to God or to another human being (in religious or marriage vows).19  We, in our reflexivity, are the only beings that can take possession of our whole life and give it away.  A machine cannot give itself away, because it does not possess itself; rather, it is and always has been our possession.  It neither has itself nor transcends itself, for only spiritual beings have that ability.

References.

1.  Cf. A. Ramelow, “In A.I., Mind does not Matter” (forthcoming in Euntes Docete).

2.  Willard Van Orman Quine, “On What There Is,” in: From a Logical Point of View (New York, Hagerstown, San Francisco, London: Harper Torch, 1963), 1-20, at 1. If computers cannot entertain universals as universals, then even less will they have this most universal thought; this may be why it can be said that they also do not have a “world” within which everything else is situated; Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment (Cambridge, Mass.: MIT Press, 2019), 102.

3.  Aquinas explains this in terms of two minds, which can contain each other in a way that two bodies cannot: “It is impossible, furthermore, for two bodies to contain one another, since the container exceeds the contained. Yet, when one intellect has knowledge of another, the two intellects contain and encompass one another. Therefore, the intellect is not a body.” Aquinas, Summa contra gentiles, II, 49, n. 7 (all translations of the ScG are by Anton C. Pegis [New York: Hanover House, 1955-57]). What Aquinas says here about two minds can be taken of one and the same mind appearing twice, namely in self-possession. The soul also knows its own powers in a reflexio or “complete return to itself”; Aquinas, De veritate, q. 1, a. 9, and Super Librum De Causis, prop. 15. The more spiritual the existence, the more immediate the return: God knows himself by himself (as in Aristotle’s noesis noeseos), without first being informed by other things; Summa theologiae I, q. 14, a. 2 ad 3. On the other hand, while our intellect knows itself, our senses do not, because they require a material organ: the eye would need another eye to see itself.

4.  Past, present, and future are part of what “I” refers to (whether we know it or not); hence the identity of the temporal parts transcends time and the here and now. This is why Aquinas can say: Esse autem nostrum habet aliquid sui extra se: deest enim aliquid quod jam de ipso praeteriit, et quod futurum est (“Our being has something of itself outside itself: for what has already passed of it is missing, as is what is yet to come”); Aquinas, In I Sent., d. 8, q. 1, a. 1, sol.; cf. Armand A. Maurer, “Time and the Person,” Proceedings of the American Catholic Philosophical Association 53 (1979): 182-193, at 182.

5.  Especially in the school of Dieter Henrich; see, for example, his Fichtes ursprüngliche Einsicht (Frankfurt/Main: Vittorio Klostermann, 1966), or Manfred Frank, Selbstbewußtsein und Selbsterkenntnis (Stuttgart: Reclam, 1991).

6.  Or self-referential statements like “this sentence is false.” Because Gödel has been invoked against machine intelligence (e.g., by R. Penrose), it may be worth pointing out that the statements relevant to Gödel’s incompleteness theorem – those that our mind can recognize to be true without being able to prove (deduce) them – are self-referential statements like “this statement is not provable in this system.” Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Cambridge, Mass.: Harvard University Press, 2021), 13.

7.   “…the action of no body is self-reflexive. For it is proved in the Physics that no body is moved by itself except with respect to a part, so that one part of it is the mover and the other the moved. But in acting the intellect reflects on itself, not only as to a part, but as to the whole of itself. Therefore, it is not a body.” Thomas Aquinas, Summa contra gentiles, II, 49, n. 8.

8.  “For it is only by quantitative commensuration that a body contains anything at all; so, too, if a thing contains a whole thing in the whole of itself, it contains also a part in a part of itself, a greater part in a greater part, a lesser part in a lesser part. But an intellect does not, in terms of any quantitative commensuration, comprehend a thing understood, since by its whole self it understands and encompasses both whole and part, things great in quantity and things small. Therefore, no intelligent substance is a body.”  Thomas Aquinas, Summa contra gentiles, II, 49, n. 2.

9.  Therese Scarpelli Cory, Aquinas on Human Self-Knowledge (New York/Cambridge UK: Cambridge University Press, 2014), 206.

10.  Thomas Aquinas, Summa contra gentiles, I, 18, n. 2.

11.  Thomas Aquinas, Summa contra gentiles, I, 18.

12.  Thomas Aquinas, Summa contra gentiles, II, 6, n. 6.

13.   “… any God capable of designing anything would have to be complex enough to demand the same kind of explanation in his own right. God presents an infinite regress from which he cannot help us to escape.” Richard Dawkins, The God Delusion (London: Bantam Press, 2006), 109. Oddly, he himself then proceeds to explain complexity from simple beginnings in evolution.

14.  “[T]his new deity will be as omniscient and omnipotent as any previous vision of God. In the face of such power, Levandowski believes, humans will merely submit and pray to be spared.” Galen Beebe and Zachery Davis, “When Silicon Valley Gets Religion,” quoted in Ted Peters, “Artificial intelligence, Transhumanism, and Frankenfear,” in AI and IA: Utopia or Extinction? ed. Ted Peters (Adelaide: ATP Press, 2018), 15-42, at 38. See also Anselm Ramelow, “Technology and our Relationship with God,” Nova & Vetera 22 (2024): 159-186.

15.  Jaron Lanier, Who Owns the Future? (New York: Simon & Schuster, 2013), 125.

16.  I.e., some “superintelligence” redesigning itself and getting a patent for it; David J. Chalmers, “The Singularity: A Philosophical Analysis,” Journal of Consciousness Studies 17 (2010): 7-65; Nick Bostrom and Eliezer Yudkowsky, “The Ethics of Artificial Intelligence,” in Cambridge Handbook of Artificial Intelligence, ed. William Ramsey and Keith Frankish (Cambridge: Cambridge University Press, 2014), 316–34, at 329-333; Raymond Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking, 2005).

17.  Animals, too, though conscious, are defined by their relationship to their ecological niche (to which they are fitted genetically); they cannot reflect on this relationship (and, with it, on themselves), nor step out of the niche. But we can do this, and that is why we live everywhere on the planet (and perhaps soon in space).

18.  A ‘subject’ is “a ‘system’ which [is] once more confronted with itself as a whole, and hence cannot simply be thought of on the lines of a computer made up of different parts, which in spite of all built-in controls, cannot once more manipulate itself as a whole.” Karl Rahner, “Person,” in: Sacramentum Mundi vol. IV (New York: Herder & Herder, 1969), 404–19, at 417. Thus, a debugging tool is a computer program that is used to test and debug other programs (the “target” program), but not itself.

19.  Self-transcendence is the condition of the possibility of true intersubjectivity: anticipating another’s viewpoint. Computers can be programmed to behave as if they did this, but they do not intrinsically do it, nor do we expect them to: that is why we do not tell jokes to computers, befriend them, or ask them to pray for us. – Phenomena like shame also depend on such intersubjective self-transcendence. Computers (and animals) do not blush, because they lack intersubjectivity. Roger Scruton in particular has insisted on this difference; Roger Scruton, On Human Nature (Princeton/Oxford: Princeton University Press, 2017), 50-78 (and throughout).

 

 
