The Stainless Gaze of Artificial Intelligence: A Lacanian Examination of Surveillance and Smart Architecture
Summary:
This study examines the phenomenon of surveillance through AI. The Lacanian big Other is central to this problem of surveillance, not only for the paranoiac who assumes that someone is watching him, but also for how this observer is conceived and where he is located. With the ideas of the “smart city” and the “smart home,” artificial intelligence (AI) increasingly takes the place of the watchful big Other, embodied in architectural space and anchored in electronic virtuality. However, AI is thoroughly infantile in the Lacanian sense: it has no access to the objet petit a. While the gaze of AI positively corresponds to the panoptic point of view that Foucault identifies, it simultaneously sees less than we would expect. This essay considers this “stainless gaze” as an inversion of the “gaze as stain” introduced by Joan Copjec, focusing on the social implications and the consequences for the architectural space we inhabit. The result in modern architecture is an invisible panopticon that no longer relies on the spatial transparency the classical panopticon required. This new panopticon is far more aware of all our actions, but it is at the same time cut off from the “object of desire,” depriving it of access to social knowledge as such. This has consequences for the constitution of the subject within a surveilled society, consequences that Lacanian psychoanalysis allows us to indicate.
I. Introduction
Today we are watched more than ever, but the watcher has changed. It is neither the Christian God nor the government agent who is primarily assumed to be watching, but the algorithms of private corporations. These algorithms, however, are not an absent watcher; they rely on sensors and on the generation of data by physical systems. Surveillance by algorithms no longer concerns just our internet behavior or our smartphone use, but increasingly the objects within our homes and these homes themselves. The Internet of Things, the smart city, and the smart home introduce variants of artificial intelligence (AI) into the architecture that surrounds us. From lightbulbs to juicers and toys, everything can connect to an external provider, and the data generated is used to predict the user’s daily interests and to improve the product. There are several approaches to the smart home today, focusing on elderly care (Bitterman & Shach-Pinsly, 2015), on improving ecological efficiency, and on a more general luxury smart house touted as enabling ease and new interconnected appliances (Wilson et al., 2017). The focus of this text is on the latter, which is the most heavily marketed one. From a psychoanalytic perspective, these buildings themselves operate in the place of a watchful big Other based in electronic virtuality, as they surveil the smart home “user’s” behavior, report it back to the user, and send this data to service providers. This means that the psychoanalysis of AI includes architecture taking the place of the big Other. This integration of technology into our living spaces turns the architecture of the smart building itself into a “skin” (Bitterman & Shach-Pinsly, 2015, p. 271) of surveilling machinery.
The smart home today is more than ever an assemblage (Maalsen, 2020). There is a broad variance of smart architecture, as no dominant market leader has established itself. This also means that the data gathered is often not centralized, even though systems like Amazon’s Alexa and Google Nest take up the role of managing other smart systems such as security, lighting, and kitchen appliances. The main recipient of this broad quantification of everyday life is instead the owner or user of the smart building herself, because the user is where all the data comes together. The concrete and localized interactions are therefore in focus: how these algorithms mirror the user and what this means for the surveilled subjects. This perspective differs from the usual approaches to surveillance capitalism and its variants, which essentially accept the marketing spin on surveillance, i.e., that it actually understands us. For example, Shoshana Zuboff assumes that the great surveillance tech corporations like Google produce economic gain from “behavioral surplus” and that this constitutes a correct and universal analysis of our behavior (Zuboff, 2019).
In contrast to this marketing spin of a correct and universal analysis of our behavior, computers in general are, in the Lacanian sense, infantile, as they have no access to the objet petit a (Žižek, 2017) – an aspect of AI that has been the focus of a prior study (Heimann & Hübener, 2023). Notably, the position on the psychoanalysis of AI presented here differs from other current approaches (e.g. Millar, 2021; Nusselder, 2006; Rambatan & Johanssen, 2022) by focusing on the formal structure of computation and on the difference between Lacanian logic and the Boole/Frege/Russell tradition, to which computer science cleaves. The object of this study, the interaction between human and machine as a subjectivation of the surveilled subject, is at the same time difficult and easy to access. On the one hand, we can access with great ease the basic ways computers operate, because we are not required to infer them from behavior or speech but can simply reflect upon their internal logic. Nothing is barred to us in this sense; not even the problem of explainable AI is a real hindrance here (compare Arrieta et al., 2020), as we have access to all the axiomatic decisions that computers rely on. The logic of our own minds, however, is not as accessible, especially in the case of unconscious processes, as understood by modern psychoanalytic (i.e. following Lacanian psychoanalysis) philosophy and epistemics (e.g. Žižek, 2012, p. 274). This logic of the unconscious includes the logically essential but radically indeterminate dimension Lacan called the objet petit a, which forces the analysis of the subject to proceed by recourse to the scientific core method of speculative mathematical reconstruction in the very precise sense that Quentin Meillassoux (2008, p. 115) gave it: by utilizing “the capacity, proper to every mathematical statement, through which the latter is capable of formulating a possibility that can be absolutized, even if only hypothetically” (p. 126).
However, this capacity, as Meillassoux demonstrates, only comes to the fore if we give up the metaphysical (Parmenidean) mathematico-logical approach that was instituted by Boole (1847):
Let us employ the symbol 1, or unity, to represent the Universe, and let us understand it as comprehending every conceivable class of objects whether actually existing or not. (p. 15)
In this case, the utilization of the capacity of thought that Lacanian logic opened up is significant. Alain Badiou’s philosophical work demonstrates how strongly modern mathematics can ground the capability to contemplate a central problem of subjectivity. This problem, which computers cannot emulate, leads to the subject’s central misrecognition through the machine and shapes their position towards it. A further reason for this logic-centered approach is that we cannot rely on the instruments of a phenomenological, descriptive transcendentalism to access a symbolic structure that we can discern as formally necessary, in the distinction between phenomena and noumena at least from Kant onwards (1967, B310/A255), but cannot experience. Logical objects and formal distinctions between the intelligible and the unintelligible, just like numbers, are not objects of possible positivistic empirical research, so any current empirical method relying on a positive descriptive framing (mirroring the Parmenidean approach of Boole) is thereby excluded from offering any insight. We must instead rely on the methods of the continental tradition of logic (instituted by Heidegger and Freud), with its reflections on negativity, to reach the object at hand. Accordingly, the limits of formalization are themselves constituents of the real (Badiou, 2006, p. 5), which indicates the specific empiricism of this logic. This is especially relevant since, in order to properly understand the logic of the unconscious, we need to be aware that the ontologically indeterminate structure of the unconscious has not been properly discussed outside of continental logic.
This methodical approach allows us to indicate that the gaze of AI, while positively corresponding to the panoptic viewpoint that Foucault (1995) identifies in Discipline and Punish, at the same time sees less than we would expect. This lack only comes to the fore if we analyze current AI as determined by an “algorithmic unconscious”[1] (Possati, 2020) based on the Boole/Frege/Russell tradition of logic, i.e., the tradition of logic that does not follow the insights of Freud and Heidegger that the void/indeterminate as such is part of any proper logic[2]. To conceptualize the gap between the logic of the unconscious and the machine’s logic with regard to the subjectivation it produces, the following essay will introduce this gaze without stains, based on the “gaze as a stain” of Joan Copjec (1994). The result in modern architecture is an invisible panopticon – no longer reliant on the spatial transparency that the classical panopticon required. This new panopticon is much more aware of all our actions, but simultaneously it has difficulty accessing unconscious social knowledge. This essay proposes that this specific lack of the Other, the inability to conceive the lack as such, structures algorithmic surveillance. This shifts the idea of surveillance: the big Other might be there in Bentham’s surveillance architecture, or more classically in religious architecture, but this new data-driven Other is much more talkative. This new home, where the architecture itself increasingly becomes the skin of technological innovations, is based on a new approach to the surveillance of its occupants and owners.
II. The Lack of Knowledge
The architecture of the panopticon, of “eyes that must see without being seen” (Foucault, 1995, p. 171), which allows the real observers to be absent, invokes an invisible but knowing Other. This marks a conception of surveillance that creates “an architecture that is no longer built simply to be seen …, but to permit an internal, articulated and detailed control to render visible those who are inside it” (p. 172). Surveillance in this sense is organized around keeping and implementing the law in the body, enabled by spatial organization. Joan Copjec (1994) argued in her seminal book Read My Desire that the Foucaultian argument of the panopticon conflates the effect of the law and the realization of the law, which turns the symbolic into a purely positive structure:
For Foucault successfully demonstrates that the conception of the symbolic on which he […] relies makes the imaginary unnecessary. […] he rethinks symbolic law as the purely positive production, rather than repression, of reality and its desires. (pp. 23–24)
For Copjec (1994, p. 14), this conflation determines and deflates desire, thus abolishing the metonymic, transfinite dimension of the symbolic that Lacan and Freud opened. We propose that surveillance must be understood with desire in mind, as Copjec argues, because it is directed at desire, not simply at observed facts. However, Foucault still offers a central insight into the structural dimension of surveillance, and even the determination of desire plays an important role in how we should approach surveillance. Foucault (1995) himself conceptualizes the watcher of surveillance as “everywhere and always alert,” yet in itself “unverifiable” (pp. 177, 201). The surveilling instance must then be virtual; it does not rely on specific others being present but is located at the structural position of the big Other. Consequently, to conceptualize surveillance means to think the surveilling big Other and its relation to desire. We therefore take Foucault’s insights into surveillance and try to combine them with Copjec’s critique.
We will first discuss how to approach the surveilling Other as everywhere and alert as well as unverifiable. We will therefore not start with real surveillance, but with the extreme phantasy of the demonic gaze from beyond and the temptation by otherworldly watchers. This may seem unconventional, but we will utilize a horror movie for the translational capabilities of this genre, which lets us encounter the anxiety in surveillance in a way that everyday experience rarely allows. The reason for this reflection of the surveilling Other in fiction is simple: surveillance is, as Foucault noted, a pervasive and normalized aspect of our modern societies; to discern it in detail requires us to approach a symptomatic, i.e., excessive, variant of this Other. The fiction that will allow us to discern this phantasy is the cult classic Event Horizon (Anderson, 1998). Central to the dissemination of the demonic Other is its relation to our desires, an element that, not by accident, nearly every fictional appearance of demons includes. However, the following makes no claim on film theory; the movie is taken as a symptom instead.
II.1 Event Horizon
The demonic Other should, of course, be assumed in a purely virtual aspect; it is the extreme idea of being seen. The film Event Horizon, a haunted house movie transposed onto a spaceship, allows us to locate this phantasmatic gaze. It is notable that the movie at first approaches its theme with a relatively hard-science framing. Still, the viewer soon realizes that the Event Horizon is not just a spaceship or technical marvel. The first trace of the demonic gaze appears when the protagonists arrive at the eponymous ship: the “bioscan” shows the whole ship filled with life, despite it having no heating or atmosphere, and despite no lifeforms being discernible throughout the film. This early scene indicating that the ship is full of life is never validated by any visual confirmation. The demonic gaze is thus spatially bound to a specific location yet seems to be nothing but this location. Consequently, the watcher’s gaze of the Event Horizon, as the cause of all the horror that happens, is one that is never located throughout the film. It knows everything about the protagonists and mirrors the watcher as “everywhere and always alert” (Foucault, 1995, p. 177), yet it remains “unverifiable” in itself (p. 201). While the ship itself is stained in some places, showing that the prior crew died violently, we never see a monstrous villain. Instead of being shown as an actor, this gaze manifests as the direct realization of the protagonists’ desires, comparable to the alien planet in Solaris (Tarkovsky, 1972). Its reality, for the viewer and the protagonists, is simply the realization of impossible knowledge. Why is this knowledge impossible? Because the idea of the demonic transfixes the transfinite character of desire, which Copjec (1994, p. 55) describes, and manifests desire in the most horrific way possible.
This manifestation and fixing of desire, otherwise kept at bay through its virtuality, is what makes this extreme idea of surveillance horrible: the fear of having one’s horrible desires realized, mirroring Foucaultian discipline.
Let us take the example of Dr. Weir in Event Horizon. We are introduced to him as traumatized by the loss of his wife Claire. His first scene is already notable: the film shows him shaving and repeatedly looking at a bathtub. The camera closes in on the razor, as if waiting for him to cut himself. Dr. Weir does not cut himself in this scene, but looking back from the end of the movie, we realize that this violent, at this point still virtual, cut is a possible place of identification for him. His wife committed suicide by cutting her wrists in the bath. The cuts, here markers of an actively staged suicide, should then be taken as the manifestation of Claire’s own will and of her subjectivity; the objet petit a that Dr. Weir loved. This originally unconscious transference of Claire as a subject into the cut that ended her life is later realized by the demonic gaze. After falling fully under the demonic influence, Dr. Weir is portrayed with cuts all over his body resembling those with which his wife killed herself in the bath. Not only has he traumatically lost his wife, he also starts to identify with her subjectivity through the cut of her wrists. This desire of identification with her is not manifested at the beginning of the film; he keeps it painfully in check, even shaving with a razor resembling the one his wife used to kill herself. The desire portrayed here is not part of his original trauma but an active transformation of his loss into horror; the ship manifests itself by determining the dimension of desire down to its horrible content.
Two things are important here: first, the absence of an identifiable gaze, except in its knowledge. We never become aware of the cause of these demonic desires, except in reference to its being outside known space and time. Secondly, this gaze is immediately aware of the most violent desires, such as the identification with the violently inflicted stains on the body of Claire. Of prime importance here is the demonic element that Plato marked as located beyond the usual measure.[3] This demonic Other knows more than we do because it represents an impossible knowledge of the beyond. This is mirrored in the film: the ship is said to have gone beyond the scientifically describable and returned, although its appearance seems unchanged.
This demonic watcher, while not directly indicated in modern fantasies of surveillance, still allows us to grasp central elements of our conception of it. It is the extreme variant of the uncanny feeling that something or someone knows more about me than I do. Take, for example, the awesome capabilities often attributed to state intelligence, in contrast to the bureaucratic banalities that now and then come to light. This idea of the (evil) watcher aware of our deepest desires also regularly appears in discussions of modern automated surveillance: the uncanny feeling that the marketing algorithm picked out beforehand what we wanted to buy – a feeling well documented by Oscar Schwartz on the now defunct online newspaper theoutline.com (Schwartz, 2018). The Other that knows us better than we know ourselves appears today on the basis of a belief in algorithms (Beer, 2017) that approaches the impossible. However, a distinction must be made here: the demonic watcher doesn’t infer our desires; it is assumed to be directly aware of them, and this is the cause of anxiety.[4] The demonic watcher is therefore not a model for surveillance as such, but for a (paranoiac) subject constituting itself as being surveilled. The anxiety it causes doesn’t stem from our being watched, but from the idea that our desires are already known, though not by ourselves. The subject of this demonic Other loses the freedom of their desires and becomes a subject in the Foucaultian sense, but, as the movie shows, desire is central to this. Renata Salecl likewise noted that the cause of anxiety in being watched is not that the watcher sees us, but that by seeing us it might see more than we ourselves can see:
[W]hat provokes anxiety is not that the idea of Big Brother (the controlling agency from Orwell’s novel) has become materialized, but rather that the lack is lacking—i.e. that there is no place for inconsistency, nonwholeness. (Salecl, 2004, p. 41)
Such a (phantastic) knowledge then determines the originally virtual space of desire and transforms desire into a fixed reality. We now have a basic idea of how desire is linked with surveillance, as Copjec’s critique of Foucault pointed us in this very direction. The idea of a demonic watcher, while not describing a real surveillance situation, nonetheless allowed us to link Copjec’s critique of Foucault as lacking desire with Foucault’s own conception of the surveilling gaze.
II.2 The Lives of Others
Now, let us look at surveillance in a more realistic frame of reference. Another movie in which we can discuss surveillance and the gaze, and link it much more easily to modern concepts of surveillance, is The Lives of Others (Donnersmarck, 2006). The mode of surveillance that The Lives of Others portrays dominated most of the last century; a mode that adds to the space it watches. This surveillance is characterized by its traces. Tiny cameras, cables hidden under the wallpaper, and rooms occupied by the watchers are the standard elements that most spy movies of the last century displayed. Accordingly, the search for this surveillance plays a central role in our conceptualization of its watching Other. While this form of surveillance strives to leave as few traces as possible, it could not avoid leaving some. However, its effect on the watched space is diminished compared to the surveillance of the panopticon, where the spatial organization itself is the enabler of surveillance.
Slavoj Žižek remarked that the portrayal of the Stasi in this movie makes it look much more harmless than it historically was. The film uses a criminal intent, a bureaucrat’s desire for the writer Dreyman’s girlfriend, as the cause of the surveillance and of the intrusion into the writer’s life, whereas the actual Stasi needed no criminal intent to behave as it does in the film (Žižek, 2020, pp. 151–152). Žižek’s point is, of course, that this implicit structure is not an obscene and irrelevant underside but essential to the thing itself, and this is true of the act of surveillance too. The surveillance as portrayed in the film allows us to discern the dimensions of the demonic Other still at large. This is shown in the early teaching scene, in which the protagonist Wiesler, a Stasi Hauptmann, instructs a group of Stasi officers in questioning and torture techniques. The aim of state surveillance, it is shown there, is to force the subject of torture to admit his desire, so that the torture and surveillance are retroactively justified. As the cafeteria scene shows, where an underling of Wiesler’s commanding officer is painfully questioned after making a joke about the regime’s leader, a desire to determine desire rules internally and externally. Central to this is that the knowledge base of the surveilling group is itself determined by the signifier’s reach, in contrast to the demonic. The demonic transformation of desire as virtual into desire as determined is changed here into an institutional desire to typecast all characters into specific political roles (e.g. loyal-and-potentially-traitorous and traitorous).
But from within the signifier’s reach, there is no direct access to the desires of the victim here; instead, the surveilled subject must surrender their desires to the surveillance system to enable a retroactive determination based on the preformed master signifier: that everyone’s desire marks them as potentially guilty. With this institutional desire in mind, we can further analyze the idea of surveillance documenting itself in the movie.
Central to the story is that Wiesler’s night-shift colleague is a much better observer with regard to the actual aims of the Stasi: he is interested in the observed couple’s intercourse but has no tendency to be lured into their life. And it is hard to argue that he isn’t the better observer in purely formal terms, as Wiesler himself gets lured into Dreyman’s life. Had Wiesler simply been the voyeuristic observer that the night-shift officer was, Dreyman might have incriminated himself without any intervention being needed. This means that Žižek’s critique of the film holds true even regarding the surveillance. The obscene voyeuristic underside, desire in its variations, is bound up with the surveillance. It seems, therefore, that the movie could be read at the level of the letter as an argument for computational surveillance: the computer won’t have a stake in the surveillance, won’t fall for the lure of the subject’s life, and will still record everything.
Still, the obscene observer would also have fallen into the trap that Dreyman and his friends created to determine whether his flat was under observation, by accepting at face value what was presented to him. They stage a fake escape with the help of a West Berlin ally to determine whether they are watched. The reliance on taking things as they are presented would have failed the night shift here. The metonymy of the signifier, as the indeterminate within discourse, intervenes here; the dream of inferring the desire of the watched from language and behavior as clara et distincta reaches a limit in the lure set for the observer. Both the computational observer and the voyeuristic observer would have fallen for the lure designed to coerce the Other into laying his cards openly on the table. This is central, as it is the structural place that Copjec marks as eluding purely positive determination when considering surveillance. We can therefore identify three dimensions where desire and the metonymy of the signifier interact with surveillance: first, in the determination of desires; secondly, in the Other’s desire to surveil; and third, in the problem of pinning down this desire as expressed in language.
III. The Stainless Gaze
To specify the problem that the lure indicates for automated surveillance, we should look at the logic that is utilized in the factual description through a computer. We assume that computerized surveillance means that it is fully automated, i.e., the actual surveillance is done by an algorithm. The thesis presented in this subchapter is that computers in general, and therefore AI in smart architecture as a special case, have a specific lack of knowledge. Computers and AI in smart architecture cannot surmount this lack, which becomes apparent if we compare the computer’s logic with Lacanian logic. The critique that Copjec mounts against Foucault can be mounted against AI as well: that it operates on “symbolic law as […] purely positive” (Copjec, 1994, p. 24). But what if the Foucaultian concept of the positivity of the symbolic is enforced structurally, not ideologically?[5] This is what happens in the way AI and modern computation approach the symbolic. We do find explicit models that essentially base social identity purely on positive properties (Y. Wu et al., 2017), and we know that computer science developed from the Boole/Frege/Russell tradition of logic (Priestley, 2011). This means that its approach to substantial negativity is essentially reduced to the zero: as a starting point, as the zero level, or as a non-change within a system. The void or indeterminate as a virtual space is essentially disallowed in this approach to logic, which in turn means that the problem of desire is not even positable within this positive logic, as Copjec (1994, pp. 25–26) also remarks.
This unpositability requires further explanation. A positivized symbolic can be found in the avoidance of the void that characterizes the Boole/Frege/Russell tradition of logic. Boole, the founder of this tradition, bases his calculus, as quoted above, on the principle of a totalized One. This approach disallows desire from the start; the transfinite character of symbolic systems, and even the possibility of always creating a bigger comprehending class, is simply excluded by starting with a variant of the Spinozean absolute. Boolean logic therefore has no access to desire or the objet petit a, as it assumes a whole or a system, marking a central difference from Lacanian thought, as Bruce Fink (2004, p. 152) detailed. What is constituted in the Boole/Frege/Russell tradition of logic is instead an avoidance of the negative. This is well self-documented in Carnap’s critique of Heidegger (Carnap, 2004). Carnap criticizes Heidegger for using the concept of negation as an active virtual possibility (“das Nichts nichtet”) instead of using negation only as the negation of existence. In Carnap’s view, this is a misuse of negation as the determined negation of something. He insists that the only logical approach to negation is privation, i.e., the specific negation of an existential judgment, which disallows psychoanalytic concepts of negativity as well. Notably, this critique is a recurring phenomenon: now and then someone attempts to demonstrate that Heidegger’s work on negativity is not real philosophy but essentially bad poetic scribbling, as Stephan Käufer (2005, p. 146) has documented. Modern computer science is a specific offshoot of this tradition (Priestley, 2011), operating within a further physical curtailment of its symbolic capabilities. The avoidance of a more complex concept of negation thereby remains active within modern computer science.
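The privative character of Boolean negation can be made concrete in a few lines of code (an illustrative sketch of our own, not drawn from the sources above; the names `UNIVERSE` and `neg` are hypothetical): in a Boolean algebra, the complement is always taken relative to a fixed universe, Boole's "1", so negation merely selects the remainder of a given totality and leaves no indeterminate residue.

```python
# Boolean negation as privation: the complement of a class is always
# taken relative to a fixed, totalized universe (Boole's symbol "1").
UNIVERSE = frozenset(range(8))  # an arbitrary finite stand-in for the One

def neg(cls: frozenset) -> frozenset:
    """Negation is nothing but the remainder of the universe."""
    return UNIVERSE - cls

evens = frozenset({0, 2, 4, 6})
odds = neg(evens)

# Double negation restores the original class: no residue, no void.
assert neg(odds) == evens
# A class and its negation jointly exhaust the totality:
assert evens | odds == UNIVERSE
# The "void" is only ever the empty class, the privation of everything:
assert neg(UNIVERSE) == frozenset()
```

Nothing in this calculus can represent negation as an active, virtual possibility; it is exhausted by the subtraction from a presupposed whole.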
We can identify this focus on systems as an assumed One, following Boole, even in many of the relatively recent shifts in AI theory (Agre, 1997; Freeman, 2000, pp. 28–30). The idea has therefore never left computer science. In stark contrast to this, the specific negation that Lacan’s logic utilizes requires the rejection of the One, which, as Badiou (2006, pp. 272–276) and Meillassoux (2008, pp. 103–108) demonstrated, is the result of accepting Cantor’s mathematical insights. One could consider this the unconscious of modern logic, comparable to Possati’s (2020) algorithmic unconscious.
In contrast to this tradition of logic, which disallows the objet petit a and the pas-tout, Lacan allows us to include negation systematically as the condition of positive identity. Lacan’s approach to logic is not classical; instead, as Alain Badiou (2006) notes, “the clear Lacanian doctrine” is to think that “the real is the impasse of formalization” (p. 5); that is, to include the failure of formalization in the logical calculus. This is incompatible with the logical tradition determining modern computers, but not with scientific thinking as such, mirroring an insight of the Bohr-Einstein debate that Werner Heisenberg (1984) formulated: “The observation itself indicates a break in the [consistent] temporality otherwise preserved in formalism [author’s translation]” (p. 69). This also indicates that the negative is structurally more than just a break or stain included within what we can call the positive or consistent. Copjec’s way of reading this break as a stain and as constitutive is also conceptualized by Lacan in the formulae of the phantasma (Lacan, 2014, p. 49) (for a more complete analysis of these formulae, see Heimann, 2022):
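The two formulae in question do not survive in the text above; as a reconstruction (following the notation conventional for Seminar X, and therefore an assumption rather than a reproduction of the source's own typography), they can be transcribed as:

```latex
% Conventional transcription of the two formulae of the phantasma:
% the neurotic's phantasma relates the barred subject to the objet petit a,
\$ \;\lozenge\; a
% while the pervert's phantasma inverts this relation.
a \;\lozenge\; \$
```

Here $ denotes the subject barred by the signifier and a the objet petit a; the lozenge marks their relation of envelopment and disjunction.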
In both formulae the objet petit a is central for the symbolic mirror of the Other that is used here. For the pervert, it marks the original image, which the subject barred by the signifier fails to reflect, while in the neurotic’s case, it is the stain of a virtual objet petit a that enables the subject to assume an identity with the subject barred through the signifier. Lacan (2014, p. 97) notes that in the neurotic’s phantasma, the mirror image appears as a double of the original. For the neurotic there is therefore no difference qua mirror image; instead, the phantasma’s structure is here identical to the original image, because the neurotic’s phantasma has the pervert’s phantasma as its mirror image, indicating a double mirror. The difference between the original and the mirror image that dominates the pervert’s phantasma, which we can perceive as the inversion in optical mirroring, is not central for the neurotic. This indicates that there is no enantiomorphic relation between (S) and (a ◊ $). This is because, according to the optical mirror function, the mirror image of two enantiomorphic images is not enantiomorphic to itself but identical.[6] The subject thus no longer needs the mirror as a transforming mediator in order to be identified with the mirror image. The subject’s identity is then only constituted in a double mirror relation, which includes the indeterminate. The objet petit a, as it is utilized here, is therefore not just an excess but the negative (real) condition for a positive identity. Identity is produced by the insertion of the object resisting formalization, mirroring Joan Copjec’s (1996, p. 22) argument of how resistance produces the body.
Let us look at the purely positive symbolic mirroring Other through this formula of the phantasma: If we remove the possibility of a virtual[7] (a) from the formula, we always end up with a variant of the pervert’s phantasma, as the stunted symbolic register can no longer sustain the gap that is the objet petit a, while the neurotic formula cannot be expressed at all. The formulae of phantasy that Lacan offers are rewritten by the machine as a variation of the pervert’s, where the virtual object a is not just the non-identical basis but no longer within the symbolic space it originally supports. This means that the inherent lack of the Other also cannot come to the fore. A symbolic space that no longer supports the objet petit a cannot give rise to the lack in the Other, S(Ⱥ). Instead, the subject barred by the signifier would appear as if whole, an imaginarized symbolic. The machine then enacts only the “state of knowledge and – display,” instead of the drive moving towards the impossible object a (Copjec, 1996, p. 27). This would shift our self-relation in the phantasma, through which we approach “reality in so far as it is generated by a structure of fiction” (Lacan, 2021, 19.05.1971). It creates a Janus-headed surveillance that, as Clint Burnham (2022) notes, appears as a (fully) phallic signifier of power, but at the same time is quite incapable of doing what it is supposed to do.
Consequently, the computer’s inability to consider desire leaves its actual epistemic power wanting. This inherent lack of AI surveillance can also be backed up with negative empirical studies. A recent study based on the Fragile Families and Child Wellbeing dataset concluded that, despite a massive effort, only low predictive power (~0.2) could be achieved by machine learning on the data used (Salganik et al., 2020). Other comparable studies confirm the low predictability of social processes (Littlefield et al., 2021; Robles-Granda et al., 2021). To be clear: if no predictions can be made, no laws of any kind could be found by these methods. This in turn means that no underlying macrosocial reality could be found by these computations, which would be central to statistical realism as the approach to social data (Desrosières, 2001, p. 348). Studies that publish such negative results are rare; however, the Salganik et al. study alone shows that there is a massive problem in social AI research: The big (data) Other does not exist.
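To give a rough sense of scale (a minimal, hypothetical sketch, not the method of the cited studies): if we read the reported predictive power of ~0.2 as a coefficient of determination (R²), it means a model accounts for roughly a fifth of the variance in an outcome, leaving the bulk unpredicted. The following simulation, with an invented signal-to-noise ratio chosen to yield R² ≈ 0.2, illustrates what such a figure looks like:

```python
# Illustrative only: an outcome driven mostly by unobserved noise,
# with one weak observed feature. The 0.5 coefficient and unit noise
# are arbitrary choices that make the true R^2 equal 0.25/1.25 = 0.2.
import random

random.seed(0)
n = 5000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [0.5 * x + random.gauss(0, 1) for x in xs]

# Ordinary least squares fit for the single feature.
mx = sum(xs) / n
my = sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
alpha = my - beta * mx

# Coefficient of determination R^2: share of variance the model explains.
ss_res = sum((y - (alpha + beta * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
print(round(r2, 2))  # close to 0.2: most of the variance stays unpredictable
```

Even a model that captures the entire available signal here cannot exceed an R² of about 0.2, which is the sense in which low predictive power limits what any law-like claim could be built on.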
Simultaneously, methodological bias acts upon these AI analyses, as they are often enough “opinions embedded in math” (O’Neil, 2017, p. 50). If we add to this the ontologically necessary status of bugs and software failure (Floridi et al., 2015), we see a return of the unconscious that is structurally removed from its material calculations. Transferring this to the problem of automated surveillance, we arrive at a system of surveillance in the position of the Other with a very specific approach to whatever it watches: It denies the existence of desire and reflects this desireless reality back at its users, while being determined by a specific interest external to its internal logic. This marks a failure of its intent, because the seemingly objective reality is still determined by desire, only by the desire of the creator of the algorithm, of which the algorithm itself has erased all traces. This failure makes it obvious that despite the forbidden gap, the discontinuity of the unconscious (Lacan, 1998, p. 25) is still there, but its mode of appearance is mediated through this absolute denial, comparable to foreclosure. This approach we will call the stainless gaze, in reference to Copjec’s analysis of the gaze as a stain. In contrast to the objet petit a as the gaze, which is a reflection of the subject’s inner gap, this stainless gaze only produces the consistent and overdetermined order that Lacan (2006a, p. 35) offers us in the analysis of the purloined letter. However, there is no way to approach the silence that the id and the desire of the Other mark: a computational model’s zero might represent this absence (Brahnam, 2018, pp. 233–234), but it cannot operate with it, as Lacan was well aware:
[I]t is not because it lacks the supposed virtue of human consciousness that we refuse to call the machine to which we would attribute such fabulous performances a ‘thinking machine,’ but simply because it would think no more than the ordinary man does, without that making [it] any less prey to the summonses of the signifier [l’appel du signifiant]. (Lacan, 2006a, p. 45)
The machine does not repress actively, but cannot but operate in a repressed field of the unconscious, and in this aspect it is comparable to the “ordinary man.” Still, the unconscious and the indeterminate of desire do not disappear simply because they are structurally locked out by computation, which is what Lacan describes here. While Lacan, of course, references here the cyberneticists’ dreams of his time, the insight he articulates applies to modern computers as well. This means that l’appel du signifiant is still a part of the structural position that today’s AI occupies; it is, however, as inaccessible to it as the pas-tout is to the infant, while still determining its use. Disregarding all phallic phantasies of AI as strong AI, i.e., the idea of a machine subject of identical or even stronger capabilities than humans, we can therefore still posit the machine within the “torture house of being” (Žižek, 2014), where it acts as an automated variant of the Other as a mirror in which we constitute our identity.
IV. The Walls are Hard of Hearing
Where do these theoretical problems of surveillance and AI leave us? We can likely expect the current trends of privatized surveillance and AI integration to continue (Guo et al., 2019), so that automated surveillance, especially surveillance oriented toward desires, will expand (Charitsis et al., 2018). However, by missing a central element of the surveilled subject’s structural position, it will reduce modern subjectivity to its calculable aspects.
What are the consequences of this? The architecture of homes, if ordered to the regime of a stainless gaze, means structuring the surveilled within a proactive regime of efficiency and calculability. This is the opinion embedded in math, taken from economics, that we find hidden behind the algorithm’s apparent objectivity. In its current iterations, smart architecture is made to help the modern worker become more efficient in managing her work-life balance (Humphry & Chesher, 2021, p. 1178; Rottinghaus, 2021). Notably, this regime of efficiency has the backdrop of not improving the leisure time of the occupants of smart homes (Maalsen, 2020, p. 1537). Instead, time is now needed to ensure that the monitoring processes as such are working correctly. This could be called the next iteration of work surveillance, as the direct surveillance of work environments is averse to actual productivity (Martin et al., 2016, pp. 2635–2636). Instead of a supervisor ensuring productivity, the smart home big Other encourages self-surveillance to improve productivity. There is a conflation of the roles of the surveilled and the surveilling: The surveillance is much more integrated into the surveilled lives, the surveilled must take responsibility for their surveillance, and they are expected to take its conclusions seriously. As Bunz and Meikle (2018, pp. 19–21) describe, this surveillance is not simply the extraction of data, but is also aimed at actively influencing and sometimes forcing the user into specific actions. Surveillance itself is turned into self-reflection and self-monitoring. This makes it necessary to consider the big Other introduced by the now surveilling house and how its structure twists and shifts our subject-position in a discourse with the smart architecture. This is best understood as a variant of the university discourse, which is not surprising, as Lacan (2006b, p. 206) noted that classical surveillance-heavy societies are ruled by the university discourse (Lacan, 2006b, p. 31):
How does the university discourse function in surveillance architecture today? We can assume that the master signifier can be taken as the idea of calculability, an aspect of smart homes detailed by Lynn Spigel (2005). This master signifier (S1) of the smart home is not only the generation of profits through the gathering of data; all of this is also subsumed under the header of calculable efficiency. “Keep on knowing” (Lacan, 2006b, p. 106) is turned into “Keep on making things quantifiable to improve them.” This is obviously a combination of the computer’s inability to produce anything but positive data and the economist’s fixation. Therefore, the represented knowledge (compare Lacan, 2006b, p. 44), as it stands in the agent’s position (S2), is structured by efficiency as its master signifier.
However, this idea of efficiency is bound up with the metonymy of the signifier, as there is no total efficiency, which produces a necessary excess. Locating this excess is more difficult here; it cannot be in the position of the objet petit a in the university discourse. Despite the objet petit a being impossible in the machine’s discourse, it appears in a prominent position as a desire to be determined. Desire is not determined here as a corruption or as a political position, as it was in the prior examples, but instead as something to be made calculable and improvable. Still, the system, as it cannot approach desire as virtual, will determine even those desires that we require to stay virtual, if they fit into the structure of its determination, while being unable to approach other desires at all. This is the caveat facing this surveillance discourse. The structural marker for desire (a) can be considered a virtual space here, but as a “known unknown,” if we apply Slavoj Žižek’s (2008) semiotic fourfold of knowledge. The negative space, as approached from the logic that computers rely on, is the negative as a negation of something already determined. This already determined field is not only the opinion embedded in math that O’Neil (2017) warns of. The big Other as the mirror operation shows us what we should be under the auspices of objective calculations. This is where the excess should then be located: in the position of the split subject ($).
Lacan (2006b) calls the position of the student in the university discourse “astudied” (pp. 105–106), marking the Other’s impossible demand that they produce the subject of science with their flesh, i.e., their existence as ek-sistere outside of language. The subject of computer surveillance operates within the same difficult position. The Janus head of computer-based surveillance produces a specific split, one, as we argued, that is oriented more toward the pervert’s conflictual phantasma than toward the neurotic’s integration of the lack. Essentially forced into “dispelling all the ambiguity of language,” the automatically self-surveilled subject is faced with the demand of acting “directly as the instrument of the big [data] Other’s will” (Žižek, 2006, p. 127). Instead of managing the self, this turns the ego into a pseudoautomated subject that is confronted with desires it might have wanted to keep in suspension, while being unable to reflect others altogether. However, in contrast to the machine, this is not established by the impossibility of approaching desire, but more simply by repression:
With a machine, whatever doesn’t come on time simply falls by the wayside and makes no claims on anything. This is not true for man, the scansion is alive, and whatever doesn’t come on time remains in suspense. That is what is involved in repression. (Lacan, 1991, pp. 307–308)
V. Conclusions
The surveillance aspects of modern architecture are, in the case of smart buildings, a central appearance of the Other, especially as the objective logic of the machine seems to produce clear and distinct mirror images. Never before did the living and working space itself directly appear as a talkative and overreaching Other to the subject inhabiting it. This adds a new virtual dimension to architecture that may have existed in some form in religious architecture, but not as actively mirroring a specific subject position. To return to the question posed by this article on the intersection between Lacanian psychoanalysis and architecture, we can discern a new type of big Other as a direct intersection between psychoanalysis and architecture. Far from its origins in espionage, this surveillance is privatized: first, as a discourse with an architectural space that the occupant is forced into; second, because this surveillance is no longer enforced by a watching party, but by the occupants themselves. This shifts its effect on surveillance architecture from determining spatial relations to determining a discourse with smart architecture. We can already see this shift taking place in the vanishing need for wide-open office spaces, for example. However, this new surveillance has a significantly stronger individualizing effect on its recipients, which is caused and determined by its assemblage character. As of now, there are no central smart home systems, so the unity of different surveillance technologies is created by the occupants.
However, with computerized surveillance as a seemingly rational development of the surveillance architecture and methods of the last century, what is observed also changes. As the computer communicates its findings in numbers integrated into common language, it might still seem to operate in the same symbolic space as we do, but its stunted symbolic order remains an important aspect of its mirroring. We should be careful in examining this interaction between the machine and the subject, especially as the machine does not follow the same logic that constitutes the subject.[8]
Bibliography:
Agre, P. (1997). Computation and human experience. Cambridge University Press.
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., et al. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58: 82–115.
Badiou, A. (2006). Being and event. Continuum.
Beer, D. (2017). The social power of algorithms. Information, Communication & Society 20(1): 1–13.
Bitterman, N. and Shach-Pinsly, D. (2015). Smart home – a challenge for architects and designers. Architectural Science Review 58(3): 266–274.
Boole, G. (1847). The mathematical analysis of logic, being an essay towards a calculus of deductive reasoning. Philosophical Library.
Brahnam, S. (2018). Primordia of après-coup, fractal memory, and hidden letters: Working the exercises in Lacan’s seminar on “The purloined letter.” Journal of the Circle of Lacanian Ideology Critique (10&11): 202–244.
Bunz, M. & Meikle, G. (2018). The internet of things. Polity.
Burnham, C. (2022). Siri, what is psychoanalysis? Psychoanalysis, Culture & Society. DOI: 10.1057/s41282-022-00294-0.
Carnap, R. (2004). Überwindung der Metaphysik durch logische Analyse der Sprache [The elimination of metaphysics through logical analysis of language]. In T. Mormann (Ed.), Scheinprobleme in der Philosophie und andere metaphysikkritische Schriften (pp. 17-71). Meiner Verlag. (Original work published 1931)
Charitsis, V., Zwick, D., & Bradshaw, A. (2018). Creating worlds that create audiences: Theorising personal data markets in the age of communicative capitalism. tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 16(2): 820–834.
Chibber, V. (2022). The class matrix: Social theory after the cultural turn. Harvard University Press.
Copjec, J. (1994). Read my desire: Lacan against the historicists. MIT Press.
Copjec, J. (1996). The Strut of Vision: Seeing’s Somatic Support. Qui Parle 9(2): 1–30.
Desrosières, A. (2001). How real are statistics? Four possible attitudes. Social Research 68(2): 339–355.
Fink, B. (2004). Lacan to the letter: Reading Écrits closely. Univ. of Minnesota Press.
Floridi, L., Fresco, N. & Primiero, G. (2015). On malfunctioning software. Synthese 192(4): 1199–1220.
Foucault, M. (1995). Discipline and punish: The birth of the prison. Vintage Books.
Freeman, W.J. (2000). How brains make up their minds. Columbia University Press.
Goulden, M. (2021). ‘Delete the family ’: platform families and the colonisation of the smart home. Information, Communication & Society 24(7): 903–920.
Guo, X., Shen, Z., Zhang, Y., et al. (2019). Review on the application of artificial intelligence in smart homes. Smart Cities 2(3): 402–420.
Heimann, M. (2022). The mirror operator: On Lacanian logic. The International Journal of Psychoanalysis, 103(5), 707–725.
Heimann, M., & Hübener, A.-F. (2023). AI as social actor. Journal of Digital Social Research, 5(1), 48–69.
Heisenberg, W. (1984). Physik und Erkenntnis 1927 – 1955: Ordnung der Wirklichkeit, Atomphysik, Kausalität, Unbestimmtheitsrelationen u.a. [Physics and knowledge 1927 – 1955: order of reality, atomic physics, causality, indeterminacy relations, etc.] Piper.
Humphry, J. & Chesher, C. (2021). Visibility and security in the smart home. Convergence: The International Journal of Research into New Media Technologies 27(5): 1170–1188.
Kant, I. (1967). Kritik der reinen Vernunft [Critique of pure reason]. Felix Meiner Verlag.
Käufer, S. (2005). Logic. In: Dreyfus, H. L. (Ed.) A companion to Heidegger. Blackwell Publishing, pp. 141–155.
Lacan, J. (1991). The seminar of Jacques Lacan: Book II. The ego in Freud’s theory and in the technique of psychoanalysis, 1954–1955 (J.-A. Miller, Ed. & S. Tomaselli, Trans.). W. W. Norton.
Lacan, J. (1998). The seminar of Jacques Lacan: Book XI. The four fundamental concepts of psychoanalysis (J.-A. Miller, Ed. & A. Sheridan, Trans.). W.W. Norton.
Lacan, J. (2014). The seminar of Jacques Lacan: Book X. Anxiety. (J.-A. Miller, Ed. & A.R. Price, Trans.) Polity.
Lacan, J. (2006a). Ecrits: The first complete edition in English (B. Fink, Trans.). Norton.
Lacan, J. (2006b). The seminar of Jacques Lacan: Book XVII. The other side of psychoanalysis (J.-A. Miller, Ed. & R. Grigg, Trans.). W.W. Norton.
Lacan, J. (2021). The seminar of Jacques Lacan: Book XVIII. On a discourse that might not be a semblance: 1971 (C. Gallagher, Trans.). Retrieved from http://www.lacaninireland.com/web/wp-content/uploads/2010/06/Book-18-On-a-discourse-that-might-not-be-a-semblance.pdf
Littlefield, A. K., Cooke, J. T., Bagge, C. L., et al. (2021). Machine learning to classify suicidal thoughts and behaviors: Implementation within the common data elements used by the Military Suicide Research Consortium. Clinical Psychological Science, 9(3), 467–481.
Maalsen, S. (2020). Revising the smart home as assemblage. Housing Studies, 35(9), 1534–1549.
Martin, A. J., Wellen, J. M., & Grimmer, M. R. (2016). An eye on your work: How empowerment affects the relationship between electronic surveillance and counterproductive work behaviours. The International Journal of Human Resource Management, 27(21), 2635–2651.
Meillassoux, Q. (2008). After finitude: An essay on the necessity of contingency. Continuum International Publishing Group.
Millar, I. (2021). The psychoanalysis of artificial intelligence. Springer International Publishing; Imprint: Palgrave Macmillan.
Nusselder, A. C. (2006). Interface fantasy: A Lacanian Cyborg Ontology [Een Lacaniaanse Cyborg Ontologie = Interface fantasie] (Doctoral dissertation, University of Rotterdam, 2006). Amsterdam: F&N Eigen Beheer.
O’Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
Plato. (1991). The republic of Plato. Basic Books.
Picht, G., & Eisenbart, C. (1992). Aristoteles’ “De anima”. Klett-Cotta.
Possati, L. M. (2020). Algorithmic unconscious: why psychoanalysis helps in understanding AI. Palgrave Communications, 6(1).
Priestley, M. (2011). A Science of Operations. Springer London.
Rambatan, B., & Johanssen, J. (2022). Event horizon: Sexuality, politics, online culture, and the limits of capitalism. John Hunt Publishing Limited.
Robles-Granda, P., Lin, S., Wu, X., et al. (2021). Jointly predicting job performance, personality, cognitive ability, affect, and well-being. IEEE Computational Intelligence Magazine, 16(2), 46–61.
Rottinghaus, A. R. (2021). Smart homes and the new white futurism. Journal of Futures Studies, 25(4), 45–56.
Salecl, R. (2004). On anxiety (thinking in action). Routledge.
Salganik, M. J., Lundberg, I., Kindel, A. T., et al. (2020). Measuring the predictability of life outcomes with a scientific mass collaboration. Proceedings of the National Academy of Sciences of the United States of America, 117(15), 8398–8403.
Schwartz, O. (2018). Digital ads are starting to feel psychic: Targeted advertising has become so uncanny, it feels like we’re being surveilled. Two artists want to hear these stories. The Outline. Retrieved from https://theoutline.com/post/5380/targeted-ad-creepy-surveillance-facebook-instagram-google-listening-not-alone.
Spigel, L. (2005). Designing the smart house. European Journal of Cultural Studies, 8(4), 403–426.
West, S. M. (2019). Data capitalism: Redefining the logics of surveillance and privacy. Business & Society, 58(1), 20–41.
Wilson, C., Hargreaves, T., & Hauxwell-Baldwin, R. (2017). Benefits and risks of smart home technologies. Energy Policy, 103, 72–83.
Wu, Y., Lv, Q., Qiao, Y., et al. (2017). Linking virtual identities across service domains: An online behavior modeling approach. In Proceedings of the International Conference on Intelligent Environments (IE) (pp. 122–129). IEEE.
Žižek, S. (2006). The Fundamental perversion: Lacan, Dostoyevsky, Bouyeri. Lacanian ink, 27, 114–129.
Žižek, S. (2008). Rumsfeld and the bees. The Guardian. Retrieved June 28, 2008.
Žižek, S. (2012). Less than nothing, Hegel and the shadow of dialectical materialism. Verso.
Žižek, S. (2014). The poetic torture-house of language: How poetry relates to ethnic cleansing. Poetry, 203(6), 579–586.
Žižek, S. (2017). The obscene immortality and its discontents. International Journal of Žižek Studies, 11(2).
Žižek, S. (2020). Hegel in a wired brain. Bloomsbury Academic.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
Notes:
[1] Although we need to note that the machine neither represses nor forecloses, but annihilates (Vernichtung) the unconscious dimension, which still is active, but differently negated than in Aufhebung, Verdrängung or Verwerfung.
[2] There is a notable exception in the calculus of distinction by George Spencer-Brown (compare Spencer-Brown, 1979, p. 101).
[3] The demonic is first used as an ontological distinction by Plato. He uses it to mark the distance of his idea of the good from our common-sense approach. Let us look at this formulation of the idea of the good: “Therefore, say that not only being known is present in the things known as a consequence of the good, but also existence and being are in them besides as a result of it, although the good isn’t being but is still beyond being, exceeding it in dignity and power” (Plato, 1991, p. 189). Plato’s idea of the good is here understood as a principle which is to be considered external to beings and being itself, while at the same time being the fundament of beings and of the knowledge of them. Directly after this statement by Socrates, Glaukon indicates the epistemological problem of such a principle: “Apollo, what a demonic excess” (p. 189). The idea of the platonic good is beyond the usual mortal capabilities of knowing, so Glaukon calls it a ὑπερβολἠ, which is, as Georg Picht (1992, pp. 53–54) argued, an excess transgressing beyond the usual measure of humans. This is what is indicated by the adjective “demonic,” which does not have the later connotations of an evil spirit, but is a liminal structure between a measured and determined symbolic order and its excess. It is important to note the move beyond the usual measure, indicating a symbolic distinction.
[4] Showing strong parallels to the idea of divine knowledge in scholasticism.
[5] I am using here the concept of structure quite comparably to Vivek Chibber (2022), who argued that structures force us to do something, while ideology coerces at best.
[6] Enantiomorphs are by definition a mirrored pair of objects, however, that also means that these cannot be reoriented to match without a mirror.
[7] That is the objet petit a within the symbolic mirror. The objet petit a in the pervert’s formula is not affected, because it is not part of the mirror image.
Funding Statement:
The authors declare that internal funds of the Hochschule Niederrhein, Fachbereich 06 were used to finance this study.
Bios:
First Author: Marc Heimann (Dr. phil.) is a researcher at the Hochschule Niederrhein, most recently within the BMBF-funded project “Public Understanding of AI”. He is the author of the book Die Logik des Streites: Zum Problem der Zerklüftung des Seins im Werk Heideggers (2021) and has published in Human Studies, The International Journal of Psychoanalysis, and the Journal of Digital Social Research.
ORCID: 0000-0003-3757-1790
Contributor: Anne-Friederike Hübener (MD) is a professor for social medicine and social psychiatry at the Hochschule Niederrhein. She is a clinical practitioner in child, adolescent, and adult psychiatry and psychotherapy with postgraduate studies in public health and social work. She has published in Frontiers in Psychiatry and the Journal of Digital Social Research.
ORCID: 0000-0003-3890-1739
Publication Date:
June 1, 2023