Essays

Towards a cybernetic phenomenology: how robotic tortoises uncover a better approach to science

November 20, 2020

Introduction

As the coronavirus pandemic rages on and humanity stares down its largest techno-scientific challenge in a generation, I regret to inform you, dear reader, that—as many a scholar will tell you—Science has a Big Problem (BP). The BP is not the antivaxxers. It’s not the postmodernists, and it’s not the objectivists. It’s not the conservatives, and it’s not the liberals.

Indeed, the BP is that they are all right. The antivaxxers should be critical of governments and institutions whose previous forays into large-scale medical interventions have been fraught, at best, particularly among the most disenfranchised members of society. The postmodernists are valid to demand that science account for the social construction of gender, race, sexuality, queerness, etcetera. The conservatives are correct to crave a science that reaches beyond subjectivity and grounds itself in objective, unchanging fact. And the liberals are logical to suggest we ought to just “believe Science,” when it does seem Science is the most effective way out of the very tangible health crisis facing humanity in the year of our lord 2020.

The BP is not new. Donna Haraway attempted to thread this needle decades ago, demanding a science that talks about reality “with more confidence than we allow to the Christian Right when they discuss the Second Coming and their being raptured out of the final destruction of the world”; and yet a science that creates “a better account of the world, that is ‘science’”; that thinks about “objectivity” as “positioned rationality”—a “larger vision” that is based on “being somewhere in particular.” (Haraway 1988, 590)

She was right, of course. What is needed is a science that does not seek to disembody, atomize, or dissect; but does venture to objectively understand, and still continues to respect and even account for “the privilege of partial perspective.” (Haraway 1988) And yet, the BP persists. Perhaps this is because there are others like me who want to agree with Haraway but are confused as to where to go from here. Maybe what would be helpful is an example: how does one do a science as Haraway demands?

The good news is there’s at least one example—buried deep in the annals of intellectual history—of a group of people who seem to have done this kind of science in practice: the cyberneticians. Cybernetics arose out of a ragtag, diverse coalition of scientists, academics, and engineers who built a science that rejects atomism and:

treats, not things, but ways of behaving. It does not ask, “What is this thing?” but “what does it do?”… It is thus essentially functional and behavioristic… The materiality is irrelevant, and so is the holding or not of the ordinary laws of physics. (ASC: Foundations: Defining Cybernetics n.d.)

Although today mostly considered an historical footnote, cybernetics influenced many contemporary fields: from artificial intelligence to robotics to ecology to chaos theory to behavioral economics. Perhaps this is because embedded in cybernetics is a language to talk about systems, rather than simply individualized, atomized parts; or, maybe because, as Andrew Pickering notes in “Cybernetics and the Mangle,” to recycle another old formulation from Thomas Kuhn, “it is as if the cyberneticians have lived in a different world from the classical scientists.” (Pickering 2002, 431)

In this essay, I want to explore the historical moment that led to cybernetics—and all that became of it—and how cybernetics can give us a new “way of seeing.” I will link this new way of seeing to a philosophical movement that also arose in the early 20th century and also attempted to solve these problems: phenomenology, the first-person study of the structures of experience (“phenomena”) as a legitimate way to know the world. In doing so, I attempt to bridge the gap from the highly technical cybernetics to the philosophical commitments that begat phenomenology. My hope is that this approach will provide a path beyond the BP.

History

Kuhn

In his famous essay, “The Structure of Scientific Revolutions,” Kuhn argues for a role for “history” in understanding scientific progress. He suggests in this text that perhaps science “does not develop by the accumulation of individual discoveries and inventions” but is instead a much messier and non-linear process, in which multiple theories compete amid conflicting data and scientists strive toward “the maximum internal coherence and the closest possible fit to nature.” (Kuhn 1996, 3) In addition, he notes the importance of the history of science to display “the historical integrity of … science in its own time.” (Kuhn 1996, 3)

For Kuhn, most scientists work tirelessly on “normal science” until the data they’re collecting no longer fit within that “normal science.” This conflict—where the scientific data no longer fit the narrative outlined by “normal science”—gives rise to a “scientific revolution” where the old “paradigms” of “normal science” are challenged and new paradigms are created, each with their own puzzles that scientists will attempt to solve.

While the cybernetic moment was not a crisis of data, it was a crisis of institutions: a failure of existing fields to provide the proper language to talk about or understand concepts like systems and control. The field thus continues to have revolutionary—in the Kuhnian sense—potential to dramatically reshape the ways science can operate.

The Birth of Cybernetics and the Macy Conferences

Cybernetics developed in the middle of the 20th century when, due to political quirks, scientists and engineers from different fields found themselves together for the first time and needed to develop a common vocabulary. As Stafford Beer notes:

We know when this started, and it started in the early 40s, and it started in Mexico City of all places. And the reason for that was that a lot of the world’s greatest scientists were working on wartime projects… They were sort of evacuated to a neutral country, to Mexico City, to work in peace … they were working at the clinic of neurophysiology, headed by probably the world’s leading brain experts at the time… Arturo Rosenblueth … and he became the host of this gang of folk … and they had experts in practically every subject there is … so all these people meeting and talking, so they said, well, what should we talk about … and somebody said, look, we have divided the world up into departments … can we think of a subject that has perhaps got lost because it never had a label … and they came up with the topic of control … so the question arises … whether there are some principles by which things remain in control, or are regulated? (Cantú 2012, 10:02)

These initial informal discussions occurred roughly concurrently with a small, invitation-only meeting hosted by the Josiah Macy Jr. Foundation, a health charity, in New York City in 1942, formally called “The Cerebral Inhibition Meeting.” In attendance were the medical director of the Josiah Macy Jr. Foundation, Frank Fremont-Smith; Gregory Bateson, the social scientist; Margaret Mead, the anthropologist; and the aforementioned Arturo Rosenblueth, whose discussion of “circular causal systems” is of primary historical interest. As Steve Heims notes in his history of cybernetics:

Essentially the idea was to identify in a behaviorist spirit some of those aspects of what organisms do that can be analyzed in terms of what certain analogous machines do. But the analysis differed in some important respects from the tenets of classical behaviorism. First, it was concerned with goal-directed actions, where an organism acts with a “purpose,” although, as Rosenblueth and collaborators put it, “the definition of purposeful behavior is relatively vague, and hence operationally largely meaningless, the concept of purpose is useful and should, therefore, be retained.” Explaining actions in terms of a goal to be attained had traditionally been criticized by scientists because it meant explaining actions in terms of events that had not yet happened, the cause, so to speak, coming after the effect. Rosenblueth and his friends rejected the criticisms as irrelevant and readily spoke of goal-directed actions as “in a well-defined sense teleological.” The description of purposive behavior of organisms in the images and language of engineering meant that, notwithstanding the traditional opposition between teleology and mechanism, one could henceforth speak explicitly and concretely about “teleological mechanisms.”

Second, the model replaced the traditional cause-and-effect relation of a stimulus leading to a response by a “circular causality” requiring negative feedback: A person reaches for a glass of water to pick it up, and as she extends her arm and hand is continuously informed (negative feedback)—by visual or proprioceptive sensations—how close the hand is to the glass and then guides the action accordingly, so as to achieve the goal of smoothly grabbing the glass. The process is circular because the position of the arm and hand achieved at one moment is part of the input information for the action at the next moment… Rosenblueth in his talk singled out goal-directed circular-causal processes with negative feedback as commonplace and worthy of systematic investigation in both organisms and machines, as well as in combined machine-organism systems. (Heims 1991, 15)

The most commonly cited example of negative feedback in cybernetic systems is that of someone steering a ship—where the ship’s compass, rudder, engine, and steersperson are all acting as a “goal-directed system with feedback” (Heims 1991, 16)—in other words, a circuit.1
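
This circuit can even be sketched in a few lines of code. What follows is a minimal, hypothetical illustration of negative feedback of my own devising, not anything the cyberneticians themselves wrote; the names and the simple proportional correction rule are assumptions chosen for clarity:

```python
# A toy negative-feedback loop in the spirit of the steersperson example.
# The output (heading) at one moment becomes part of the input (error)
# at the next: a circular-causal circuit, not a one-way stimulus-response.

def steer(goal_heading: float, steps: int = 10, gain: float = 0.5) -> None:
    heading = 0.0  # the ship starts off course
    for t in range(steps):
        error = goal_heading - heading  # sense: how far off the goal are we?
        heading += gain * error         # act: correct against the error
        print(f"t={t:2d}  heading={heading:6.2f}  error={error:6.2f}")

steer(goal_heading=90.0)  # the heading converges smoothly toward 90 degrees
```

The loop never models the sea or the rudder mechanics; it only senses and corrects, which is all the circularity requires.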

This small meeting set the framework for a more formal series of unique academic conferences, hosted again by the Josiah Macy Jr. Foundation, entitled “The Feedback Mechanisms and Circular Causal Systems in Biology and the Social Sciences Meeting,” the first of which occurred in early March 1946 and promised to generate “a new kind of link between engineering, biology, and mathematics on the one hand and psychology, psychiatry, and all the social sciences on the other.” (Heims 1991, 56) These conferences would come to be known simply as “The Macy Conferences.” The expanded list of participants and guests at these conferences included the attendees of the original Cerebral Inhibition Meeting, but crossed even more disciplinary lines, adding, for instance, Norbert Wiener, the mathematician, and John von Neumann, the computer scientist. They would be joined in later conferences by the electrical engineers Heinz von Foerster and Claude Shannon; psychologists; roboticists; and many others.2 These meetings would prove to be highly influential and a major, if rarely cited, intellectual event of the mid-20th century.

British Cybernetics and “ontological theater”

As Andrew Pickering discusses, however, alongside the Macy Conferences a different “strain” of cybernetics was developing in Britain. This approach was more experimental, more artistic, and does a better job highlighting what I believe is the revolutionary potential of cybernetics; i.e., that it creates an ontology that is “nonmodern in two ways: in its refusal of a dualist split between people and things, and in an evolutionary, rather than causal and calculable, grasp of temporal process.” (Pickering 2010, 19)

While it is true that both the British and North American cybernetic traditions hint at this revolutionary potential, the British cyberneticians, such as W. Grey Walter, Ross Ashby, Stafford Beer, Gregory Bateson, and R. D. Laing, demonstrate how to do a “cybernetic science”—and why that perspective is important—far more concretely. Of particular interest for this project is the cybernetician Ross Ashby’s (1903–1972) problem of the “Black Box”:

The problem of the Black Box arose in electrical engineering. The engineer is given a sealed box that has terminals for input, to which he may bring any voltages, shocks, or other disturbances he pleases, and terminals for output, from which he may observe what he can. (Ashby 1956, 86)

But it is not merely engineers who are interested in Black Boxes. Most of the world appears as a Black Box. When I turn on a shower head, or flush a toilet, or use a microwave, I do not engage in a project of dissecting the shower head, the toilet, or the microwave: instead I must learn to navigate the world in spite of a general ignorance of the inner workings of these devices.
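
In software terms, Ashby’s stance translates naturally into an interface: inputs may be brought to the box and outputs observed, but the internals stay sealed. The sketch below is a hypothetical stand-in of my own, not Ashby’s formalism; the point is that the experimenter’s code never references the box’s inner workings:

```python
# A hypothetical Black Box: everything below the class definition treats it
# purely as something to disturb and observe.

import random

class BlackBox:
    """Sealed by convention: imagine this class body is unreadable."""
    def __init__(self) -> None:
        self._state = 0.0  # hidden internal state

    def poke(self, disturbance: float) -> float:
        self._state = 0.8 * self._state + disturbance
        return self._state + random.gauss(0.0, 0.01)  # slightly noisy output

def characterize(box: BlackBox, inputs: list[float]) -> list[tuple[float, float]]:
    """Knowledge of the box is just the record of what it did."""
    return [(u, box.poke(u)) for u in inputs]

for u, y in characterize(BlackBox(), [1.0, 0.0, 0.0, -1.0, 0.0]):
    print(f"input={u:+.1f} -> output={y:+.3f}")
```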

Engineers, too, can maintain very productive careers without fully grasping the interior workings of the objects of their profession. Outside of academia, I’m a software developer, and while I have a fairly complete understanding of how computers work,3 if asked, I could not possibly describe the entire process by which these words are appearing on this page, even if I were given the source code for my word processor (and the operating system)—to say nothing of the operation of the power grid, the construction of the fiber lines delivering my internet, and many other interlinked systems.

These systems and their interconnections are simply too complex for a single person—even a highly skilled engineer—to grasp at once, not to mention their relations to other systems, such as the movements of my fingers and the underlying neural systems that are allowing me to think and input these words into the computer in the first place! And yet, I have no technical problems writing this essay, outside of minor usability complaints about the proper use of my word processor. That’s because, according to Pickering:

A Black Box is something that does something, that one does something to, and that does something back—a partner in, as I would say, a dance of agency. Knowledge of its workings, on the other hand, is not intrinsic to the conception of a Black Box—it is something that may (or may not) grow out of our performative experience of the box. We could also note that there is something right about this ontology. We are indeed enveloped by lively systems that act and react to our doings, ranging from our fellow humans through plants and animals to machines and inanimate matter, and one can readily reverse the order of this list and say that inanimate matter is itself also enveloped by lively systems, some human but most nonhuman. The world just is that way. (Pickering 2010, 20)

The problem with Black Boxes is that two “stances” toward them remain. It is not immediately apparent how a Black Box ontology can move past the problems highlighted by Haraway and others: it can still be used as a tool to atomize—to pick apart things in the world and attempt to divest them of their context; to create a disembodied “scientific objectivity.” (Haraway 1988, 576) There is still a drive within science to disassemble Black Boxes, to derive laws of nature from this dissection in order “to strip away the opacity and to understand their inner workings in a representational fashion.” (Pickering 2010, 20) The work of the cyberneticians, however, gives rise to an alternative approach to Black Boxes, one that:

drew back the veil the modern sciences cast over the performative aspects of the world, including our own being. Early cybernetic machines confront us, instead, with interesting and engaged material performances that do not entail a detour through knowledge. The phrase that runs through my mind at this point is ontological theater. I want to say that cybernetics staged a nonmodern ontology for us in a double sense. Contemplation of thermostats, tortoises, and homeostats helps us, first, to grasp the ontological vision more generally, a vision of the world as a place of continuing interlinked performances. We could think of the tortoise, say, exploring its world as a little model of what the world is like in general, an ontological icon. Going in the other direction, if one grasps this ontological vision, then building tortoises and homeostats stages for us examples of how it might be brought down to earth and played out in practice, as robotics, brain science, psychiatry, and so on. (Pickering 2010, 21)

Yet both Pickering and Heims fail to note another movement occurring slightly before the cybernetic one that echoes the approach taken by the cyberneticians: phenomenology.

Cybernetics and Phenomenology

Phenomenology is a philosophical movement that arose in the early 20th century as an attempt to solve many traditional philosophical problems that had faced European/Western philosophers for centuries, if not millennia. Standing in stark contrast to the analytic/logical positivist movement occurring at roughly the same time, phenomenologists such as Edmund Husserl and his student Martin Heidegger attempted to build a new approach to philosophy, essentially from scratch, starting from a first-person examination of the structures of consciousness. And, importantly for this project, phenomenologists reject the move that suggests our subjectivity is not to be trusted when building a science. Rather, they suggest, it is necessary to start from first-person egoist subjectivity—and then, significantly, turn to intersubjectivity—to properly understand and gain an objective understanding of the world.

Although both movements were occurring roughly concurrently, I do not believe it is likely that the phenomenologists and the cyberneticians encountered each other, at least directly. Martin Heidegger (1889–1976) seems to have been aware of cybernetics, going so far as to discuss it in his infamous last interview:4

Heidegger: Philosophy today dissolves into individual sciences: psychology, logic, political science.

SPIEGEL: And what now takes the place of philosophy?

Heidegger: Cybernetics. (Spiegel 1966)

However, as a primarily engineering and scientific discipline, cybernetics rarely interfaced with philosophy, economics, or political science. While F. S. C. Northrop was at the original Macy Conferences, he was “viewed with skepticism by other conferees.” (Heims 1991, 24) As Heims outlined, this is largely due to the intellectual conditions of the postwar period:

The bias for psychology and psychiatry over economics and political science as representative of the social sciences was in part a manifestation of the aforementioned social atomism and retreat from politics popular at mid-century, and in part indicates that even the interests of the cyberneticians lay in the first instance in mind and brain…. The mode of discourse at the meetings after the first was intended to be neutral-scientific and apolitical. Discussions of political science and economics, unlike psychology and engineering, were more likely to lead to loaded political issues. The Macy group safely stuck to “scientific” topics, and its invited speakers were not of the kind to bring leftist politics into the discussion. The mechanical and psychological (atomistic) biases served to depoliticize the issues. (Heims 1991, 18)

This is not to say that the cyberneticians failed to account for the political, social, or ethical implications of their thought. A close reading of Norbert Wiener surfaces many revolutionary leftist political ideas, not to mention the later work done by Stafford Beer in Salvador Allende’s Chile.5 It is merely to suggest that the cyberneticians were less well versed in the philosophical movements, including phenomenology, occurring in Europe at roughly the same time—and, if they were, they failed to account for them publicly, although further archival research is necessary. This is a real shame, because phenomenology and cybernetics clearly resonate with each other, and bridging the two could be the start of an alternative approach to science.

Husserl

Edmund Husserl (1859–1938) is considered the founder of phenomenology, having written several books intended to serve as “an introduction to phenomenology.” Examining Husserl in depth is well outside the scope of this essay, but I believe several Husserlian ideas will be useful to this project; in particular, Husserl’s concepts of horizons and eidetic variation.

For Husserl, the answer to the question “what is x” is simply “all the ways in which x can appear”—not just to me, although that’s the first move—but to all subjects in the world. Combining these appearances—along with the ways things appear to other subjects—to form, essentially, a kind of epistemological bell curve is the act by which I gain knowledge of the world. This is what Husserl calls “horizons”; and it is an infinite project—perhaps, even, the project which could be said to be the proper domain of science.

The best way to explain this is through example. In Mexico, where I live, I receive a biweekly box of fresh produce grown locally. As I was raised and spent most of my life in the United States, I often receive what are to me very strange-looking fruits and vegetables, and I regularly find myself needing to engage in a project of determining just what I received—and, most importantly, how to produce a delicious meal with it. I receive what is to me an unknown thing, in essence, and must engage in a project of coming into enough knowledge of it to use it in a scrumptious dinner. This is a phenomenological project of building a horizon for this unknown thing, and thus a personal “coming into knowledge” of it.

But I do not start this horizon from scratch. Indeed, before I begin researching to figure out what exactly I’m dealing with, I have a great deal of information. Firstly, I am, of course, very familiar with the texture, smell, taste, shape, color, and other characteristics of the unknown thing I’m holding in my hand. I can safely assume, for instance, that it is a kind of edible produce, assuming I trust the cooperative that delivered me the produce.6 I know roughly where it was grown,7 when,8 and by whom.9 And I have a list of the Spanish names of what should be in this week’s despensa, which I can use in internet searches to give a name to the unknown thing—and which I can then use to research recipes, for instance. Sometimes, in an act of intersubjectivity, I’ll ask my Mexican friends what I ought to do with this produce, and they’re often more than happy to give me tips. I also have experience with other kinds of produce more familiar to my American diet, and I can make connections between, for instance, a chayote, which is a kind of squash, and squashes I’m more familiar with, such as a summer squash.

These activities build a horizon on which the object will appear to me in the future. When I receive a new box with the same produce I encountered in the past, I will often actively remember the process by which I first became aware of this particular piece of produce. What a chayote is is simply the horizon that includes all of this context—and, importantly, also the additional processes by which other subjects came to know the chayote: for instance, why my friend suggests I ought to fry the chayote in scrambled eggs, or the knowledge of the campesino who planted and grew the chayote, or the experiences of the people who first cultivated chayotes, etcetera.

Husserl would argue that this process is infinite and continuous, even when encountering familiar objects. For example, I was raised in a home in Iowa that was surrounded by cornfields; as such, corn has been a part of my life since I was very young. I may not remember the process by which I first came to know corn, the way I can remember the process by which I first came to know chayote, but that process remains embedded in my perception of corn—and it can continue to evolve, as when I first learned Mexican street vendors sell (delicious) elotes covered in mayonnaise, lime juice, cheese, and chili seasoning. Moreover, it’s critical to remember that this is always an intersubjective process—simply because I wasn’t aware of elotes callejeros con mayonesa (street corn with mayonnaise) as a way corn might be served doesn’t, obviously, negate the existence of corn. Corn is simply all the ways in which corn might appear to me—and the broader community—both known and unknown to me.

Still, there’s a problem worth considering: if phenomenology is all about essences, how do I separate, epistemically, corn from, say, a car? Husserl introduces the concept of eidetic variation as one solution to this problem: I come to the eidos of a thing through imaginative variation. Simply by engaging in an imaginary process of picking apart the necessary and contingent properties of a thing, I can begin to contemplate, albeit imperfectly on my own, what makes corn “corn.” I know that simply calling corn “elote” does not mean that what I’m eating when I eat Mexican street corn is in any significant way different from what might be served with a heavy helping of butter to accompany a pork chop dinner with my family in Iowa. This process of eidetic variation is messy, of course, and sorting it out rigorously seems to also be part of what it is to do science.

I would like to suggest that the work of the cyberneticians, particularly the British cyberneticians, demonstrates another “tool” for performing phenomenological analysis—for expanding horizons and creating boundaries in a way that is not overly reductive or atomistic. The British cyberneticians were notorious tinkerers who (perhaps unintentionally) found a way around atomism and towards phenomenology by performing and critically analyzing acts in the world; i.e., by invention—by making things, by engaging in a process of active synthesis through artificial performance. Notably, nothing is created ex nihilo, but the act of play and invention can create new structures which put objects in new relations—bound by the existing world, of course, just as the inventor of the wheel was bound by the existence of a certain conception of space, gravity, etcetera—and these new relations, I want to argue, can expand phenomenological horizons by unveiling the structures of the world in interesting, scientifically rigorous, and novel ways.

W. Grey Walter

To illustrate this, I now turn to W. Grey Walter’s robotic tortoises. W. Grey Walter (1910–1977) was a cybernetician, neurophysiologist, roboticist, and a bit of a mad scientist. He wanted to explore the ways in which simple networks of neurons could lead to complex behaviors—but rather than dissect a brain, or run some kind of experiment, as the traditional Enlightenment methods of science might suggest, he built such networks himself—in the form of simple robotic tortoises, made from alarm clock parts and war surplus items, that could move around in their environment while avoiding obstacles and even return to a cradle to charge; mechanical, mid-century proto-Roombas.

Robot toys were already somewhat common in popular culture, in public exhibitions (such as World’s Fairs) and even in toy shops, but Walter’s robots were unique in that he was attempting to emulate biological systems—he understood the tortoises’ two sensors as “neurons.” They weren’t toys—they were examples of very primitive nervous systems that could display a host of behaviors, including, according to Walter:

Exploration, curiosity, free-will in the sense of unpredictability, goal-seeking, self-regulation, avoidance of dilemmas, foresight, memory, learning, forgetting, association of ideas, form recognition, and the elements of social accommodation. Such is life. (Ashby 1952, 120)

I take this to be an alternative, phenomenological approach to doing a science of the brain. While one way to ask “what a nervous system is” is to dissect it, another way is what Walter and the other cyberneticians did: emulate it. Doing so adds to the phenomenological horizon of “what nervous systems are”—a kind of real-life eidetic variation that skips the reductive steps normally taken by science and produces new understandings of the world by expanding phenomenological horizons through acts of performance. It seems to me that in many ways all science actually operates like this, to an extent—science itself, even the old Enlightenment conception of science I’m attempting to overthrow, exists as a series of performances that themselves work to create a narrative of how the world (or universe) “works.”
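
To make the emulation point concrete, here is a deliberately crude sketch of a tortoise-like control rule. The real tortoises were analog electromechanical devices, so everything below (the function name, the thresholds, the sign conventions) is a hypothetical digital caricature of my own; what it preserves is how little machinery the behaviors require:

```python
# Two "neurons" (a light level on each side, plus a touch sensor) mapped to
# two motor commands (drive, turn). Positive turn means steering right.

def tortoise_step(light_left: float, light_right: float, bumped: bool) -> tuple[float, float]:
    if bumped:
        return (-1.0, 1.0)  # obstacle: back off and swerve ("avoidance")
    if max(light_left, light_right) > 0.9:
        # dazzling light repels: slow down and turn away from the bright side
        return (0.2, light_left - light_right)
    # moderate light attracts: steer toward the brighter side ("goal-seeking")
    return (1.0, light_right - light_left)

print(tortoise_step(0.25, 0.75, bumped=False))  # (1.0, 0.5): drawn to the light
print(tortoise_step(1.0, 0.25, bumped=False))   # (0.2, 0.75): dazzled, veers off
print(tortoise_step(0.5, 0.5, bumped=True))     # (-1.0, 1.0): reverses and turns
```

Even at this level of caricature, the rule exhibits attraction, repulsion, and avoidance from two inputs and a handful of comparisons.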

The difference is that, like phenomenology, cybernetics doesn’t necessitate the veiling—or attempted destruction—of subjectivity. If anything, Walter’s robots created a new subjectivity—that of the artificial tortoise—with which he could engage in an act of intersubjectivity. The mere act of assembling spare parts was all that was needed to expand the horizon of “what nervous systems are” and how complex behavior can arise out of simple networks. And not only do Walter’s robots expand the horizon of “nervous systems,” but they also expand the horizons of “exploration,” “curiosity,” “free-will,” “goal-seeking,” and so on.

Conclusion

Grey Walter is just one example of a cybernetician involved in a horizon-expanding project through performance, or even play. I would go so far as to say that cybernetics—or, at least, the British approach to cybernetics—provides a framework to do a phenomenology of systems; as Beer remarked:

According to the cybernetician, the purpose of a system is what it does. This is a basic dictum. It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment, or sheer ignorance of circumstances. (Beer 2002)

I believe the framework of cybernetics as outlined in this essay—particularly its intersubjective bits—will prove critical to developing an understanding of increasingly abstract and “unknowable” systems, and perhaps this is what is missing from Haraway.

A final example will briefly illustrate how this kind of approach might proceed. It is a well-known trope within artificial intelligence circles that it is not possible to “see” into a “deep” neural network:

If you had a very small neural network, you might be able to understand it … but once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable. (Knight 2017)

I don’t believe it’s actually impossible to “understand” these networks, any more than it’s impossible to “understand” any other part of our world. Instead, the key is simply to change the question: instead of asking how or why a deep neural network made the decision it made, it becomes critical to simply ask what it did—skipping the atomization altogether. After all, since “the purpose of a system is what it does,” all that’s needed to gain a true understanding of the system is to observe it. And it’s possible to subject an AI to all kinds of tests which expand the horizon of our understanding of the AI.10
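
As a hedged sketch of what such tests might look like, consider characterizing an opaque model purely through inputs and outputs. The probing protocol below is my own illustration, not a standard interpretability API; `model` can be any callable whose internals we refuse to inspect:

```python
# Behavioral probing: map how an opaque model's output responds to
# perturbations of an input, without ever touching weights or layers.

import numpy as np

def behavioral_probe(model, base_input, n_trials=200, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    records = []
    for _ in range(n_trials):
        delta = rng.normal(0.0, noise, size=base_input.shape)
        records.append((delta, model(base_input + delta)))  # observe, don't dissect
    return records

# A stand-in "network"; any black-box callable works identically.
opaque = lambda x: float(np.tanh(x).sum() > 0.0)

probes = behavioral_probe(opaque, base_input=np.zeros(8))
print(sum(y for _, y in probes) / len(probes))  # how often the box says "yes"
```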

This is just one example of a potential project, and I want to argue that this is already what science does, even when it tries to pretend otherwise and argues that atomization and dissection alone can uncover an “objective truth.” This is one way to proceed, but it is not the only way, and by neglecting alternative approaches, Science itself fails to properly understand the world in all its complexity. All of science is a kind of performance upon which horizons are built; it is simply that the cyberneticians grasped this better than the classical Western sciences: they never felt the need to atomize or to veil subjectivity; rather, they built tools for investigating and analyzing subjectivity. They leaned into Black Boxes, revered agency, and enthusiastically reached into the unknown—and I strongly believe the task before Science is to be courageous enough to follow their lead.

“ASC: Foundations: Defining Cybernetics.” n.d. Accessed November 11, 2020. https://www.asc-cybernetics.org/foundations/definitions.htm.
Ashby, W. Ross. 1952. Design for a Brain: The Origin of Adaptive Behaviour. First. London: Chapman and Hall.
———. 1956. An Introduction to Cybernetics. London: Chapman & Hall.
Beer, Stafford. 2002. “What Is Cybernetics?” Kybernetes 31 (2): 209–19. https://doi.org/10.1108/03684920210417283.
Haraway, Donna. 1988. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies 14 (3): 575–99. https://doi.org/10.2307/3178066.
Heims, Steve J. 1991. The Cybernetics Group. Cambridge, Mass.: MIT Press. https://doi.org/10.7551/mitpress/2260.001.0001.
Knight, Will. 2017. “The Dark Secret at the Heart of AI.” MIT Technology Review. https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/.
Kuhn, Thomas S. 1996. The Structure of Scientific Revolutions. 3rd Edition. Chicago, IL: University of Chicago Press.
Pickering, Andrew. 2002. “Cybernetics and the Mangle: Ashby, Beer and Pask.” Social Studies of Science 32 (3): 413–37. http://www.jstor.org/stable/3183033.
———. 2010. The Cybernetic Brain: Sketches of Another Future. University of Chicago Press. https://www.press.uchicago.edu/ucp/books/book/chicago/C/bo8169881.html.
Spiegel, Der. 1966. “Only a God Can Save Us: The Spiegel Interview (1966).” http://www.ditext.com/heidegger/interview.html.

  1. This example is so critical to cybernetics that the root word of “cybernetics”—kybernḗtēs—can be literally translated from the Greek as “helmsperson.”↩︎

  2. A full accounting of the attendees can be found at https://asc-cybernetics.org/foundations/history/MacySummary.htm↩︎

  3. I understand the fundamentals of processors, logic gates, encoding, programming languages, etcetera.↩︎

  4. The same one where he defended his association with the Nazi party.↩︎

  5. Beer attempted, and for a while (before the coup), succeeded at building a system of cybernetic socialism in Chile. For a detailed account of this project, see Eden Medina’s Cybernetic Revolutionaries (2011, The MIT Press).↩︎

  6. They have yet to fail me on this one, although they did once give me some edible weeds which I felt were questionable at first, until I breaded and fried them up into a yummy snack. And they have yet to deliver me, for instance, a car.↩︎

  7. In the chinampas, a form of traditional Mesoamerican agriculture, of Xochimilco, in southeastern Mexico City.↩︎

  8. Recently.↩︎

  9. Campesinos, or peasants, who I am told are paid a fair wage for their labor.↩︎

  10. For a real-world example of this, see https://play.aidungeon.io/, an infinite text adventure game entirely built by AI. The user can essentially interact in a Pickering-style “dance of agency” with the AI in order to build a world together—and to uncover the inner workings of it.↩︎