What can AI teach us about ourselves? – Joscha Bach @ Chaos Communication Congress [30C3, 31C3, 32C3]
For the last three years, Joscha Bach, a researcher at MIT, has given a talk at the Chaos Communication Congress, providing concise introductions to various fields of AI research and to how these may (or may not) relate to our understanding of ourselves and the societies we live in. Because of the remarkable information density of these talks, and Joscha’s ability to unpack the material into comprehensible form, I highly recommend watching these lectures and following Joscha’s work.
How to Build a Mind – Artificial Intelligence Reloaded [30c3]
A foray into the present, future and ideas of Artificial Intelligence. Are we going to build (beyond) human-level artificial intelligence one day? Very likely. When? Nobody knows, because the specs are not fully done yet. But let me give you some of the specs we already know, just to get you started.
While large factions within the philosophy of mind still seem to struggle over the relationship between mind, world, meaning, intentionality, subjectivity, phenomenal experience, personhood and autonomy, Artificial Intelligence (AI) offers a clear and concise set of answers to these basic questions, as well as avenues for pursuing their eventual understanding. In the view of AI, minds are computational machines, whereby computationalism is best understood as the most contemporary version of the mechanist world view. In the lecture, I will briefly address some of the basic ideas that underlie a unified computational model of the mind, and especially focus on a computational understanding of motivation and autonomy, representation and grounding, associative thinking, reason and creativity.
From Computation to Consciousness [31c3]
How can the physical universe give rise to a mind? I suggest replacing this confusing question with another one: what kind of information processing system is the mind, and how is the mind computed? As we will see, even our ideas of the physical universe turn out to be computational. Let us explore some fascinating scenery of the philosophy underlying Artificial Intelligence.
How do minds work? In my view, this is the most interesting question of all, and our best bet at answering it lies in building theories that we can actually test in the form of computer programs, that is, in building Artificial Intelligence. Let us explore some of the philosophical ideas that explicitly or implicitly form the basis of Artificial Intelligence.
The idea that minds are some kind of machine, mechanical contraptions, strikes many people as unconvincing, even offensive, even if they accept that the physical universe is a machine and minds are part of that universe. Computer science has revolutionized our concept of machines, though: no longer do we see machines as mechanical arrangements of parts that pull and push against each other, but as arbitrary, stable causal arrangements that perform regular changes on their environment. We can think about mathematical machines, like cellular automata, or about financial, social or ecological machines. Machines do not have to be human-made artifacts; they are a way of conceptualizing regular processes and dynamic systems. In the case of conceptualizing the human mind, what matters is not the biology, chemistry, or structural properties of the brain, but what these implement: a class of machine that is capable of processing information in very specific ways. The mind is not necessarily a mechanical machine, but it is certainly an information processing machine, a computational system.
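To make the notion of a "mathematical machine" concrete, here is a minimal sketch of an elementary cellular automaton in Python: a purely causal arrangement in which each cell's next state depends only on its local neighborhood. The choice of Rule 110 and the lattice size are arbitrary, illustrative assumptions on my part; any of the 256 elementary rules defines such a regular process.

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton.

    Each cell's next state is determined by the 3-cell neighborhood
    (left, self, right); the rule number encodes the lookup table
    for all 8 possible neighborhoods in its binary digits.
    """
    n = len(cells)
    out = []
    for i in range(n):
        # Neighborhood with wrap-around at the edges.
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        out.append((rule >> index) & 1)
    return out

# Start with a single live cell and let the machine run a few steps.
cells = [0] * 7 + [1] + [0] * 7
for _ in range(5):
    cells = step(cells)
```

Nothing here is "mechanical" in the old sense: the machine is just a stable causal regularity, yet it processes information.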
Computationalism is the notion that minds can and have to be modeled as computational, and in its strong form, it maintains that the mind actually _is_ a computer, implemented by a physical mechanism. But the ideas of computation have permeated our understanding of the world even further. Our understanding of physics no longer conforms to mechanical world views (i.e. parts and particles pulling and pushing against each other), but requires us to switch to the broader notion of how the universe processes information. The foundational theories of physics are concerned with how the universe is computed.
In the view of universal computationalism, the question of what sort of thing minds are resolves into the question of whether hypercomputation is possible and, if not, which classes of computation are involved in their functionality.
Computationalism systematizes the intuitions we get naturally while we program computers, and it helps us understand some of the deepest questions of cosmology, epistemology and the nature of the mind in ways that did not exist in the past.
Computational Meta-Psychology [32c3]
Computational theories of the mind seem to be ideally suited to explain rationality. But how can computations be subverted by meaning, emotion and love?
Minds are computational systems that are realized by the causal functionality provided by their computational substrate (such as nervous systems). Their primary purpose is the discovery and exploitation of structure in an entropic environment, but they are capable of something much more sinister, too: they give rise to meaning.
Minds are the solution to a control problem: in our case, this problem amounts to navigating a social primate through a complex open environment in an attempt to stave off entropy long enough to serve evolutionary imperatives. Minds are capable of second-order control: they create representational structures that serve as a model of their environment. And minds are capable of rationality: they can learn how to build models that are entirely independent of their subjective benefit for the individual.
Because we are the product of an evolutionary process, our minds are constrained by powerful safeguards against becoming fully rational in the way we construct these models: our motivational system can not only support our thinking and decision making to optimize individual rewards, but also censor and distort our understanding to make us conform to social and evolutionary rewards. This opens a security hole for mind-viruses: state-building belief systems that manage to copy themselves across populations and create causal preconditions that serve neither individuals nor societies, but primarily themselves.
I will introduce a computational model of belief attractors that can help us explain how our minds can become colonized and governed by irrational beliefs that co-evolve with social institutions.
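The abstract does not spell out Bach's model, so as a loose illustration of what a belief attractor could look like, here is a standard bounded-confidence opinion dynamic (a Hegselmann-Krause-style model, not Bach's): agents only average their belief with sufficiently similar agents, and the population settles into stable clusters, attractors that persist even though no individual chose them. The confidence bound and initial beliefs are arbitrary choices for the sketch.

```python
def update(beliefs, epsilon=0.2):
    """One synchronous update: each agent moves to the mean of all
    beliefs within distance epsilon of its own (including itself)."""
    new = []
    for b in beliefs:
        close = [x for x in beliefs if abs(x - b) <= epsilon]
        new.append(sum(close) / len(close))
    return new

# Evenly spread initial beliefs on [0, 1], then iterate to a fixed point.
beliefs = [i / 10 for i in range(11)]
for _ in range(50):
    beliefs = update(beliefs)

# The surviving distinct belief values are the attractors.
clusters = sorted(set(round(b, 6) for b in beliefs))
```

The point of the toy is only that simple local conformity pressure is enough to produce self-stabilizing collective beliefs; how such attractors co-evolve with institutions is the subject of the talk itself.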
This talk is part of a series of insights on how to use the epistemology of Artificial Intelligence to understand the nature of our minds.