Michael Timothy Bennett, Sean Welsh, Anna Ciaunica
Preprint, presented at MoC5 and ASSC27, 2024
We tackle the hard problem of consciousness taking the naturally-selected, self-organising, embodied organism as our starting point. We provide a mathematical formalism describing how biological systems self-organise to hierarchically interpret unlabelled sensory information according to valence and specific needs. Such interpretations imply behavioural policies which can only be differentiated from each other by the qualitative aspect of information processing. Selection pressures favour systems that can intervene in the world to achieve homeostatic and reproductive goals. Quality is a property arising in such systems to link cause to affect, motivating real-world interventions. This produces a range of qualitative classifiers (interoceptive and exteroceptive) that motivate specific actions and determine priorities and preferences. Building upon the seminal distinction between access and phenomenal consciousness, our radical claim here is that phenomenal consciousness without access consciousness is likely very common, but the reverse is implausible. To put it provocatively: Nature does not like zombies. We formally describe the multilayered architecture of self-organisation from rocks to Einstein, illustrating how our argument applies in the real world. We claim that access consciousness at the human level is impossible without the ability to hierarchically model i) the self, ii) the world/others and iii) the self as modelled by others. Phenomenal consciousness is therefore required for human-level functionality. Our proposal lays the foundations of a formal science of consciousness, deeply connected with natural selection rather than abstract thinking, closer to human fact than zombie fiction.
Michael Timothy Bennett
Awarded "Best Student Paper" and published in Proceedings of The 16th International Conference on Artificial General Intelligence, 2023
To make accurate inferences in an interactive setting, an agent must not confuse passive observation of events with having intervened to cause them. The do operator formalises interventions so that we may reason about their effect. Yet there exist Pareto-optimal mathematical formalisms of general intelligence in an interactive setting which, presupposing no explicit representation of intervention, make maximally accurate inferences. We examine one such formalism. We show that in the absence of a do operator, an intervention can be represented by a variable. We then argue that variables are abstractions, and that the need to explicitly represent interventions in advance arises only because we presuppose these sorts of abstractions. The aforementioned formalism avoids this and so, initial conditions permitting, representations of relevant causal interventions will emerge through induction. These emergent abstractions function as representations of one’s self and of any other object, inasmuch as the interventions of those objects impact the satisfaction of goals. We argue that this explains how one might reason about one's own identity and intent, those of others, one's own as perceived by others, and so on. In a narrow sense this describes what it is to be aware, and is a mechanistic explanation of aspects of consciousness.
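As an illustration of the central claim, the following is a minimal sketch (a toy construction for this page, not the formalism examined in the paper): where a hidden common cause makes passive observation misleading, representing the intervention as an ordinary variable, rather than with a do operator, is enough to distinguish seeing from doing.

```python
# Toy model, not the paper's formalism: a hidden confounder z drives both
# x and y, so observing x = 1 is not the same as having intervened to set it.
# Representing the intervention as an ordinary variable ("intervene") lets us
# distinguish the two without any explicit do operator.
import random

random.seed(0)

def sample(intervene):
    """One world: z is a hidden common cause of x and y; an intervention
    sets x by coin flip, cutting it off from z."""
    z = random.random() < 0.5
    x = (random.random() < 0.5) if intervene else z
    y = z                          # y is caused by z alone, never by x
    return x, y

def p_y_given_x1(intervene, n=100_000):
    """Estimate P(y = 1 | x = 1) under observation or intervention."""
    ys = [y for _ in range(n) for x, y in [sample(intervene)] if x]
    return sum(ys) / len(ys)

print("observed   P(y|x=1) ~", round(p_y_given_x1(False), 2))   # close to 1.0
print("intervened P(y|x=1) ~", round(p_y_given_x1(True), 2))    # close to 0.5
```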
Michael Timothy Bennett
Proceedings of The 17th International Conference on Artificial General Intelligence, 2024
Simplicity is held by many to be the key to general intelligence. Simpler models tend to “generalise”, identifying the cause or generator of data with greater sample efficiency. The implications of the correlation between simplicity and generalisation extend far beyond computer science, addressing questions of physics and even biology. Yet simplicity is a property of form, while generalisation is of function. In interactive settings, any correlation between the two depends on interpretation. In theory there could be no correlation and yet in practice, there is. Previous theoretical work showed generalisation to be a consequence of “weak” constraints implied by function, not form. Experiments demonstrated that choosing weak constraints over simple forms yielded a 110-500% improvement in generalisation rate. Here we show that all constraints can take equally simple forms, regardless of weakness. However, if forms are spatially extended, then function is represented using a finite subset of forms. If function is represented using a finite subset of forms, then we can force a correlation between simplicity and generalisation by making weak constraints take simple forms. If function is determined by a goal directed process that favours versatility (e.g. natural selection), then efficiency demands weak constraints take simple forms. Complexity has no causal influence on generalisation, but appears to do so due to confounding.
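The following toy sketch (a construction for illustration only; the hypothesis names, forms and weakness values are invented and not taken from the paper) shows why the correlation depends on interpretation: the same constraints can be written in forms of any length, so one language can force the shortest form to coincide with the weakest constraint while another breaks the correlation entirely.

```python
# Toy sketch: weakness is a property of the constraint itself, while form
# length depends on the language the constraint is written in. A language can
# therefore be chosen so that "pick the simplest form" agrees with
# "pick the weakest constraint", or so that it does not.
hypotheses = {      # name -> weakness (size of extension, how much it permits)
    "h1": 4,
    "h2": 8,
    "h3": 16,
}

# Language A: code lengths assigned with no regard to weakness.
forms_a = {"h1": "p", "h2": "qqqqqqq", "h3": "rrrr"}
# Language B: weaker constraints deliberately given shorter forms.
forms_b = {"h1": "sssss", "h2": "ttt", "h3": "u"}

def weakest(hyps):
    return max(hyps, key=hyps.get)

def simplest(forms):
    return min(forms, key=lambda k: len(forms[k]))

print(weakest(hypotheses))   # h3, regardless of how the forms are written
print(simplest(forms_a))     # h1: under language A, simplicity disagrees
print(simplest(forms_b))     # h3: under language B, the correlation is forced
```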
Michael Timothy Bennett
Awarded "Best Student Paper" and published in Proceedings of The 17th International Conference on Artificial General Intelligence, 2024
The concept of intelligent software is flawed. The behaviour of software is determined by the hardware that "interprets" it. This undermines claims regarding the behaviour of theorised software superintelligence.
Here we characterise this problem as "computational dualism", where instead of mental and physical substance, we have software and hardware. We argue that to make objective claims regarding performance we must avoid computational dualism. We propose a pancomputational alternative wherein every aspect of the environment is a relation between irreducible states. We formalise systems as behaviour (inputs and outputs), and cognition as embodied, embedded, extended and enactive. The result is cognition formalised as a part of the environment, rather than as a disembodied policy interacting with the environment through an interpreter. This allows us to make objective claims regarding intelligence, which we argue is the ability to "generalise", identify causes and adapt. We then establish objective upper bounds for intelligent behaviour. This suggests AGI will be safer, but more limited, than theorised.
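A simplified sketch of the behavioural view (an illustrative toy, not the paper's pancomputational formalism): if a system is identified with its input-output behaviour rather than with program text awaiting an interpreter, then two very different pieces of "software" that behave identically are the same system, and claims about that system no longer depend on the choice of interpreter.

```python
# Simplified illustration, not the paper's formalism: identify a system with
# its behaviour (the set of input-output pairs it realises) rather than with
# the program text that some hardware would have to interpret.
from itertools import product

def prog_a(bits):
    """One piece of 'software': parity via modular arithmetic."""
    return sum(bits) % 2

def prog_b(bits):
    """Very different 'software': parity via a chain of XORs."""
    out = 0
    for b in bits:
        out ^= b
    return out

def behaviour(prog, inputs):
    """The extensional, interpreter-free view of a system."""
    return frozenset((x, prog(list(x))) for x in inputs)

inputs = list(product([0, 1], repeat=3))
print(prog_a is prog_b)                                        # False: distinct software
print(behaviour(prog_a, inputs) == behaviour(prog_b, inputs))  # True: the same behaviour
```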
Michael Timothy Bennett
Proceedings of The 16th International Conference on Artificial General Intelligence, 2023
We integrate foundational theories of meaning with a mathematical formalism of artificial general intelligence (AGI) to offer a comprehensive mechanistic explanation of meaning, communication, and symbol emergence. This synthesis holds significance for both AGI and broader debates concerning the nature of language, as it unifies pragmatics, logical truth conditional semantics, Peircean semiotics, and a computable model of enactive cognition, addressing phenomena that have traditionally evaded mechanistic explanation. By examining the conditions under which a machine can generate meaningful utterances or comprehend human meaning, we establish that the current generation of language models does not possess the same understanding of meaning as humans, nor does it intend any meaning that we might attribute to its responses. To address this, we propose simulating human feelings and optimising models to construct weak representations. Our findings shed light on the relationship between meaning and intelligence, and how we can build machines that comprehend and intend meaning.
Michael Timothy Bennett
Proceedings of The 16th International Conference on Artificial General Intelligence, 2023
If A and B are sets such that A is a subset of B, generalisation may be understood as the inference from A of a hypothesis sufficient to construct B. One might infer any number of hypotheses from A, yet only some of those may generalise to B. How can one know which are likely to generalise? One strategy is to choose the shortest, equating the ability to compress information with the ability to generalise (a “proxy for intelligence”). We examine this in the context of a mathematical formalism of enactive cognition. We show that compression is neither necessary nor sufficient to maximise performance (measured in terms of the probability of a hypothesis generalising). We formulate a proxy unrelated to length or simplicity, called weakness. We show that if tasks are uniformly distributed, then there is no choice of proxy that performs at least as well as weakness maximisation in all tasks while performing strictly better in at least one. In other words, weakness is the Pareto-optimal choice of proxy. In experiments comparing maximum weakness and minimum description length in the context of binary arithmetic, the former generalised at between 1.1 and 5 times the rate of the latter. We argue this demonstrates that weakness is a far better proxy, and explains why DeepMind's Apperception Engine is able to generalise effectively.
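The counting argument behind weakness can be illustrated with a toy (the statements, candidate hypotheses and encoding below are invented for illustration; the paper's definitions of task, extension and weakness are more involved, and also require hypotheses to be correct policies rather than merely consistent): if all that is known is that the sample is contained in the unknown task, and tasks are uniformly distributed, then the probability that a hypothesis accounts for the whole task grows with the size of its extension, while the length of whichever form it happens to be written in need not track this.

```python
# Toy version of the counting argument, not the paper's formalism.
# Statements are the integers 0..15, the sample A is known to be part of the
# unknown task B, and a hypothesis "generalises" here if its extension
# contains all of B. Weakness = size of the extension.
import random, zlib

random.seed(1)
Z = list(range(16))            # every possible statement
A = set(Z[:4])                 # the observed sample, a subset of B

def random_task():
    """Uniformly distributed unknown task: any superset of A."""
    return A | {s for s in Z[4:] if random.random() < 0.5}

candidates = {                 # three hypotheses, all containing A
    "narrow":  A | set(Z[4:8]),     # weakness  8
    "middle":  A | set(Z[4:12]),    # weakness 12
    "weakest": set(Z),              # weakness 16
}

for name, h in candidates.items():
    trials = 20_000
    rate = sum(random_task() <= h for _ in range(trials)) / trials
    form = zlib.compress(bytes(sorted(h)))   # one arbitrary written form of h
    print(f"{name:8s} weakness={len(h):2d} "
          f"form length={len(form):2d} bytes  gen. rate={rate:.3f}")
```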
Michael Timothy Bennett
Springer Nature, LNAI, January 2022
The following defines intent, an arbitrary task and its solutions, and then argues that an agent which always constructs what is called an Intensional Solution would qualify as artificial general intelligence. We then explain how natural language may emerge and be acquired by such an agent, conferring the ability to model the intent of other individuals labouring under similar compulsions, because an abstract symbol system and the solution to a task are one and the same.
Michael Timothy Bennett and Yoshihiro Maruyama
Springer Nature, LNAI, January 2022
We attempt to define what is necessary to construct an Artificial Scientist; we explore and evaluate several approaches to artificial general intelligence (AGI) which may facilitate this, conclude that a unified or hybrid approach is necessary, and explore two theories that satisfy this requirement to some degree.
Michael Timothy Bennett
Springer Nature, LNAI, January 2022
The following briefly discusses possible difficulties in communication with and control of an AGI (artificial general intelligence), building upon an explanation of The Fermi Paradox and preceding work on symbol emergence and artificial general intelligence. The latter suggests that to infer what someone means, an agent constructs a rationale for the observed behaviour of others. Communication then requires that two agents labour under similar compulsions and have similar experiences (construct similar solutions to similar tasks). Any non-human intelligence may construct solutions such that any rationale for its behaviour (and thus the meaning of its signals) is outside the scope of what a human is inclined to notice or comprehend. Further, the more compressed a signal, the closer it will appear to random noise. Another intelligence may possess the ability to compress information to the extent that, to us, their signals would appear indistinguishable from noise (an explanation for The Fermi Paradox). To facilitate predictive accuracy, an AGI would tend toward more compressed representations of the world, making any rationale for its behaviour more difficult to comprehend for the same reason. Communication with and control of an AGI may subsequently necessitate not only human-like compulsions and experiences, but imposed cognitive impairment.
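The point that compression pushes a signal toward noise can be illustrated generically (the snippet below is a standard information-theoretic demonstration, not drawn from the paper): removing redundancy drives the byte-level statistics of the compressed signal toward those of uniform random noise.

```python
# Generic illustration: a redundant "signal" compressed well becomes nearly
# indistinguishable, in its byte statistics, from uniform random noise.
import math, os, random, zlib
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    """Empirical byte entropy (8.0 would be ideal uniform noise)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
vocab = ["cause", "effect", "signal", "noise", "agent", "model", "goal", "world"]
signal = " ".join(random.choices(vocab, k=5000)).encode()   # highly redundant signal
compressed = zlib.compress(signal, level=9)
noise = os.urandom(len(compressed))

print(f"raw signal : {bits_per_byte(signal):.2f} bits/byte")
print(f"compressed : {bits_per_byte(compressed):.2f} bits/byte")
print(f"pure noise : {bits_per_byte(noise):.2f} bits/byte")
```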
Michael Timothy Bennett and Yoshihiro Maruyama
IEEE, TCDS, July 2021
In order to construct an ethical artificial intelligence (AI) two complex problems must be overcome. First, humans do not consistently agree on what is or is not ethical. Second, contemporary AI and machine learning methods tend to be blunt instruments which either search for solutions within the bounds of predefined rules, or mimic behaviour. An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, possess and be able to infer intent, and explain not just its actions but its intent. Using enactivism, semiotics, perceptual symbol systems and symbol emergence, we specify an agent that learns not just arbitrary relations between signs but their meaning in terms of the perceptual states of its sensorimotor system. Subsequently, it can learn what is meant by a sentence and infer the intent of others in terms of its own experiences. It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal. As such it may learn a concept of what is most likely to be considered ethical by the majority within a population of humans, which may then be used as a goal. The meaning of abstract symbols is expressed using perceptual symbols of raw, multimodal sensorimotor stimuli as the weakest (consistent with Ockham’s Razor) necessary and sufficient concept, an intensional definition learned from an ostensive definition, from which the extensional definition or category of all ethical decisions may be obtained. Because these abstract symbols are the same for both situation and response, the same symbol is used when either performing or observing an action. This is akin to mirror neurons in the human brain. Mirror symbols may allow the agent to empathise, because its own experiences are associated with the symbol, which is also associated with the observation of another agent experiencing something that symbol represents.
Ashitha Ganapathy and Michael Timothy Bennett
IEEE, 21CW, May 2021
The disruption caused by the pandemic has called into question industrial norms and created an opportunity to reimagine the future of work. We discuss how this period of opportunity may be leveraged to bring about a future in which the workforce thrives rather than survives. Any coherent plan of such breadth must address the interaction of multiple technological, social, economic, and environmental systems. A shared language that facilitates communication across disciplinary boundaries can bring together stakeholders and facilitate a considered response. The origin story of cybernetics and the ideas posed therein serve to illustrate how we may better understand present complex challenges, to create a future of work that places human values at its core.