Michael Timothy Bennett
Research Papers

Google Scholar

The Optimal Choice of Hypothesis Is the Weakest, Not the Shortest

Michael Timothy Bennett

Under Review, 2023

If A and B are sets such that A is a subset of B, generalisation may be understood as the inference from A of a hypothesis sufficient to construct B. One might infer any number of hypotheses from A, yet only some of those may generalise to B. How can one know which are likely to generalise? One strategy is to choose the shortest, equating the ability to compress information with the ability to generalise (a "proxy for intelligence"). We examine this in the context of a mathematical formalism of enactive cognition. We show that compression is neither necessary nor sufficient to maximise performance (measured in terms of the probability of a hypothesis generalising). We formulate a proxy unrelated to length or simplicity, called weakness. We show that if tasks are uniformly distributed, then there is no choice of proxy that performs at least as well as weakness maximisation in all tasks while performing strictly better in at least one. In other words, weakness is the Pareto optimal choice of proxy. In experiments comparing maximum weakness and minimum description length in the context of binary arithmetic, the former generalised at between 1.1 and 5 times the rate of the latter. We argue this demonstrates that weakness is a far better proxy, and explains why DeepMind's Apperception Engine is able to generalise effectively.
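
The contrast between the two proxies can be made concrete with a small sketch. The Python below is an illustration only, not the paper's formalism: hypotheses are modelled as finite sets of statements (their extensions), "weakness" as the size of that extension, and "description length" as the length of a toy string encoding chosen for the example.

```python
# Illustrative sketch only (not the paper's formalism): hypotheses are
# finite sets of (input, output) statements, "weakness" is the size of a
# hypothesis's extension, and "description length" is the length of a
# toy string encoding chosen for this example.

def is_consistent(extension, observed):
    """A hypothesis is consistent with A if its extension contains A."""
    return observed <= extension

def weakness(extension):
    """Weakness proxy: how many statements the hypothesis entails."""
    return len(extension)

def description_length(encoding):
    """MDL proxy: how long the hypothesis's encoding is."""
    return len(encoding)

# Observed statements A: a fragment of single-bit addition.
A = {("0+0", "0"), ("0+1", "1")}

# Candidate hypotheses, each given as (extension, toy encoding).
candidates = {
    "lookup table": ({("0+0", "0"), ("0+1", "1")},
                     "0+0=0;0+1=1"),
    "addition rule": ({("0+0", "0"), ("0+1", "1"),
                       ("1+0", "1"), ("1+1", "10")},
                      "x+y=add(x,y)"),
}

consistent = {name: pair for name, pair in candidates.items()
              if is_consistent(pair[0], A)}

weakest = max(consistent, key=lambda n: weakness(consistent[n][0]))
shortest = min(consistent, key=lambda n: description_length(consistent[n][1]))

print("weakest consistent hypothesis :", weakest)   # addition rule
print("shortest consistent hypothesis:", shortest)  # lookup table
```

Even under these toy assumptions the two proxies disagree: the weakest consistent hypothesis is the full single-bit addition rule, while the shortest encoding is the lookup table, which cannot generalise to the unseen cases.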

Read Paper

On the Computation of Meaning, Language Models and Incomprehensible Horrors

Michael Timothy Bennett

Under Review, 2023

We bring together foundational theories of meaning and a mathematical formalism of artificial general intelligence to provide a mechanistic explanation of meaning, communication and symbol emergence. We establish circumstances under which a machine might mean what we think it means by what it says, or comprehend what we mean by what we say. We conclude that a language model such as ChatGPT does not comprehend or engage in meaningful communication with humans, though it may exhibit complex behaviours such as theory of mind.

Read Paper

Emergent Causality & the Foundation of Consciousness

Michael Timothy Bennett

Under Review, 2023

To make accurate inferences in an interactive setting, an agent must not confuse passive observation of events with having participated in causing those events. The "do" operator formalises interventions so that we may reason about their effect. Yet there exist at least two Pareto optimal mathematical formalisms of general intelligence in an interactive setting which, presupposing no explicit representation of intervention, make maximally accurate inferences. We examine one such formalism. We show that in the absence of an operator, an intervention can still be represented by a variable. Furthermore, the need to explicitly represent interventions in advance arises only because we presuppose abstractions. The aforementioned formalism avoids this and so, initial conditions permitting, representations of relevant causal interventions will emerge through induction. These emergent abstractions function as representations of one's self and of any other object, inasmuch as the interventions of those objects impact the satisfaction of goals. We argue (with reference to theory of mind) that this explains how one might reason about one's own identity and intent, those of others, one's own as perceived by others, and so on. In a narrow sense this describes what it is to be aware, and is a mechanistic explanation of aspects of consciousness.
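
As a rough illustration of the idea that an intervention can be represented by an ordinary variable, consider the following sketch. It is an editorial toy construction, not the paper's formalism: a single argument acts as a regime variable that either leaves X to its usual mechanism or clamps it, and the two regimes induce different statistics.

```python
# Toy illustration (not the paper's formalism): an intervention is
# represented as just another variable, rather than a special operator.
import random

def sample(intervene_x=None):
    """One draw from a toy chain U -> X -> Y.

    `intervene_x` acts as an intervention on X, but it enters the model
    as an ordinary variable: None means "observe", True/False means "set".
    """
    u = random.random() < 0.5                 # exogenous cause of X
    x = u if intervene_x is None else intervene_x
    y = x ^ (random.random() < 0.1)           # Y copies X with 10% noise
    return u, x, y

observational  = [sample() for _ in range(10_000)]
interventional = [sample(intervene_x=True) for _ in range(10_000)]

def p_u_given_x_true(draws):
    """Estimate P(U = True | X = True) from a list of (u, x, y) draws."""
    with_x = [u for u, x, _ in draws if x]
    return sum(with_x) / len(with_x)

# Observing X = True is evidence about U; setting X = True is not.
print(p_u_given_x_true(observational))   # ~1.0
print(p_u_given_x_true(interventional))  # ~0.5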

Read Paper

Enactivism & Objectively Optimal Super-Intelligence

Michael Timothy Bennett

Under Review, 2023

Software's effect upon the world hinges upon the hardware that interprets it. This tends not to be an issue, because we standardise hardware. AI is typically conceived of as a software mind running on such interchangeable hardware. The hardware interacts with an environment, and the software interacts with the hardware. This formalises mind-body dualism, in that a software mind can be run on any number of standardised bodies. While this works well for simple applications, we argue that this approach is less than ideal for the purposes of formalising artificial general intelligence (AGI) or artificial super-intelligence (ASI). The general reinforcement learning agent AIXI is Pareto optimal. However, this claim regarding AIXI's performance is highly subjective, because that performance depends upon the choice of interpreter. We examine this problem and formulate an approach based upon enactive cognition and pancomputationalism to address the issue. Weakness is a measure of simplicity, a "proxy for intelligence" unrelated to compression. If hypotheses are evaluated in terms of weakness, rather than length, we are able to make objective claims regarding performance. Subsequently, we propose objectively optimal notions of AGI and ASI such that the former is computable and the latter anytime computable (though impractical).
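
The dependence on the interpreter can be made vivid with a toy example (nothing here is drawn from the paper): the same program string, run by two different interpreters, produces different behaviour, so any claim about the program's performance implicitly fixes an interpreter.

```python
# Toy example (not the paper's): the same program string behaves
# differently under different interpreters, so performance claims about
# the program alone are underdetermined.
program = "NNNN"  # four identical instructions

def interpreter_a(prog: str) -> int:
    """Reads every symbol as 'add one to the state'."""
    state = 0
    for _ in prog:
        state += 1
    return state

def interpreter_b(prog: str) -> int:
    """Reads every symbol as 'double the state'."""
    state = 1
    for _ in prog:
        state *= 2
    return state

print(interpreter_a(program))  # 4
print(interpreter_b(program))  # 16
```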

Read Paper

Computable Artificial General Intelligence

Michael Timothy Bennett

Under Review, May 2022

Artificial general intelligence (AGI) may herald our extinction, according to AI safety research. Yet claims regarding AGI must rely upon mathematical formalisms -- theoretical agents we may analyse or attempt to build. AIXI appears to be the only such formalism supported by proof that its behaviour is optimal, a consequence of its use of compression as a proxy for intelligence. Unfortunately, AIXI is incomputable and claims regarding its behaviour are highly subjective. We argue that this is because AIXI formalises cognition as taking place in isolation from the environment in which goals are pursued (Cartesian dualism). We propose an alternative, supported by proof and experiment, which overcomes these problems. Integrating research from cognitive science with AI, we formalise an enactive model of learning and reasoning to address the problem of subjectivity. This allows us to formulate a different proxy for intelligence, called weakness, which addresses the problem of incomputability. We prove optimal behaviour is attained when weakness is maximised. This proof is supplemented by experimental results comparing weakness and description length (the closest analogue to compression possible without reintroducing subjectivity). Weakness outperforms description length, suggesting it is a better proxy. Furthermore, we show that, if cognition is enactive, then minimisation of description length is neither necessary nor sufficient to attain optimal performance. These results undermine the notion that compression is closely related to intelligence. We conclude with a discussion of limitations, implications and future research. There remain several open questions regarding the implementation of scalable general intelligence. In the short term, these results may be best utilised to improve the performance of existing systems. For example, our results explain why DeepMind's Apperception Engine is able to generalise effectively, and how to replicate that performance by maximising weakness. Likewise, in the context of neural networks, our results suggest both the limitations of "scale is all you need", and how those limitations can be overcome.
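
The counting intuition behind weakness as a proxy can be sketched informally. The framing below is an editorial paraphrase under simplifying assumptions (a uniform draw over completions), not the paper's definitions or theorem.

```latex
% Informal counting sketch (a paraphrase, not the paper's theorem).
% Let T be the unknown task, drawn uniformly from the N completions
% consistent with what has been observed, and let ext(H) be the set of
% those completions that a hypothesis H solves. Then
\[
  \Pr\left[ T \in \mathrm{ext}(H) \right] = \frac{|\mathrm{ext}(H)|}{N},
\]
% which grows with |ext(H)| (the weakness of H) and does not depend on
% the length of any particular encoding of H.
```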

Read Paper

Symbol Emergence and The Solutions to Any Task

Michael Timothy Bennett

Springer Nature, LNAI, January 2022

The following defines intent, an arbitrary task and its solutions, and then argues that an agent which always constructs what is called an Intensional Solution would qualify as artificial general intelligence. We then explain how natural language may emerge and be acquired by such an agent, conferring the ability to model the intent of other individuals labouring under similar compulsions, because an abstract symbol system and the solution to a task are one and the same.

Read Paper

The Artificial Scientist

Michael Timothy Bennett and Yoshihiro Maruyama

Springer Nature, LNAI, January 2022

We attempt to define what is necessary to construct an Artificial Scientist, explore and evaluate several approaches to artificial general intelligence (AGI) which may facilitate this, conclude that a unified or hybrid approach is necessary, and explore two theories that satisfy this requirement to some degree.

Read Paper

Compression, The Fermi Paradox and Artificial Super-Intelligence

Michael Timothy Bennett

Springer Nature, LNAI, January 2022

The following briefly discusses possible difficulties in communication with and control of an AGI (artificial general intelligence), building upon an explanation of the Fermi Paradox and preceding work on symbol emergence and artificial general intelligence. The latter suggests that to infer what someone means, an agent constructs a rationale for the observed behaviour of others. Communication then requires that two agents labour under similar compulsions and have similar experiences (construct similar solutions to similar tasks). Any non-human intelligence may construct solutions such that any rationale for their behaviour (and thus the meaning of their signals) is outside the scope of what a human is inclined to notice or comprehend. Further, the more compressed a signal, the closer it will appear to random noise. Another intelligence may possess the ability to compress information to the extent that, to us, their signals would appear indistinguishable from noise (an explanation for the Fermi Paradox). To facilitate predictive accuracy, an AGI would tend toward more compressed representations of the world, making any rationale for its behaviour more difficult to comprehend for the same reason. Communication with and control of an AGI may subsequently necessitate not only human-like compulsions and experiences, but imposed cognitive impairment.
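
The observation that a well-compressed signal looks like noise can be checked with a small experiment. The sketch below is an editorial illustration, using zlib as a stand-in for whatever compression a more capable intelligence might employ: it compares the byte-level entropy of structured text, its compressed form, and genuinely random bytes.

```python
# Small illustration (not the paper's): well-compressed data has
# near-maximal byte entropy, so statistically it resembles random noise.
import math
import os
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 would be uniformly random)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

structured = "".join(f"record {i}: value={i * i}\n" for i in range(5000)).encode()
compressed = zlib.compress(structured, 9)
noise = os.urandom(len(compressed))

print(f"structured text: {byte_entropy(structured):.2f} bits/byte")
print(f"compressed     : {byte_entropy(compressed):.2f} bits/byte")
print(f"random bytes   : {byte_entropy(noise):.2f} bits/byte")
```

On a typical run the compressed stream's entropy sits close to that of random bytes, while the structured text sits well below, which is the sense in which a highly compressed signal is hard to tell from noise.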

Read Paper

Philosophical Specification of Empathetic Ethical Artificial Intelligence

Michael Timothy Bennett and Yoshihiro Maruyama

IEEE, TCDS, July 2021

In order to construct an ethical artificial intelligence (AI), two complex problems must be overcome. First, humans do not consistently agree on what is or is not ethical. Second, contemporary AI and machine learning methods tend to be blunt instruments which either search for solutions within the bounds of predefined rules, or mimic behaviour. An ethical AI must be capable of inferring unspoken rules and interpreting nuance and context, must possess and be able to infer intent, and must explain not just its actions but its intent. Using enactivism, semiotics, perceptual symbol systems and symbol emergence, we specify an agent that learns not just arbitrary relations between signs but their meaning in terms of the perceptual states of its sensorimotor system. Subsequently it can learn what is meant by a sentence and infer the intent of others in terms of its own experiences. It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal. As such it may learn a concept of what is most likely to be considered ethical by the majority within a population of humans, which may then be used as a goal. The meaning of abstract symbols is expressed using perceptual symbols of raw, multimodal sensorimotor stimuli as the weakest (consistent with Ockham's Razor) necessary and sufficient concept, an intensional definition learned from an ostensive definition, from which the extensional definition or category of all ethical decisions may be obtained. Because these abstract symbols are the same for both situation and response, the same symbol is used when either performing or observing an action. This is akin to mirror neurons in the human brain. Mirror symbols may allow the agent to empathise, because its own experiences are associated with the symbol, which is also associated with the observation of another agent experiencing something that symbol represents.

Read Paper

Cybernetics and the Future of Work

Ashitha Ganapathy and Michael Timothy Bennett

IEEE, 21CW, May 2021

The disruption caused by the pandemic has called into question industrial norms and created an opportunity to reimagine the future of work. We discuss how this period of opportunity may be leveraged to bring about a future in which the workforce thrives rather than survives. Any coherent plan of such breadth must address the interaction of multiple technological, social, economic, and environmental systems. A shared language that facilitates communication across disciplinary boundaries can bring together stakeholders and facilitate a considered response. The origin story of cybernetics and the ideas posed therein serve to illustrate how we may better understand present complex challenges, to create a future of work that places human values at its core.

Read Paper

Intensional Artificial Intelligence

Michael Timothy Bennett and Yoshihiro Maruyama

Manuscript, April 2021

We argue that an explainable artificial intelligence must possess a rationale for its decisions, be able to infer the purpose of observed behaviour, and be able to explain its decisions in the context of what its audience understands and intends. To address these issues we present four novel contributions. Firstly, we define an arbitrary task in terms of perceptual states, and discuss two extremes of a domain of possible solutions. Secondly, we define the intensional solution. Optimal by some definitions of intelligence, it describes the purpose of a task. An agent possessed of it has a rationale for its decisions in terms of that purpose, expressed in a perceptual symbol system grounded in hardware. Thirdly, to communicate that rationale requires natural language, a means of encoding and decoding perceptual states. We propose a theory of meaning in which, to acquire language, an agent should model the world a language describes rather than the language itself. If the utterances of humans are of predictive value to the agent's goals, then the agent will imbue those utterances with meaning in terms of its own goals and perceptual states. In the context of Peircean semiotics, a community of agents must share rough approximations of signs, referents and interpretants in order to communicate. Meaning exists only in the context of intent, so to communicate with humans an agent must have comparable experiences and goals. Finally, an agent that learns intensional solutions, compelled by objective functions somewhat analogous to human motivators such as hunger and pain, may be capable of explaining its rationale not just in terms of its own intent, but in terms of what its audience understands and intends. It forms some approximation of the perceptual states of humans.

Read Paper

Copyright © 2022 Michael Timothy Bennett - All Rights Reserved.