Michael Timothy Bennett

Research Papers

Symbol Emergence and The Solutions to Any Task

Michael Timothy Bennett

Springer Nature, LNAI, January 2022

The following defines intent, an arbitrary task and its solutions, and then argues that an agent which always constructs what is called an Intensional Solution would qualify as artificial general intelligence. We then explain how natural language may emerge and be acquired by such an agent, conferring the ability to model the intent of other individuals labouring under similar compulsions, because an abstract symbol system and the solution to a task are one and the same.

Read Paper

The Artificial Scientist

Michael Timothy Bennett and Yoshihiro Maruyama

Springer Nature, LNAI, January 2022

We attempt to define what is necessary to construct an Artificial Scientist, explore and evaluate several approaches to artificial general intelligence (AGI) which may facilitate this, conclude that a unified or hybrid approach is necessary, and explore two theories that satisfy this requirement to some degree.

Read Paper

Compression, The Fermi Paradox and Artificial Super-Intelligence

Michael Timothy Bennett

Springer Nature, LNAI, January 2022

The following briefly discusses possible difficulties in communication with and control of an AGI (artificial general intelligence), building upon an explanation of The Fermi Paradox and preceding work on symbol emergence and artificial general intelligence. The latter suggests that to infer what someone means, an agent constructs a rationale for the observed behaviour of others. Communication then requires that two agents labour under similar compulsions and have similar experiences (construct similar solutions to similar tasks). Any non-human intelligence may construct solutions such that any rationale for their behaviour (and thus the meaning of their signals) is outside the scope of what a human is inclined to notice or comprehend. Further, the more compressed a signal, the closer it will appear to random noise. Another intelligence may possess the ability to compress information to the extent that, to us, their signals would appear indistinguishable from noise (an explanation for The Fermi Paradox). To facilitate predictive accuracy an AGI would tend toward more compressed representations of the world, making any rationale for its behaviour more difficult to comprehend for the same reason. Communication with and control of an AGI may subsequently necessitate not only human-like compulsions and experiences, but imposed cognitive impairment.
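
A minimal sketch of the compression point above, not drawn from the paper itself: the sample text, compression level and entropy measure below are assumptions chosen only for illustration. It shows that as data is compressed with a standard codec (zlib), its byte-level entropy rises toward that of uniform random bytes, which is why a well-compressed signal can be hard to tell apart from noise.

import math
import os
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    # Shannon entropy in bits per byte of the given byte string.
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Highly redundant text, its compressed form, and random bytes of the same length.
text = ("the quick brown fox jumps over the lazy dog " * 200).encode()
compressed = zlib.compress(text, level=9)
noise = os.urandom(len(compressed))

print(f"plain text : {byte_entropy(text):.2f} bits/byte")
print(f"compressed : {byte_entropy(compressed):.2f} bits/byte")
print(f"random     : {byte_entropy(noise):.2f} bits/byte")

Run as-is, this typically prints roughly 4 bits per byte for the plain text and close to the 8 bits per byte of random noise for the compressed stream, which is the sense in which compression pushes a signal toward apparent randomness.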

Read Paper

Philosophical Specification of Empathetic Ethical Artificial Intelligence

Michael Timothy Bennett and Yoshihiro Maruyama

IEEE, TCDS, July 2021

In order to construct an ethical artificial intelligence (AI), two complex problems must be overcome. First, humans do not consistently agree on what is or is not ethical. Second, contemporary AI and machine learning methods tend to be blunt instruments which either search for solutions within the bounds of predefined rules, or mimic behaviour. An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, possess and be able to infer intent, and explain not just its actions but its intent. Using enactivism, semiotics, perceptual symbol systems and symbol emergence, we specify an agent that learns not just arbitrary relations between signs but their meaning in terms of the perceptual states of its sensorimotor system. Subsequently it can learn what is meant by a sentence and infer the intent of others in terms of its own experiences. It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal. As such it may learn a concept of what is most likely to be considered ethical by the majority within a population of humans, which may then be used as a goal. The meaning of abstract symbols is expressed using perceptual symbols of raw, multimodal sensorimotor stimuli as the weakest (consistent with Ockham's Razor) necessary and sufficient concept, an intensional definition learned from an ostensive definition, from which the extensional definition or category of all ethical decisions may be obtained. Because these abstract symbols are the same for both situation and response, the same symbol is used when either performing or observing an action. This is akin to mirror neurons in the human brain. Mirror symbols may allow the agent to empathise, because its own experiences are associated with the symbol, which is also associated with the observation of another agent experiencing something that symbol represents.
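
A toy sketch of the intensional, ostensive and extensional definitions mentioned above; the domain of decisions, the hypothesis space and the function names (DOMAIN, HYPOTHESES, weakest_consistent) are invented for illustration and are not the paper's formalism. Starting from an ostensive definition (a few decisions labelled ethical or not), it selects the weakest hypothesis consistent with those labels (an intensional definition) and then reads off its extension, the category of all decisions that satisfy it.

# A tiny domain of decisions, each described by a set of observed properties.
DOMAIN = {
    "a": {"honest", "kind"},
    "b": {"honest"},
    "c": {"kind"},
    "d": set(),
}

# Hypotheses are conjunctions of required properties; fewer requirements = weaker.
HYPOTHESES = [set(), {"honest"}, {"kind"}, {"honest", "kind"}]

def extension(hypothesis):
    # All decisions in the domain satisfying the hypothesis.
    return {name for name, props in DOMAIN.items() if hypothesis <= props}

def weakest_consistent(positives, negatives):
    # Weakest hypothesis covering every positive example and no negative example.
    consistent = [
        h for h in HYPOTHESES
        if positives <= extension(h) and not (negatives & extension(h))
    ]
    # Weakest = fewest constraints, hence the largest extension.
    return min(consistent, key=len) if consistent else None

# Ostensive definition: "a" and "b" are ethical, "d" is not.
intensional = weakest_consistent({"a", "b"}, {"d"})
print("intensional definition:", intensional)
print("extensional definition:", extension(intensional))

Here the weakest consistent hypothesis is {'honest'} and its extension is {'a', 'b'}; in the paper's setting a far richer hypothesis space over perceptual states would take the place of these toy sets.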

Read Paper

Cybernetics and the Future of Work

Ashitha Ganapathy and Michael Timothy Bennett

IEEE, 21CW, May 2021

The disruption caused by the pandemic has called into question industrial norms and created an opportunity to reimagine the future of work. We discuss how this period of opportunity may be leveraged to bring about a future in which the workforce thrives rather than survives. Any coherent plan of such breadth must address the interaction of multiple technological, social, economic, and environmental systems. A shared language that facilitates communication across disciplinary boundaries can bring together stakeholders and facilitate a considered response. The origin story of cybernetics and the ideas posed therein serve to illustrate how we may better understand present complex challenges, and to create a future of work that places human values at its core.

Read Paper

Intensional Artificial Intelligence

Michael Timothy Bennett and Yoshihiro Maruyama

Manuscript, April 2021

We argue that an explainable artificial intelligence must possess a rationale for its decisions, be able to infer the purpose of observed behaviour, and be able to explain its decisions in the context of what its audience understands and intends. To address these issues we present four novel contributions. Firstly, we define an arbitrary task in terms of perceptual states, and discuss two extremes of a domain of possible solutions. Secondly, we define the intensional solution. Optimal by some definitions of intelligence, it describes the purpose of a task. An agent possessed of it has a rationale for its decisions in terms of that purpose, expressed in a perceptual symbol system grounded in hardware. Thirdly, to communicate that rationale requires natural language, a means of encoding and decoding perceptual states. We propose a theory of meaning in which, to acquire language, an agent should model the world a language describes rather than the language itself. If the utterances of humans are of predictive value to the agent's goals, then the agent will imbue those utterances with meaning in terms of its own goals and perceptual states. In the context of Peircean semiotics, a community of agents must share rough approximations of signs, referents and interpretants in order to communicate. Meaning exists only in the context of intent, so to communicate with humans an agent must have comparable experiences and goals. An agent that learns intensional solutions, compelled by objective functions somewhat analogous to human motivators such as hunger and pain, may be capable of explaining its rationale not just in terms of its own intent, but in terms of what its audience understands and intends. It forms some approximation of the perceptual states of humans.

Read Paper

Copyright © 2022 Michael Timothy Bennett - All Rights Reserved.