Kenneth Stanley
(1975-)
Kenneth Stanley is an artificial intelligence researcher and author who was a professor of computer science at the University of Central Florida from 2006 to 2020. He is the creator of the Neuroevolution of Augmenting Topologies (NEAT) algorithm. NEAT is a topology and weight evolving artificial neural network (TWEANN) method: it attempts to learn both the weight values and an appropriate topology for a neural network.
NEAT begins with a perceptron-like feed-forward network of only input neurons and output neurons. As evolution progresses through discrete steps, the complexity of the network's topology may grow, either by inserting a new neuron into a connection path or by creating a new connection between formerly unconnected neurons. NEAT represents this information in a "genotype" as a list of node genes and connection genes; historical markers on the genes allow different genomes that contain the same structural innovations to be aligned and recombined.
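A minimal Python sketch of the two structural mutations just described, assuming a toy genome of node ids and connection records. The names (Genome, mutate_add_node, mutate_add_connection) are illustrative, not the API of any NEAT library, and the sketch omits innovation numbers, speciation, and crossover:

```python
import random

class Genome:
    """Toy NEAT-style genome: node ids plus connection records (sketch only)."""

    def __init__(self, n_inputs, n_outputs):
        # Start perceptron-like: input and output nodes only, fully connected.
        self.nodes = list(range(n_inputs + n_outputs))
        self.connections = [
            {"src": i, "dst": n_inputs + o,
             "weight": random.uniform(-1, 1), "enabled": True}
            for i in range(n_inputs) for o in range(n_outputs)
        ]

    def mutate_add_node(self):
        """Insert a new neuron into an existing connection path."""
        conn = random.choice([c for c in self.connections if c["enabled"]])
        conn["enabled"] = False                      # disable the old direct path
        new_node = len(self.nodes)
        self.nodes.append(new_node)
        # Keep behavior roughly intact: weight 1 into the new node, old weight out of it.
        self.connections.append({"src": conn["src"], "dst": new_node,
                                 "weight": 1.0, "enabled": True})
        self.connections.append({"src": new_node, "dst": conn["dst"],
                                 "weight": conn["weight"], "enabled": True})

    def mutate_add_connection(self):
        """Create a new connection between two formerly unconnected nodes."""
        existing = {(c["src"], c["dst"]) for c in self.connections}
        candidates = [(a, b) for a in self.nodes for b in self.nodes
                      if a != b and (a, b) not in existing]
        if candidates:
            src, dst = random.choice(candidates)
            self.connections.append({"src": src, "dst": dst,
                                     "weight": random.uniform(-1, 1), "enabled": True})
```

In full NEAT, each new structural gene also receives a global innovation number, which is what lets genomes with shared ancestry be aligned during crossover.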
In 2007, he created Picbreeder, software that uses NEAT to let users evolve pictures: images are generated, and the user picks which ones will produce children. Over many generations this allows users to shape random blobs into recognizable forms such as animals or cars.
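A rough sketch of that interactive-evolution loop, in which the user's choice of parents replaces a fitness function. The render, pick, and mutate helpers are hypothetical placeholders; Picbreeder itself evolves CPPN genomes with NEAT and renders them as images:

```python
import random

def interactive_evolution(initial_population, render, mutate, pick, generations=10):
    """Generic interactive-evolution loop: human selection replaces the fitness function.

    render(genome) -> image shown to the user
    pick(images)   -> indices of the images the user wants as parents
    mutate(genome) -> slightly altered copy of a genome
    (All three are placeholders for this sketch.)
    """
    population = list(initial_population)
    for _ in range(generations):
        images = [render(g) for g in population]
        chosen = pick(images)                        # the human picks, no fitness score
        parents = [population[i] for i in chosen]
        # Children are mutated copies of the user's picks.
        population = [mutate(random.choice(parents)) for _ in range(len(population))]
    return population
```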
In 2015, with Joel Lehman, he coauthored Why Greatness Cannot Be Planned: The Myth of the Objective, which argues that pursuing novelty rather than a specific objective is more likely to lead to success on creative tasks.
He co-founded and was co-chief scientist of Geometric Intelligence, which was acquired by Uber at the end of 2016 to create Uber AI Labs, where he led core (basic) AI research. In 2020 he joined OpenAI and led its Open-Endedness Team until 2022, when he left to found and lead Maven, an AI-driven social network, which he departed in 2024.
In 2025 Stanley joined Lila Sciences and its quest for "Scientific Superintelligence." At Lila he is the Senior Vice President for "Open-Endedness."
His latest paper is on the representation problem in AI, entitled Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis. He writes:
We compare neural networks evolved through an open-ended search process to networks trained via conventional stochastic gradient descent (SGD) on the simple task of generating a single image. This minimal setup offers a unique advantage: each hidden neuron's full functional behavior can be easily visualized as an image, thus revealing how the network's output behavior is internally constructed neuron by neuron. The result is striking: while both networks produce the same output behavior, their internal representations differ dramatically. The SGD-trained networks exhibit a form of disorganization that we term fractured entangled representation (FER). Interestingly, the evolved networks largely lack FER, even approaching a unified factored representation (UFR). In large models, FER may be degrading core model capacities like generalization, creativity, and (continual) learning. Therefore, understanding and mitigating FER could be critical to the future of representation learning.
Questioning Representational, p.1
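The visualization the abstract relies on is possible because, in this setup, the network maps a pixel's coordinates to that pixel's value, so any hidden neuron can be evaluated over the whole coordinate grid and displayed as an image of its own. A small NumPy sketch under that assumption (the MLP here is random and illustrative, not the paper's networks):

```python
import numpy as np

def visualize_hidden_neurons(weights, biases, size=64):
    """Evaluate a coordinate->pixel MLP on the full image grid and return one image
    per neuron in each layer: that neuron's activation over all pixel coordinates."""
    xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    h = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (size*size, 2) coordinate inputs
    neuron_images = []
    for W, b in zip(weights, biases):
        h = np.tanh(h @ W + b)                       # activations at every pixel
        # Each column of h is one neuron's behavior over the whole image.
        neuron_images.extend(h[:, j].reshape(size, size) for j in range(h.shape[1]))
    return neuron_images

# Example with random parameters: a 2-16-16-1 MLP (the last image is the output itself).
rng = np.random.default_rng(0)
dims = [2, 16, 16, 1]
weights = [rng.normal(size=(a, b)) for a, b in zip(dims[:-1], dims[1:])]
biases = [rng.normal(size=b) for b in dims[1:]]
images = visualize_hidden_neurons(weights, biases)
```

The paper continues: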
We hypothesize that when deep learning models are trained to achieve a specific objective, typically through backpropagation and stochastic gradient descent, the resultant representation tends to be fractured, which means that information underlying the same unitary concepts (e.g. how to add numbers) is split into disconnected pieces. Importantly, these pieces then become redundant as a result of their fracture: they are separately invoked in different contexts to model the same underlying concept, when ideally, a single instance of the concept would have sufficed. In other words, where there would ideally be the reuse of one deep understanding of a concept, instead there are different mechanisms for achieving the same function.
At the same time, these fractured (and hence redundant) functions tend to become entangled with other fractured functions, which means that behaviors that should be independent and modular end up influencing each other in idiosyncratic and inappropriate ways. For example, a set of neurons within an image generator that change hair color might also cause the foliage in the background to change as well, and separating these two effects could be impossible.
Questioning Representational, pp.2-3
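A toy analogy, not drawn from the paper, of the difference between reuse and fracture: both functions below produce identical output, but one reuses a single instance of the "doubling" concept while the other re-implements it separately in different contexts.

```python
# Toy illustration of reuse vs. fracture for the same external behavior.

def double(x):
    return 2 * x

def unified(xs):
    # One instance of the concept, reused in every context.
    return [double(x) for x in xs]

def fractured(xs):
    # The same concept split into redundant, context-specific pieces.
    out = []
    for i, x in enumerate(xs):
        if i % 2 == 0:
            out.append(x + x)        # one "doubling" for even positions
        else:
            out.append(x * 2)        # a separate "doubling" for odd positions
    return out

assert unified([1, 2, 3]) == fractured([1, 2, 3])  # same output, different internals
```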