COGS Seminars provide a forum for internationally recognised researchers from all corners of cognitive science research to present and discuss their latest findings. All are welcome to attend.
Spring 2025
Tuesdays 16:00-17:30
| Date | Seminar | Venue |
|---|---|---|
| Feb 5 | The Thinking Game. Abstract: The Thinking Game takes you on a fascinating journey into the heart of DeepMind, one of the world’s leading AI labs, as it strives to unravel the mysteries of artificial general intelligence (AGI). Inside DeepMind’s London headquarters, founder Demis Hassabis and his team are relentlessly pursuing the creation of AI that matches or surpasses human abilities on a wide range of tasks. Filmed over five years, the documentary puts viewers in the room for the pivotal moments of this quest, including the groundbreaking achievement of AlphaFold, a program that solved a 50-year grand challenge in biology. The film captures the exhilaration of such historic breakthroughs, the crushing weight of disappointment during setbacks, and the unwavering pursuit of knowledge that defines Demis’s commitment to scientific innovation, inviting viewers to witness one of the most important scientific adventures of our time and to explore the potential of AGI to reshape our world. | Chichester Lecture Theatre |
| Feb 18 | The cognitive foundations of the attention economy. Abstract: Herbert Simon’s slogan that information abundance implies attention scarcity is generally taken to be foundational to the very concept of the attention economy, the economic system in which human attention is the scarce resource. My first aim in this talk is to assess how fitting Simon’s framework is for understanding attending in the attention economy. I will argue that it is not, and in the second half will draw upon predictive processing and other more action-oriented frameworks to better understand the challenges faced by human attention in the attention economy. | online; Passcode: 924265 |
| Feb 25 | Out-Of-Distribution Thinking: Philosophical Insights From Machine Learning. Abstract: Why do people struggle to agree on metaphysical questions? In this talk, I propose the hypothesis that metaphysical reasoning in humans resembles "out-of-distribution" (OOD) generalisation in machine learning (ML). According to this hypothesis, our conceptual structures are well-enough aligned for practical purposes because there are strong external pressures (from embodied experience and social context) towards convergence in our everyday conceptual practices. Unfortunately, our conceptual practices in metaphysical domains are only weakly constrained by those forces; in consequence, sophisticated thinkers can disagree profoundly (and apparently irresolvably) on highly abstract conceptual questions. This hypothesis has pragmatic consequences for how we approach philosophy. In particular, if our concepts of "truth" and "reality" turn out to be underconstrained when applied in a metaphysical domain, then there is nothing to reliably anchor metaphysical disagreements to, at least if we see them as candidates for truth evaluation. This might incline some to eliminativism about metaphysics, but I argue instead for adopting a flexible, pluralistic approach. | tba; Passcode: 183366 |
| Mar 4 | tba. Abstract: tba. | online; Passcode: 068196 |
| Mar 18 | Impossible Languages and Possible Solutions. Abstract: The success of Large Language Models draws attention to longstanding questions about the extent to which general learning mechanisms can account for human language acquisition. Those who advocate for strong innate biases point to the underlying shared properties of superficially diverse languages and argue that learning from the noisy, limited data that is available to children requires that the set of possible languages is substantially constrained. In other words, there are certain types of structure which are extremely difficult, if not impossible, for humans to learn. Recent experiments with LSTMs and Transformers show that certain types of "impossible language" are easily learnable by these language models. This suggests that they lack biases towards the structures typically found in natural languages, and so they are learning in a much wider hypothesis space than human language learners. This potentially explains why they require so much more data and yet continue to make surprising failures in generalising to novel inputs. This talk will discuss some of the experiments on impossible languages and potential solutions to the problems of constraining learning and obtaining stronger generalisation. | Pevensey 1-2D4; Passcode: 172549 |
| Mar 25 | tba. Abstract: tba. | Pevensey 1-2D4; Passcode: 614864 |
| Apr 1 | tba. Abstract: tba. | Pevensey 1-2D4; Passcode: 941352 |
Contact COGS
For suggestions for speakers, and for publicity or questions regarding the website, contact Simon Bowes.
Please mention COGS and COGS seminars to all potentially interested newcomers to the university.
A good way to keep informed about COGS Seminars is to be a member of COGS. Any member of the university may join COGS and the COGS mailing list. Please contact Simon Bowes if you would like to be added.
Follow us on Twitter.