Impossible Languages and Possible Solutions
Tuesday 18 March 16:00 until 17:30
Sussex Campus: Pevensey 1-2D4
Speaker: Dr Jeff Mitchell (University of Sussex)
Part of the series: COGS Research Seminars

Abstract: The success of Large Language Models draws attention to longstanding questions about the extent to which general learning mechanisms can account for human language acquisition. Advocates of strong innate biases point to the shared underlying properties of superficially diverse languages and argue that learning from the noisy, limited data available to children requires that the set of possible languages be substantially constrained. In other words, there are certain types of structure that are extremely difficult, if not impossible, for humans to learn.
Recent experiments with LSTMs and Transformers show that certain types of "impossible language" are easily learnable by these language models. This suggests that the models lack biases towards the structures typically found in natural languages, and are therefore learning in a much wider hypothesis space than human language learners. That may help explain why they require so much more data and yet still show surprising failures when generalising to novel inputs.
This talk will discuss some of the experiments on impossible languages, along with potential solutions to the problems of constraining learning and achieving stronger generalisation.
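
To illustrate the kind of manipulation this line of work relies on, the sketch below constructs "impossible language" variants of a toy corpus by applying unnatural, deterministic perturbations (global reversal, a fixed shuffle) to each sentence. This is a minimal illustration only: the specific transformations, corpora, and models used in the experiments are those discussed in the talk, and the function names here are purely hypothetical.

```python
# Minimal sketch: building "impossible language" variants of a corpus by
# applying unnatural, deterministic transformations to each sentence.
# A language model trained on the perturbed corpus can then be compared
# against one trained on the original to probe its inductive biases.
import random


def reverse_tokens(sentence):
    """Globally reverse word order -- a pattern not attested in natural languages."""
    return list(reversed(sentence))


def shuffle_tokens(sentence, seed=0):
    """Deterministically shuffle word order, destroying hierarchical structure."""
    tokens = list(sentence)
    random.Random(seed).shuffle(tokens)
    return tokens


def make_impossible_corpus(corpus, transform):
    """Apply one perturbation to every sentence of a natural-language corpus."""
    return [transform(sentence) for sentence in corpus]


if __name__ == "__main__":
    corpus = [
        ["the", "dog", "chased", "the", "cat"],
        ["she", "saw", "the", "dog"],
    ]
    for transform in (reverse_tokens, shuffle_tokens):
        print(transform.__name__, make_impossible_corpus(corpus, transform))
```

Comparing learning curves or held-out perplexity on the original versus perturbed corpora then indicates whether a model finds natural structure any easier to acquire than the impossible variants.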
Passcode: 172549
By: Simon Bowes
Last updated: Wednesday, 12 February 2025