In this lecture I will review some of the most recent achievements in machine learning underpinned by learning representations in an unsupervised (self-supervised) paradigm. Such techniques are at the heart of the latest and best-performing language models (BERT, GPT-3), computer vision systems, and protein structure predictors (AlphaFold).
The common feature of these techniques is an attempt to build a fundamental (often causal) understanding of the problem, i.e. its world model, before a subsequent attempt is made to solve the given task. This is in stark contrast to the state-of-the-art techniques (including ML/AI) currently used in digital networks, where the algorithms are specifically crafted to solve the given task(s) and are trained from the outset to achieve this. Can digital networks perform better by first learning their own digital world models? I will present the case in favour of this view, and will not shy away from listing arguments against it.
You can read more and register here.
You can watch the previous events on the NG-CDI website.