When Frank Rosenblatt introduced the first artificial neuron in 1958, his inspiration came directly from biology. That biological grounding continued to shape the development of neural network architectures; convolutional neural networks, for example, were modeled after the structure of the cat's visual cortex. When John Jumper and Demis Hassabis presented AlphaFold2, their neural network model for predicting tertiary protein structure, the direction of inspiration had reversed. Their model was built not on biological intuition but on abstract mathematical constructs such as tensors, attention mechanisms, and the transformer architecture, originally developed for natural language translation. These tools from the world of artificial intelligence were used to address one of the central challenges in biology: protein folding.
In this talk, I will outline key aspects of AlphaFold2's inner workings, framing them within the broader historical shift from biologically inspired architectures to mathematically grounded paradigms, a transformation reflected even in the evolving terminology, as tensors replaced neurons along the way. This perspective will be set against the backdrop of recurrent cycles of enthusiasm in neural network research, each periodically constrained by the technological limitations of its time.
Coffee and tea will be served at the venue.