Everything Old is New Again
Lisp was one of the first high-level programming languages, and in its conception it was the language associated with "AI" systems. But this AI, in its primitive form in the 1980s and 1990s, was largely a sham: it consisted mainly of rule-based engines that tried in vain to model the world. It simply was not enough. But Lisp reigned nonetheless - because it was powerful. And though it is less popular now, it has influenced countless languages since, from JavaScript to Rust.
Now, in a post-GPT world, we are witnessing astounding progress in AI. But all of this started with one model - the neural network. We have left the classical, pre-neural-network world behind us. But how many of us really understand neural networks? How do they work? Why are they built on matrices as their foundation? What is going on in there?
This talk will demonstrate how Lisp's features can be used to build a neural network from scratch. I am particularly interested in the functional features, and how they might influence our idea of neural networks. For example, a closure in Lisp is a combination of a function and an attached set of variable bindings. Closures are active, have local state, and can be instantiated multiple times. Where could we use such a structure? In a network neuron, of course - which is also just a set of variables (weights and a bias) associated with an activation function. Lazy evaluation is the practice of not computing an expression's value until it is needed. This allows computations to be deferred to run time (and stopped when needed), which could create a network that changes as it runs. Macros let us transform expressions before they are compiled, giving control over when and how code is evaluated. What could such control allow a neural network to do? The talk aims to find the answers.
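As an illustrative sketch of the closure-as-neuron and lazy-evaluation ideas above (written in Python rather than Lisp for brevity; all names here are invented for this example, not taken from the talk):

```python
import math

def make_neuron(weights, bias):
    """Return a closure acting as a single neuron: the weights and
    bias live on as the closure's captured variable bindings."""
    def neuron(inputs):
        # weighted sum of inputs plus bias
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        # sigmoid activation
        return 1.0 / (1.0 + math.exp(-z))
    return neuron

def lazy_layer(neurons, inputs):
    """Lazy evaluation via a generator: each activation is only
    computed when it is actually pulled from the stream."""
    for n in neurons:
        yield n(inputs)

# Each call to make_neuron creates an independent instance
# with its own local state, just like instantiating an object.
n1 = make_neuron([0.5, -0.25], 0.1)
n2 = make_neuron([1.0, 1.0], -1.5)
outputs = list(lazy_layer([n1, n2], [1.0, 2.0]))
```

Each neuron is nothing but a function plus its captured bindings; a layer is a collection of such closures, and laziness means the network only does the work that is demanded of it.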
Functional programs are known for their elegance and beauty. They are often, wrongly, associated only with academics and theorising. My hope is that this experiment shows where these features can be efficient - and where they are not, why. When do we trade performance for clear abstractions? These are all questions I hope to answer, for myself and for others.
The hope is that attendees come away understanding a little more about neural networks and the history of AI, and with a greater appreciation of Lisp and functional programming.
Well-written proposal and an actually technical perspective compared to tens of other AI-related talks. I'm not sure how much FP experience the proposer has, but I'm leaning towards yes. Would like to do a mock presentation for this one.
This has been proposed as a BoF (by mistake?) but the proposal mentions a talk, so we will need to clear that up.
Thank you for submitting your proposal for IndiaFOSS 2025. Your submission was well-received and progressed to our final review stages.
Unfortunately, due to the high volume of excellent proposals this year, we were unable to select your talk for the final program. We appreciate the effort you put into your submission and encourage you to apply again for future events.