AIML@LU WS: AI & ML Technologies
When: 30 August, 9.30-15.30
Where: MA:6 Annexet*, LTH, Lund University, Sölvegatan 20, Lund, Sweden
9.30 Fika and mingle
10.30 Ongoing projects
Martin Karlsson, Lund University: Robot Programming by Demonstration Based on Machine Learning [VIDEO]
Abstract: Whereas humans would prefer to program on a high level of abstraction, for instance through natural language, robots require very detailed instructions, for instance time series of desired joint torques. In this research, we aim to meet the robots halfway by enabling programming by demonstration.
Marcus Klang, Lund University: Finding Things in Strings [VIDEO]
Abstract: Things such as organizations, persons, or locations are all around us, particularly in the news, forum posts, Facebook updates, and tweets. With named things, we can introduce background in news articles, summarize articles, build question-answering systems, and much more. However, it is challenging to find and link them, as they are often ambiguous. In this work, we aim to enrich the knowledge graph Wikidata with new relations and things found only in the articles of multilingual Wikipedia. The long-term goal is the development of a multilingual system that can answer any natural question and improve how we find new relevant information.
Najmeh Abiri, Lund University: Variational Autoencoders
Joakim Johnander, Linköping University: Deep Recurrent Neural Networks for Video Object Segmentation [VIDEO]
Abstract: Given a video with a target or object marked in the first frame, we aim to track and segment the target throughout the video. A fundamental challenge is to find an effective representation of the target and background appearance. In this work, we propose to tackle this challenge by integrating a probabilistic model as a differentiable and end-to-end trainable deep neural network module.
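As a rough illustration of what a probabilistic appearance model can look like, the sketch below fits per-pixel Gaussian likelihoods for target and background from a first-frame mask and combines them with Bayes' rule. This is a deliberately simple toy stand-in, not the authors' module (which is a trainable, end-to-end network component); all names and the synthetic data are illustrative.

```python
import numpy as np

def segment(frame, fg_mean, fg_var, bg_mean, bg_var, prior_fg=0.5):
    """Per-pixel posterior probability of 'target' under two Gaussian
    appearance models, combined with Bayes' rule."""
    def loglik(x, mean, var):
        return -0.5 * ((x - mean) ** 2 / var + np.log(2 * np.pi * var))
    log_fg = loglik(frame, fg_mean, fg_var) + np.log(prior_fg)
    log_bg = loglik(frame, bg_mean, bg_var) + np.log(1.0 - prior_fg)
    # sigmoid of the log-odds; clipped for numerical safety
    return 1.0 / (1.0 + np.exp(np.clip(log_bg - log_fg, -50.0, 50.0)))

# Fit the two Gaussians from a synthetic first-frame mask, then score the frame.
rng = np.random.default_rng(1)
frame = np.where(rng.random((8, 8)) < 0.3, 0.9, 0.1) + rng.normal(0.0, 0.02, (8, 8))
mask = frame > 0.5                      # stands in for the first-frame annotation
fg, bg = frame[mask], frame[~mask]
post = segment(frame, fg.mean(), fg.var() + 1e-6, bg.mean(), bg.var() + 1e-6)
print((post > 0.5).mean())              # fraction of pixels classified as target
```

In the actual work, such a model is differentiable in its parameters, so it can be trained jointly with the rest of the network rather than fitted in closed form as here.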
12.00 Lunch and mingle
13.00 Future trends and interesting examples
Michael Green, Desupervised: Bayesian Deep Probabilistic Programming: Are we there yet? [VIDEO]
Abstract: Not many would argue against the Bayesian paradigm being the most useful one in modeling problems where parameter estimates are inherently uncertain. Unfortunately, most interesting models, especially the ones we know from deep learning, have been very hard to fit in any reasonable amount of time. When dealing with more than 10 million parameters and more than 100,000 data points, Markov chain Monte Carlo just isn't a viable option. This is why almost every practitioner in deep learning defaults to maximum likelihood estimates via stochastic gradient descent: it is much faster. In this talk we'll explore a promising way of doing full Bayesian inference on large-scale models via stochastic black-box variational inference.
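To make the idea concrete, here is a minimal sketch of stochastic variational inference with the reparameterization trick on a toy conjugate model, where the exact posterior is known and the fit can be checked. The model, names, and hyperparameters are all illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: x_i ~ N(theta, 1), prior theta ~ N(0, 1).
# The exact posterior is N(sum(x)/(n+1), 1/(n+1)), so the fit can be verified.
data = rng.normal(loc=2.0, scale=1.0, size=50)
n = len(data)

def dlogjoint(theta):
    """Gradient of log p(data, theta) with respect to theta."""
    return np.sum(data - theta) - theta  # likelihood term + prior term

# Mean-field Gaussian q(theta) = N(mu, sigma^2), parameterized by (mu, log_sigma).
mu, log_sigma = 0.0, 0.0
lr = 0.01
for step in range(3000):
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=8)                 # Monte Carlo noise samples
    theta = mu + sigma * eps                 # reparameterization trick
    g = np.array([dlogjoint(t) for t in theta])
    grad_mu = g.mean()                       # stochastic gradient of the ELBO
    grad_log_sigma = (g * sigma * eps).mean() + 1.0  # chain rule + entropy term
    mu += lr * grad_mu                       # gradient ascent on the ELBO
    log_sigma += lr * grad_log_sigma

post_mean = data.sum() / (n + 1)             # analytic posterior mean
post_std = (1.0 / (n + 1)) ** 0.5            # analytic posterior std
print(mu, np.exp(log_sigma), post_mean, post_std)
```

The same recipe scales to deep models because each update needs only minibatch gradients of the log joint, which is exactly what stochastic gradient descent already computes.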
Erik Gärtner, Lund University: Intrinsic Motivation - Curiosity and learning for the sake of learning [VIDEO]
Abstract: Humans, as well as other animals, are curious beings that develop cognitive skills on their own, without the need for external goals or supervision. Inspired by this, how can we encourage AIs to learn and solve tasks by themselves? This talk presents the fascinating area of intrinsic reward in the context of reinforcement learning by showcasing recent articles and results.
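As one concrete and deliberately simple example of an intrinsic reward, the sketch below implements a count-based novelty bonus: rarely visited states pay a larger bonus, pushing the agent to explore. This is a common formulation in the literature, not necessarily one of the methods covered in the talk; the class name and parameters are illustrative.

```python
import math
from collections import defaultdict

class CountBonus:
    """Count-based intrinsic reward: rarely visited states yield a larger
    bonus, encouraging a reinforcement-learning agent to explore."""
    def __init__(self, beta=0.5):
        self.beta = beta                 # scales the size of the bonus
        self.counts = defaultdict(int)   # visitation count per state

    def reward(self, state):
        self.counts[state] += 1
        return self.beta / math.sqrt(self.counts[state])

bonus = CountBonus(beta=0.5)
r1 = bonus.reward((0, 0))   # first visit: full bonus
r2 = bonus.reward((0, 0))   # repeated visit: bonus shrinks
r3 = bonus.reward((1, 0))   # a novel state earns the full bonus again
print(r1, r2, r3)
```

In practice, the intrinsic bonus is added to the environment's extrinsic reward (if any), so exploration emerges even when external rewards are sparse or absent.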
14.30 Summary and conclusions
15.00 Fika and mingle
If you have any questions or suggestions, or would like to contribute to the program, please contact one of:
- Jacek Malec
- Mathias Ohlsson
- Kalle Åström
- Jonas [dot] Wisbrant [at] cs [dot] lth [dot] se (subject: AIML_WS_11_April_2019) (Jonas Wisbrant)
* Formerly known as 'Matteannexet'.