Here you can find all our available master projects.
Incremental learning techniques solve one task after the next without starting from scratch: each new task starts from the model learned on the previous one. A current limitation is that these techniques have hyperparameters, controlling for instance how fast the model can adapt to …
In incremental learning, when a new learning task arrives, a deep neural network is trained to map the input to the output space. As a result, at the end of learning, we have t different states of the learner, each starting from the …
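This sequential setup can be sketched in a few lines, here with a toy 1-D linear model standing in for the deep network (the task data, the `train_on_task` helper, and all hyperparameter values below are illustrative, not part of any specific project):

```python
import copy

def train_on_task(model, data, lr=0.1, epochs=50):
    """Gradient descent for a toy 1-D linear model y = w*x + b (illustrative)."""
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            err = model["w"] * x + model["b"] - y
            gw += 2 * err * x / len(data)
            gb += 2 * err / len(data)
        model["w"] -= lr * gw
        model["b"] -= lr * gb
    return model

# A sequence of t tasks; each new task starts from the previous model state.
tasks = [
    [(0.0, 1.0), (1.0, 3.0)],  # task 1: points from y = 2x + 1
    [(0.0, 0.0), (1.0, 1.0)],  # task 2: points from y = x
]
model = {"w": 0.0, "b": 0.0}
states = []  # the t learner states mentioned above, one snapshot per task
for data in tasks:
    model = train_on_task(model, data)
    states.append(copy.deepcopy(model))

print(len(states))  # → 2
```

Each entry of `states` is the learner's state after one task, which is exactly the sequence of t states that incremental-learning methods reason about.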
In incremental learning, the learner is presented with a sequence of t learning tasks. These tasks are typically sampled randomly and provided in an arbitrary order, which does not align with the natural learning progression observed in lifelong human learners. In human learning, we typically …
Transfer is a ubiquitous concept in machine learning. The most common form is transfer learning from big pretrained models (e.g. by finetuning or zero-shot predictions), but it is also present in multi-task learning, meta-learning, and continual learning. Still, we have very little understanding of …
On-device learning / federated learning on limited-resource devices
Addressing the following questions from an NXP perspective:
- What is the latest state of the art for embedded on-device learning?
- How does on-device training differ from a regular (desktop/cloud) backprop setup?
- Are approaches using one/few-shot training efficient and competitive?
- What deployment toolchains exist for …
Domain generalization over radar sensors, configurations and datasets
- Explore the extent of domain gaps in radar DNNs over different sensors and configurations;
- Investigate & improve state-of-the-art sensor domain generalization techniques for radar-based ADAS;
- Leverage data-efficient sensor domain gap mitigation, e.g. via active learning.
Contact the TU/e supervisor (Joaquin …
Automatic joint design and optimization of neural networks
Neural networks can be made more efficient and more accurate through a wide variety of techniques (Neural Architecture Search, Quantization, Pruning, …), but it is an open question how and when to leverage these techniques in …
Design and automated optimization of DNNs for radar-based ADAS (advanced driver assistance systems)
- Improving state-of-the-art approaches on object detection, classification, and segmentation in radar spectrum and/or 'point cloud' data with neural network architectures;
- Leveraging radar-domain specifics to improve reliability or efficiency of the …
Neural Architecture Search (NAS) is an attractive methodology to design and optimize good and efficient neural networks, but it is expensive for large-scale models and/or high-bandwidth datasets. Enabling NAS for a wide variety of domains requires exploring, improving, and inventing, e.g.:
- Zero-cost proxies …
Motivation. Recently, the vision transformer architecture (ViT) has excelled at many tasks in computer vision, such as image recognition [1], image segmentation [2], image retrieval [3], image generation [4], visual object tracking [5], and object detection [6]. However, all these different sub-tasks require domain expertise, such as the type, …
The success (and the cost) of a machine learning product or project depends to a great extent on the quality of the available data. If the data has significant flaws, it may make a project much more expensive and much more time-consuming than …
Photo-chemistry is a technique where input chemicals are first ionised and new molecules are then synthesised through interactions with photons. The exact amounts of the input chemicals are very important for such reactions to run effectively. Currently, Bayesian Optimization is used to find the optimal mixtures, …
There is an infinite number of ways to design a machine learning system, and many careful decisions need to be made based on prior experience. The field of automated machine learning (AutoML) aims to make these decisions in a data-driven, objective, and automated way. …
There is an infinite number of ways to design a machine learning system, and many careful decisions need to be made based on prior experience. The field of automated machine learning (AutoML) aims to make these decisions in a data-driven, objective, and automated way.
In …
There are an infinite number of ways to design a machine learning system, and many careful decisions need to be made based on prior experience. The field of automated machine learning (AutoML) aims to make these decisions in a data-driven, objective, and automated way.
There …
Humans are very efficient learners because we can leverage prior experience when learning new tasks. For instance, a child first learns how to walk, and then efficiently learns how to run (obviously without starting from scratch).
Several areas of machine learning aim to …
Bayesian Optimization is often used in automated machine learning to predict which models to evaluate next. It works by learning a 'surrogate model', trained on the previously tried models, that predicts which models are promising to try next. In all current AutoML …
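The surrogate-model loop can be sketched as below. This is a minimal illustration, not any project's actual method: a simple kernel-weighted average stands in for the usual Gaussian-process surrogate, a lower-confidence-bound rule serves as the acquisition function, and `objective` is a toy stand-in for "train a model with hyperparameter x and return its validation error" (all names and constants are illustrative):

```python
import math
import random

def objective(x):
    """Toy stand-in for: train a model with hyperparameter x, return its error."""
    return (x - 0.3) ** 2 + 0.05 * math.sin(20 * x)

def acquisition(x, observed, length=0.1, kappa=0.5):
    """Lower-confidence bound from a kernel-weighted surrogate (lower = better)."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * length ** 2)) for xi, _ in observed]
    total = sum(weights)
    prior = sum(y for _, y in observed) / len(observed)
    # Surrogate mean: weighted average of past scores, falling back to the prior.
    mean = (sum(w * y for w, (_, y) in zip(weights, observed)) + 1e-9 * prior) / (total + 1e-9)
    uncertainty = 1.0 / (1.0 + total)  # crude stand-in for the GP posterior variance
    return mean - kappa * uncertainty

random.seed(0)
# Initial design: a few configurations evaluated up front.
observed = [(x, objective(x)) for x in (0.0, 0.5, 1.0)]
for _ in range(10):
    # Score random candidates with the surrogate, evaluate the most promising one.
    candidates = [random.random() for _ in range(200)]
    x_next = min(candidates, key=lambda x: acquisition(x, observed))
    observed.append((x_next, objective(x_next)))

best_x, best_y = min(observed, key=lambda p: p[1])
print(best_x, best_y)
```

The essential pattern is the loop: fit the surrogate on everything evaluated so far, use it (plus an exploration bonus) to rank untried configurations, evaluate the top-ranked one, and repeat.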
No currently assigned projects.
No finished projects are available.