Here you can find all our available master projects.
Designing 3D-printable materials has so far been a trial-and-error process that depends on human knowledge and effort, and is hence time-consuming and wasteful. To predict certain properties of 3DCP, material scientists have used modelling and simulations for decades. While helpful in many ways, models mostly require …
Continual reinforcement learning (CRL) stands as a pivotal paradigm in the AI landscape, fostering the development of adaptive and lifelong learning agents. This project delves into the intersection of CRL and natural language processing within the immersive realm of 3D simulation environments. The integration …
The project delves into the realm of Natural Language Processing (NLP) to analyze, understand, and derive insights from multi-party conversations. This study focuses on unraveling the distinct characteristics, complexities, and patterns within conversations involving more than two participants, aiming to enhance the comprehension and …
A popular paradigm in robotic learning is to train a policy from scratch for every new robot. This is not only inefficient but also often impractical for complex robots. The project revolves around the exploration and advancement of techniques for transferring policies between different …
The project aims to explore the utilization of sophisticated language models in the domain of text-based games. This endeavor seeks to harness the capabilities of large language models, such as GPT (Generative Pre-trained Transformer), in the context of interactive narratives, text adventures, and other …
The project is a pioneering initiative that combines Natural Language Processing (NLP) and Reinforcement Learning (RL) methodologies to create intelligent agents capable of understanding natural language instructions and participating in playing card games. This project aims to develop AI-driven agents that not only comprehend …
Multi-Agent Reinforcement Learning (MARL) is a field in artificial intelligence where multiple agents learn to make decisions in an environment through reinforcement learning. In the context of cooperative tasks, it involves agents working together to achieve common goals, sharing information and coordinating their actions …
Background: Melanoma is a form of skin cancer that originates in melanin-producing cells known as melanocytes. While other skin cancer types occur more frequently, melanoma is the most dangerous due to the high likelihood of metastasis if not treated early. The incidence rate of melanoma has …
For an infrastructure asset, an effective maintenance strategy requires careful planning of both inspection and maintenance activities. Inspection aims to detect any damage and identify the underlying condition of the asset so that future inspection and maintenance can be scheduled. Maintenance, in contrast, is performed …
The performance of various machine learning models depends highly on the choice of the hyperparameters used in their training, a task that becomes even more difficult with the emergence of complex deep learning models with a multitude of parameters. Recent advances in hyperparameter optimization enable automatic configuration …
Incremental learning techniques can solve one task after the next without starting from scratch, each time starting from the model learned on the previous task. A current limitation is that these techniques have hyperparameters, controlling for instance how fast the model can adapt to …
In incremental learning, when a new learning task arrives, a deep neural network is trained to map the input to the output space. As a result, at the end of the learning, we have t different states of the learner, each starting from the …
In incremental learning, the learner is presented with a sequence of t learning tasks. These tasks are typically sampled randomly and provided in an arbitrary order, which does not align with the natural learning progression observed in lifelong human learners. In human learning, we typically …
Transfer is a ubiquitous concept in machine learning. The most common form is transfer learning from big pretrained models (e.g. by finetuning or zero-shot predictions), but it is also present in multi-task learning, meta-learning, and continual learning. Still, we have very little understanding of …
On-device learning/federated learning on limited-resource devices. Addressing the following questions from the NXP perspective: What is the latest state of the art for embedded on-device learning? How does on-device training differ from a regular (desktop/cloud) backprop setup? Are approaches using one/few-shot training efficient and competitive? What deployment toolchains exist for …
Domain generalization over radar sensors, configurations and datasets. Explore the extent of domain gaps in radar DNNs over different sensors and configurations; investigate and improve state-of-the-art sensor domain generalization techniques for radar-based ADAS; leverage data-efficient sensor domain gap mitigation, e.g. via active learning. Contact the TU/e supervisor (Joaquin …
Automatic joint design and optimization of neural networks. Neural networks can be made more efficient and more accurate through a wide variety of techniques (Neural Architecture Search, Quantization, Pruning, …), but it is an open question how and when to leverage these techniques in …
Design and automated optimization of DNNs for radar-based ADAS (advanced driver assistance systems). Improving state-of-the-art approaches on object detection, classification, and segmentation in radar spectrum and/or 'point cloud' data with neural network architectures; leveraging radar-domain specifics to improve reliability or efficiency of the …
Neural Architecture Search (NAS) is an attractive methodology to design and optimize good and efficient neural networks, but it is expensive for large-scale models and/or high-bandwidth datasets. Enabling NAS for a wide variety of domains requires exploring, improving and inventing e.g.: zero-cost proxies …
I plan to offer a few assignments on counterfactual explanations: counterfactual explanations on evolving data; feasibility, actionability and personalization of counterfactual explanations; counterfactual explanations for spotting unwanted biases in predictive model behaviour; value alignment for counterfactual explanations (in collaboration with Emily Sullivan); counterfactual explanations for behaviour change.
The goal of this project would be to come up with a transformer or another smart solution that (in a one-sentence, oversimplified description) finds mappings between an image of the current patient condition, possible surgery actions, and a preferred outcome image. A more detailed …
In recent years, imprecise-probabilistic choice functions have gained growing interest, primarily from a theoretical point of view. These versatile and expressive uncertainty models have demonstrated their capacity to represent decision-making scenarios that extend beyond simple pairwise comparisons of options, accommodating situations of indecision as …
The work on generative random forests has started, but there is a long way to go before they are practical. This project aims at studying the drawbacks of such models and improving them with better ensemble ideas, gradient boosting, and/or other techniques already employed with decision …
This project aims to compare two different types of generative models, tractable probabilistic circuits and Bayesian networks of bounded tree-width, and potentially to build tools to translate between them (when possible). Probabilistic circuits have been recently applied to a number of tasks, but there is …
This internal project aims at developing and testing (for example in classification tasks) a generative model based on probabilistic graphical models for domains with continuous and categorical variables. We want to learn both the graph structure and parameters of such models while constraining their …
An arguably major difficulty for improving causal inferences is the lack of availability of data. While observational data are abundant, interventional data are not. This internal project aims at creating software tools to generate data that can be useful for testing causal learning approaches. …
This internal project aims at designing and developing a usable software package for learning and reasoning with probabilistic circuits. Probabilistic circuits are models which can represent complicated mixture models, and their computation circuit can be wide and deep. Because they have a structure which …
The design of collective intelligence, i.e. the ability of a group of simple agents to collectively cooperate towards a unifying goal, is a growing area of machine learning research aimed at solving complex tasks through emergent computation [1, 2]. The interest in these techniques …
Whittle sum-product networks [1] model the joint distribution of multivariate time series by leveraging the Whittle approximation, which casts the likelihood in the frequency domain, and placing a complex-valued sum-product network over the frequencies. The conditional independence relations among the time series can then be …
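For orientation, a minimal sketch of the Whittle approximation this model builds on (the standard univariate form; the multivariate, SPN-based version in [1] generalizes it): the log-likelihood of a stationary series x_1, …, x_n with spectral density f_θ is approximated in the frequency domain as

```latex
\log L(\theta) \approx -\sum_{k} \left[ \log f_\theta(\lambda_k) + \frac{I_n(\lambda_k)}{f_\theta(\lambda_k)} \right],
\qquad
I_n(\lambda_k) = \frac{1}{2\pi n} \left| \sum_{t=1}^{n} x_t \, e^{-i t \lambda_k} \right|^2,
```

where the λ_k = 2πk/n are the Fourier frequencies and I_n is the periodogram.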
Knowledge graph embeddings are an important area of research inside machine learning and have become a necessity due to the importance of reasoning about objects, their attributes and relations in large graphs. Several approaches have been explored and can be …
Safety is a core challenge for the deployment of reinforcement learning (RL) in real-world applications [1]. In applications such as recommender systems, this means the agent should respect budget constraints [2]. In this case, the RL agent must compute a policy conditioned on the …
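One common formalization of such budget constraints is a constrained MDP (a hedged sketch of the standard objective, not necessarily the exact setting of [2]):

```latex
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} \, r(s_t, a_t) \right]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} \, c(s_t, a_t) \right] \le B,
```

where r is the reward, c a per-step cost (e.g., budget spent on an action), and B the total budget.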
Sample complexity is one of the core challenges in reinforcement learning (RL) [1]. An RL agent often needs orders of magnitude more data than supervised learning methods to achieve a reasonable performance. This clashes with problems that have safety requirements, where the agent should minimize the …
Reinforcement Learning (RL) deals with problems that can be modeled as a Markov decision process (MDP) where the transition function is unknown. When an arbitrary policy is already in execution and the experiences with the environment are recorded in a dataset, an offline RL …
Nowadays, most software systems are configurable, meaning that we can tailor the settings to the specific needs of each user. Furthermore, we may already have some data available indicating each user's preferences and the software's performance under each configuration. This way, we can compute …
See PDF
See PDF
See PDF
See PDF. As attachment, see also https://wwwis.win.tue.nl/~wouter/MSc/Niels.pdf
See PDF
TL;DR: In this project, you will focus on developing a model architecture that can efficiently simulate fluid dynamics, while taking into account the vast amount of domain knowledge in the field in the form of symmetries, as well as the modeling of stochastic effects …
There are numerous methods for out-of-distribution (OOD) detection and related problems in deep learning; see e.g. [1] for an overview. Many of these, however, only work well in highly fine-tuned settings and are not well understood in a broader context. In this project, you would …
In order to get some insight into the inner workings of deep neural network classifiers, a method that enables the interpretation of learned features would be very helpful. This master project is loosely based on the approach presented in [1], where a GAN is …
Deep clustering is a well-researched field with promising approaches. Traditional nonconvex clustering methods require the definition of a kernel matrix, whose parameters vastly influence the result, and are hence difficult to specify. In turn, the promise of deep clustering is that a feature transformation …
TL;DR: In this project, you will develop a framework for integrating domain knowledge into generative models for cellular dynamics simulations, and apply the method to (synthetic) data of e.g. cancer cell migration. Project description: Studying the variety of mechanisms through which cells migrate and interact …
Thermonuclear fusion holds the promise of generating clean energy on a large scale. One promising approach for controlled fusion power generation is the tokamak, a torus-shaped device that magnetically confines the fusion plasma in its vessel. Currently, not all physical processes in these plasmas …
In recent years, the urgency of addressing the climate crisis, resulting from escalating greenhouse gas emissions, has increased. A potential solution for the increasing amount of CO2 in the air is carbon capture. Zeolites are potential candidate materials for carbon capture, as they are …
The black-box nature of neural networks prohibits their application in impactful areas, such as health care or generally anything that would have consequences in the real world. In response to this, the field of Explainable AI (XAI) emerged. State-of-the-art methods in XAI define a …
In order to metastasize, cancer cells need to move. Estimating the ability of cells to move, i.e. their dynamics, or so-called migration potential, is a promising new indicator for cancer patient prognosis (overall survival) and response to therapy. However, predicting the migration potential from …
Soft, porous metamaterials are materials that consist of a flexible base material (e.g., rubber-like material) with pores of a carefully designed shape in it. Under external loading (a pressure applied on the outside surface, mechanical constraints, or other interactions), they deform which in turn …
While deep learning has become extremely important in industry and society, neural networks are often considered ‘black boxes’, i.e., it is often believed that it is impossible to understand how neural networks really work. However, there are a lot of aspects we can and …
Recent work has shown that neural networks, such as fully connected networks and CNNs, learn to distinguish between classes from broader to finer distinctions between those classes [1,2] (see Fig. 1). [Figure 1: Illustration of the evolution of learning from broader to finer distinctions between …]
Project description: In the dynamic landscape of mobile robotics, object detection remains a foundational challenge, critical for enabling machines to interact intelligently with their surroundings. At Avular, a pioneering mobile robotics company in Eindhoven, we are excited to explore novel and innovative approaches in this …
Project description: Generative AI has become one of the leading approaches to (conditional) molecule generation. Just as Large Language Models can learn (to some degree) the rules governing natural language, could Large Chemistry Models learn the rules governing atoms (quantum chemistry)? This is the leading research question of …
Project description: Diffusion Models are deep-learning models that achieve state-of-the-art performance in many image synthesis tasks. They are typically parameterized with UNets and consist of billions of weights. Expressing their weights in float32 leads to models that cannot be easily deployed on edge devices (e.g., …
Project description: Large Language Models (LLMs) are well known for knowledge acquisition from large-scale corpora and for achieving SOTA performance on many NLP tasks. However, they can suffer from various issues, such as hallucinations, false references, and made-up facts. On the other hand, Knowledge Graphs (KGs) can …
Project description: Large Language Models (LLMs) are deep-learning models that achieve state-of-the-art performance in many NLP tasks. They typically consist of billions of weights. As a result, expressing the weights in float32 leads to models of at least 1 GB in size. Such large models cannot be easily …
Offline Reinforcement Learning (RL) deals with problems where simulation or online interaction is impractical, costly, and/or dangerous, making it possible to automate a wide range of applications from healthcare and education to finance and robotics. However, learning new policies from offline data suffers from distributional shifts …
One of the main concerns in recent AI research is that most data-driven approaches preserve the bias or unfairness present in the collected (offline) data in the resulting models, which could lead to harmful social and ethical effects in society. Fairness-aware machine learning has …
Finding pairs of locations that present interesting correlations or similarities (e.g., in their weather, development rate, or population statistics through time) can provide useful insights in different contexts/domains. For example, if a country observes that two different cities have a high similarity on the …
Synopses are extensively used for summarizing high-frequency streaming data, e.g., input from sensors, network packets, and financial transactions. Some examples include Count-Min sketches, Bloom filters, AMS sketches, samples, and histograms. This project will focus on designing, developing, and evaluating synopses for the discovery of heavy …
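As a concrete illustration of one of the synopses named above, a minimal (untuned) Count-Min sketch in Python; the width, depth, and hashing scheme here are illustrative choices, not a prescription:

```python
import numpy as np

class CountMinSketch:
    """Approximate stream frequencies in O(depth * width) memory.

    Estimates only overcount: true_count(x) <= estimate(x) w.h.p.
    """
    def __init__(self, width=2048, depth=5, seed=0):
        rng = np.random.default_rng(seed)
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.seeds = rng.integers(0, 2**31 - 1, size=depth)  # one hash per row

    def _rows(self, item):
        for d, s in enumerate(self.seeds):
            yield d, hash((int(s), item)) % self.width

    def add(self, item, count=1):
        for d, col in self._rows(item):
            self.table[d, col] += count

    def estimate(self, item):
        # Taking the minimum over rows bounds the overcount from collisions.
        return min(self.table[d, col] for d, col in self._rows(item))

# Toy usage: estimate heavy hitters in a stream.
cms = CountMinSketch()
for x in ["a"] * 1000 + ["b"] * 10 + ["c"] * 500:
    cms.add(x)
print(cms.estimate("a"), cms.estimate("b"))  # ~1000, ~10 (may overcount)
```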
Correlations are extensively used in all data-intensive disciplines, to identify relations between the data (e.g., relations between stocks, or between medical conditions and genetic factors). Most algorithms consider one-dimensional time series. For example, in the context of finance, the time series might represent the …
Correlations are extensively used in all data-intensive disciplines, to identify relations between the data (e.g., relations between stocks, or between medical conditions and genetic factors). The 'industry-standard' correlations are pairwise correlations, i.e., correlations between two variables. Multivariate correlations are correlations between three or more …
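As one concrete example of a multivariate dependence measure (total correlation; just one of several possible notions, not necessarily the one this project targets):

```latex
C(X_1, \ldots, X_n) = \sum_{i=1}^{n} H(X_i) - H(X_1, \ldots, X_n),
```

which is zero exactly when the variables are jointly independent and grows with the amount of shared information; notably, all pairwise correlations can be near zero while C is large.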
The relational algebra is used under the hood in every commercial relational database. Often, however, data is not relational. Indeed, data scientists often deal with matrices instead of relations. A counterpart of the relational algebra for the matrix data model, called MATLANG, has been introduced in …
Company: Marel. Location: Boxmeer. Background: Work order descriptions provided by engineers are of utmost importance in various industries, as they serve as essential documentation for maintenance, repairs, and other technical tasks. These descriptions provide detailed information about the required work, including the scope, specifications, and any …
Company: Marel. Location: Boxmeer. Background: User manuals and service manuals play a crucial role in guiding individuals on how to effectively and safely operate and maintain machinery. They serve as invaluable resources, providing step-by-step instructions, troubleshooting tips, and essential information. However, one of the challenges that …
Company: Marel. Location: Boxmeer. Background: Installed base data refers to information about products or services that have been sold and installed at customer sites. This data is typically stored in a relational data warehouse, which can make it difficult to access and integrate with other data …
Distribution shifts between a source and a target domain have been a prominent problem in machine learning for several decades [1-3]. Covariate shift (as well as its assumption) is the most commonly used and studied type of distribution shift in theory and practice [1-3]. Handling …
It is widely known that training deep neural networks on huge datasets improves learning. However, huge datasets and deep neural networks can no longer be trained on a single machine. One common solution is to train using distributed systems. In addition to traditional data-centers, …
General: An internship at Accenture about prompt engineering for LLMs. Requirements: From our students we expect the following: high independence (including proposing own ideas); good understanding of mathematics (algebra, calculus, statistics, probability theory); good programming skills (Python + ML/DL libraries, preferably PyTorch). Thesis template: Please take a look at this …
Proving a theorem is similar to programming: in both cases the solution is a sequence of precise instructions to obtain the output/theorem given the input/assumptions. In fact, there are programming languages such as Lean, Coq, and Isabelle that can be used to prove theorems. …
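To make the analogy concrete, a tiny Lean 4 example (core library only) in which each proof is literally a small program of instructions:

```lean
-- `Nat` addition recurses on the second argument, so `n + 0 = n`
-- holds by definition:
theorem add_zero' (n : Nat) : n + 0 = n := rfl

-- `0 + n = n` is not definitional; it needs a short "program"
-- of proof steps (induction plus rewriting):
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```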
--update--: This project is now taken by Davis Eisaks. The goal of this project is to study how to train a machine learning model in a gossip-based approach, where if two devices (e.g. smartwatches) pass each other in the physical space, they could exchange part of …
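A minimal simulation of the gossip idea (pure NumPy; the random-encounter model and plain parameter averaging are illustrative assumptions, not the project's design):

```python
import numpy as np

def gossip_step(w_a, w_b):
    """When two devices meet, they exchange and average their parameters."""
    avg = 0.5 * (w_a + w_b)
    return avg.copy(), avg.copy()

rng = np.random.default_rng(0)
weights = [rng.normal(size=4) for _ in range(10)]  # 10 devices, local models

for _ in range(200):                               # random physical encounters
    i, j = rng.choice(10, size=2, replace=False)
    weights[i], weights[j] = gossip_step(weights[i], weights[j])

# Per-coordinate spread across devices shrinks toward 0: consensus.
print(np.std(np.stack(weights), axis=0))
```

In the actual project, the exchanged quantity would be (parts of) a trained model rather than raw vectors, and each device would keep learning from local data between encounters.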
Node-based Bayesian neural networks (BNNs) assign latent noise variables to the hidden nodes of a neural network. By restricting inference to the node-based latent variables, node stochasticity greatly reduces the dimension of the posterior. This allows for inference of BNNs that are cheap to compute and to communicate, …
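Schematically (a hedged sketch of the general idea, not the exact parameterization used in the node-based BNN literature), each hidden layer is perturbed by low-dimensional multiplicative noise:

```latex
h^{(\ell)} = \sigma\!\left( W^{(\ell)} h^{(\ell-1)} \right) \circ z^{(\ell)},
\qquad z^{(\ell)} \sim q_\phi\!\left( z^{(\ell)} \right),
```

so inference targets the per-node variables z (one per hidden unit) instead of the full weight matrices W, which is what makes the posterior cheap to represent and communicate.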
Motivation. Recently, the vision transformer architecture (ViT) has excelled at many tasks in computer vision, such as image recognition [1], image segmentation [2], image retrieval [3], image generation [4], visual object tracking [5] and object detection [6]. However, all these different sub-tasks require domain expertise, such as the type, …
Knowledge Graphs (KGs) are an emerging way of modeling data and knowledge, as opposed to the traditional tabular DBMS. Not only do they allow for structured querying, they also provide possibilities for semantic search and a new way of looking at “related entities”. Building a …
ASML has recently re-confirmed these two projects; a couple more will likely be confirmed in the coming weeks. XAI in Exceptional Model Mining (--- update --- this project is taken by Yasemin Yasarol): In the semiconductor industry there are different, diverse and unique failure modes that impact …
--- update --- These projects are no longer available. Theonymfi Anogeianaki will work on FairML. 1. Bayesian inference: We have been doing ‘traditional’ machine learning for years now at Floryn but have never investigated Bayesian modeling. We currently make use of probability measures that come from our (frequentist) machine learning …
The success (and the cost) of a machine learning product or project depends to a great extent on the quality of the available data. If the data has significant flaws, it may make a project much more expensive and much more time-consuming than …
This internal project aims at studying and devising new bounds for the computational complexity of inferences in probabilistic circuits and their robust/credal counterpart, including approximation results and fixed-parameter tractability. It requires mathematical interest and good knowledge of theory of computation. This is a theoretical …
This internal project aims at implementing a new approach to learning the structure and parameters of Bayesian networks. It is mostly an implementation project, as the novel ideas are already established (but never published, so the approach is novel). It requires high expertise in …
This is a wildcard for projects in (knowledge) graph data management. If you took EDS (Engineering Data Systems) and liked what we did there, we offer research+engineering projects in the scope of our database engine AvantGraph (AvantGraph.io). Topics include (but are not limited to): graph query …
Photo-chemistry is a technique where input chemicals are first ionised and then new molecules are synthesised through interactions with photons. The exact amounts of the input chemicals are very important for making such reactions run effectively. Currently, Bayesian Optimization is used to find the optimal mixtures, …
There is an infinite number of ways to design a machine learning system, and many careful decisions need to be made based on prior experience. The field of automated machine learning (AutoML) aims to make these decisions in a data-driven, objective, and automated way. …
There is an infinite number of ways to design a machine learning system, and many careful decisions need to be made based on prior experience. The field of automated machine learning (AutoML) aims to make these decisions in a data-driven, objective, and automated way. In …
There are an infinite number of ways to design a machine learning system, and many careful decisions need to be made based on prior experience. The field of automated machine learning (AutoML) aims to make these decisions in a data-driven, objective, and automated way. There …
Humans are very efficient learners because we can very efficiently leverage prior experience when learning new tasks. For instance, a child first learns how to walk, and then efficiently learns how to run (obviously without starting from scratch). Several areas of machine learning aim to …
Bayesian Optimization is often used in Automated machine learning to predict which models to evaluate next. It works by learning a 'surrogate model' that is trained on previous models tried and can predict which models are interesting to try next. In all current AutoML …
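A minimal sketch of that surrogate loop (illustrative throughout: one hyperparameter, a toy objective standing in for "train and validate a model", and a random-forest surrogate with a crude per-tree uncertainty estimate):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def objective(lr):
    # Stand-in for "train a model with this learning rate, return val. error".
    return (np.log10(lr) + 2.0) ** 2 + np.random.normal(scale=0.05)

history_x, history_y = [], []
candidates = 10.0 ** np.random.uniform(-5, 0, size=(1000, 1))

# Bootstrap the surrogate with a few random evaluations.
for lr in [1e-4, 1e-2, 1e-1]:
    history_x.append([lr]); history_y.append(objective(lr))

for _ in range(20):
    surrogate = RandomForestRegressor(n_estimators=50).fit(history_x, history_y)
    # Per-tree spread acts as a rough uncertainty estimate; pick the
    # candidate with the best "optimistic" (lower confidence bound) score.
    preds = np.stack([t.predict(candidates) for t in surrogate.estimators_])
    lcb = preds.mean(axis=0) - preds.std(axis=0)
    best = candidates[np.argmin(lcb)]
    history_x.append(list(best)); history_y.append(objective(best[0]))

print("best lr found:", history_x[int(np.argmin(history_y))])
```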
Autonomous vehicles and robots need 3D information such as depth and pose to traverse paths safely and correctly. Classical methods utilize hand-crafted features that can potentially fail in challenging scenarios, such as those with low texture [1]. Although neural networks can be trained on …
Schema languages are critical for data system usability, both in terms of human understanding and in terms of system performance [0]. The property graph data model is part of the upcoming ISO standards around graph data management [4]. Developing a standard schema language for …
Context of the work: Deep Learning (DL) is a very important machine learning area nowadays and it has proven to be a successful tool for all machine learning paradigms, i.e., supervised learning, unsupervised learning, and reinforcement learning. Still, the scalability of DL models is …
Nowadays, data changes very rapidly. Every day new trends appear on social media with millions of images. New topics rapidly emerge from the huge number of videos uploaded on YouTube. Attention to continual lifelong learning has recently increased to cope with this rapid data …
With the rapid development of multi-media social network platforms, e.g., Instagram, TikTok, etc., more and more content is generated in a multi-modal format rather than pure text. This brings new challenges for researchers to analyze the user-generated content and solve some concrete problems …
Deep neural networks (DNN) deployed in the real world are frequently exposed to non-stationary data distributions and required to sequentially learn multiple tasks. This requires that DNNs acquire new knowledge while retaining previously obtained knowledge. However, continual learning in DNNs, in which networks are …
Every second, around 10^7 to 10^8 bits of information reach the human visual system (HVS) [IK01]. Because biological hardware has limited computational capacity, complete processing of massive sensory information would be impossible. The HVS has therefore developed two mechanisms, foveation and fixation, that preserve perceptual performance …
Self-supervised learning [1, 2] solves pretext prediction tasks that do not require annotations in order to learn feature representations. Recent empirical research has demonstrated that deeper and wider models benefit more from task-agnostic use of unlabeled data than their smaller counterparts; i.e., smaller models …
It is well known that processing complex analytical queries over large graph datasets introduces a major pain point: runtime memory consumption. To address this, a method based on factorized query processing (FQP) has recently been proposed. It has been shown that this method …
There exists a wide variety of benchmarks for graph databases, both synthetic and real-world-based. However, one important problem with the current state of the art in graph database benchmarking is that all of the existing benchmarks are inherently based on workloads from relational databases, …
Introduction: The Observe, Orient, Decide and Act (OODA) loop [1] shapes most modern military warfare doctrines. Typically, after gathering sensor and intelligence data in the Observe step, a common tactical operating picture of the monitored aerial, maritime and/or ground scenario is built and shared among …
Since DRAM is still relatively expensive and contemporary graph database workloads operate with billion-node-scale graphs, graph database engines still have to rely on secondary storage for query processing. In this project, we explore how novel techniques such as variable page sizes and pointer swizzling can …
Influence blocking and fake news mitigation have been main research directions for the network science and data mining research communities in the past few years. Several methods have been proposed in this direction [1]. However, none of the existing solutions offers feature-blind …
In this project, we will analyze social media datasets to answer interesting questions about human behavior. We aim to study biases using social media data and propose fair solutions. The project also aims to model human behavior on social media (depending on the topic). This …
In the past 10-15 years, a massive amount of social networking data has been released publicly and analyzed to better understand complex networks and their different applications. However, ensuring the privacy of the released data has been a primary concern. Most of the graph …
In real-world networks, nodes are organized into communities, and the community size follows a power-law distribution. In simple words, there are a few communities of bigger size and many communities of small size. Several methods have been proposed to identify communities using structural properties of …
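In symbols, the claim above is that the community-size distribution behaves as

```latex
P(s) \propto s^{-\gamma},
```

for some network-dependent exponent γ: a few large communities, many small ones.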
Deep neural networks (DNNs) are achieving superior performance in perception tasks; however, they are still riddled with fundamental shortcomings. There remain core questions about what the network is truly learning. DNNs have been shown to rely on local texture information to make decisions, …
Context: The financial sector is a tightly regulated environment. All models used in the financial sector are studied under the microscope of developers, validators, regulators, and eventually the end users (the clients) before they can be deployed and used. To assess whether a customer should be …
Reinforcement learning (RL) is a general learning, predicting, and decision-making paradigm that applies broadly in many disciplines, including science, engineering, and the humanities. Conventionally, classical RL approaches have seen prominent successes in many closed-world problems, such as Atari games, AlphaGo, and robotics. However, dealing …
The goal of this thesis is to develop techniques to generate knowledge graph(s) (KG) by: recognizing and extracting entities and predicates from selected structural parts of transcriptions of our customer contacts - via chat and call - with our Customer Services center; mapping the …
At KPN we collect the transcriptions of our customer contacts with our Customer Services centers executed via the chat and call channels. The structure of such dialogues is made up of a number of classifiable parts, some of which always occur, for example greetings, customer …
Customers reach out to KPN for various purposes, and one of the easiest ways for them is to call customer service. There is already a process for analyzing customer calls to classify the correct cause of the call, but there are some challenges in terms of …
Neural networks typically consist of a sequence of well-defined computational blocks that are executed one after the other to obtain an inference for an input image. After the neural network has been trained, a static inference graph comprising these computational blocks is executed for …
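For contrast with the static graph described above, a minimal example of dynamic inference (an early-exit network; the two-block architecture and the confidence threshold are illustrative assumptions):

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Easy inputs exit after the first block; hard ones pay for the second."""
    def __init__(self, dim=32, classes=10, threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(dim, 64), nn.ReLU())
        self.exit1 = nn.Linear(64, classes)    # cheap auxiliary head
        self.block2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.exit2 = nn.Linear(64, classes)    # full head
        self.threshold = threshold

    def forward(self, x):
        h = self.block1(x)
        logits1 = self.exit1(h)
        confidence = logits1.softmax(dim=-1).amax(dim=-1)
        if bool((confidence > self.threshold).all()):
            return logits1                     # confident: skip block2 entirely
        return self.exit2(self.block2(h))

net = EarlyExitNet()
print(net(torch.randn(4, 32)).shape)           # torch.Size([4, 10])
```

(Exiting per batch rather than per sample keeps the sketch short; a real system would route individual inputs.)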
Wikidata is an open, collaboratively built knowledge base. In the Wikidata community, groups of editors who share an interest in specific topics form WikiProjects. As part of their regular work, members of WikiProjects would like to regularly test the conformance of entity data in Wikidata against schemas for entity classes. …
In the collaboratively built knowledge base Wikidata, some editors would appreciate suggestions on how to improve the completeness of items. Currently, some community members use an existing tool, Recoin, described in this paper, to get suggestions of relevant properties to use to contribute additional statements. This process could …
The JSON data format is one of the most popular human-readable data formats, and is widely used in Web and Data-intensive applications. Unfortunately, reading (i.e., parsing) and processing JSON data is often a performance bottleneck due to the inherent textual nature of JSON. Recent …
Machine-learning based approaches [3] are increasingly used to solve a number of different compiler optimization problems. In this project, we want to explore ML-based techniques in the context of the Graal compiler [1] and its Truffle [2] language implementation framework, to improve the performance …
Data processing systems such as Apache Spark [1] rely on runtime code generation [2] to speedup query execution. In this context, code generation typically translates a SQL query to some executable Java code, which is capable of delivering high performance compared to query interpretation. …
Profile-guided optimization (PGO) [1] is a compiler optimization technique that uses profiling data to improve program runtime performance. It relies on the intuition that runtime profiling data from previous executions can be used to drive optimization decisions. Unfortunately, collecting such profile data is expensive, …
Language Virtual Machines such as V8 or GraalVM [3] use graphs to represent code. One example graph representation is the so-called sea-of-nodes model [1]. Sea-of-nodes graphs of real-world programs have millions of edges, and are typically very hard to query, explore, and analyze. In …
In the Database group, we like to learn more about students’ understanding of query languages. We often do this through user studies, in which we also ask questions about their prior experience with the language. This prior experience may have a large influence on …
SQL has proven to be difficult for students to use effectively, and many errors are made. Various papers have been written on the types and frequencies of SQL errors. However, this does not mean that all errors are equal. Some errors may inhibit query formulation much more than others. …
Project description: This project is concerned with the recognition of symbols of piping and process equipment, together with the instrumentation and control devices, that appear on piping and instrumentation diagrams (P&ID). Each item on the P&ID is associated with a pipeline. Piping engineers often receive drawings …
Bayesian networks are a popular model in AI. Credal networks are a robust version of Bayesian networks created by replacing the conditional probability mass functions describing the nodes by conditional credal sets (sets of probability mass functions). Next to their nodes, Bayesian networks are …
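For concreteness, the standard way such inferences are summarized (as commonly defined in the credal-networks literature) is via lower and upper probabilities over a credal set K:

```latex
\underline{P}(x) = \min_{p \in K} p(x),
\qquad
\overline{P}(x) = \max_{p \in K} p(x),
```

so every query returns an interval rather than a single number.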
In anomaly detection, we aim to identify unusual instances in different applications, including malicious user detection in online social networks (OSNs), fraud detection, and suspicious bank transaction detection. Most of the proposed anomaly detection methods depend on the network structure, as some specific structural patterns can convey …
Reinforcement learning (RL) is a computational approach to automating goal-directed decision making. In this project, we will use the framework of Markov decision processes. Fairness in reinforcement learning [1] deals with removing bias from the decisions made by the algorithms. Bias or discrimination in …
Reinforcement learning (RL) is a computational approach to automating goal-directed decision making. Reinforcement learning problems use either the framework of multi-armed bandits or Markov decision processes (or their variants). In some cases, RL solutions are sample inefficient and costly. To address this issue, some …
Reinforcement learning (RL) is a computational approach to automating goal-directed decision making using the feedback observed by the learning agent. In this project, we will be using the framework of multi-armed bandits and Markov decision processes. Observational data collected from real-world systems can mostly …
Simulation plays an important role in analyzing complex industrial systems when analytical solutions are unavailable. It has been successfully applied to a variety of areas, such as supply chain systems, healthcare systems, and manufacturing systems. Simulation optimization, i.e., the search for a design or solution …
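Formally, simulation optimization is typically posed as minimizing an expectation that can only be estimated through noisy simulation replications:

```latex
\min_{x \in \Theta} \; g(x) = \mathbb{E}\left[ f(x, \xi) \right],
```

where x is a candidate design, ξ captures the randomness of one simulation run, and g has no closed form.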
See PDF. As attachment, see also https://wwwis.win.tue.nl/~wouter/MSc/Bart.pdf
See PDF
In wind farms, one source of reduction in power generation by the turbines is the reduction of wind speed in the wake downstream of each turbine's rotor. Namely, a turbine downstream in the wind direction of another will effectively experience wind with a reduced …
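As a quantitative anchor, the classical Jensen (Park) wake model (a first-order engineering approximation, not necessarily the model used in this project) puts the normalized velocity deficit at distance x downstream of a rotor of diameter D at

```latex
\frac{\Delta u}{u_\infty} = \left( 1 - \sqrt{1 - C_T} \right) \left( \frac{D}{D + 2kx} \right)^{2},
```

where C_T is the turbine's thrust coefficient and k is an empirical wake-decay constant.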
Recommender Systems (RSs) have emerged as a way to help users find relevant information as online item catalogs increased in size. There is an increasing interest in systems that produce recommendations that are not only relevant, but also diverse [1]. In addition to users, increased …
---UPDATE---: This project is now taken by Jonas Niederle. Nanopore sequencing is a third-generation sequencing method that directly measures long DNA or RNA molecules (Figure 1). The method works by translocating a single DNA strand through a nanopore, in which an electric current signal is measured. The …
--update--: This project is now taken by Tijs Teulings. The topic of the project is the simulation of bubbles with deep generative models. Bubbles are a fascinating phenomenon in multiphase flow, and they play an important role in chemical and industrial processes. Bubbles can be simulated well with …
--- UPDATE ---: This project is now taken by Tim van Engeland. Meta-learning (also referred to as learning to learn) is a set of Machine Learning techniques that aim to learn quickly from a few given examples in changing environments [1]. One instantiation of the meta-learning …
Your lecturers here at the university spend a lot of time creating new exercises for our students, both for weekly assignments and for exams. If you extrapolate this to universities and professional training globally, this is a tremendous effort and use of time. It …
SQL is difficult to use effectively, and users make many errors. Error types and frequencies in SQL have been analyzed by various researchers, such as Ahadi, Prior, Behbood and Lister, and Taipalus and Siponen. One method of problem solving that computer scientists apply is posting …
--- Subproject 1 has been filled. Subproject 2 is still open. In this project, we work together with the Dutch south-west Early Psoriatic Arthritis Registry (DEPAR), which is a collaboration of 15 medical centers in the Netherlands that aims to investigate which patient characteristics, measurements …
See PDF
See PDF
Introduction: Artificial intelligence (AI) has shown great promise in different domains, including the clinical domain. However, the application of the developed AI models in clinical practice has remained limited, mainly due to the lack of model explainability. Clinicians, in general, want to know why an …
See PDF
Query formulation in SQL is difficult for novices, and many errors are made. Existing research has focused on registering error types and frequencies. Not much attention has been paid to solving these problems. One of the problems in SQL is with …
Company: Datacation / aerovision.ai. Location: Eindhoven (AI Innovation Center at High Tech Campus) or Amsterdam (VU). Project description: Aerovision.ai is a start-up that is building a no-code A.I. platform for drone companies. With this A.I. platform, companies can train, deploy and evaluate their customized computer vision algorithms, …
Correlations are extensively used in all data-intensive disciplines, to identify relations between the data (e.g., relations between stocks, or between medical conditions and genetic factors). The 'industry-standard' correlations are pairwise correlations, i.e., correlations between two variables. Multivariate correlations are correlations between three or more variables. …
Granger causality is among the standard functions for quantifying causal relationships between time series (e.g., closing prices of stocks). However, naïve computation of Granger causality requires pairwise comparisons between all time series, which comes with quadratic complexity. In this project you will focus on …
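For a single pair of series the test itself is routine; a minimal example with statsmodels on synthetic data:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + rng.normal(scale=0.5)  # y depends on lagged x

# Tests whether the 2nd column (x) Granger-causes the 1st column (y).
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```

The quadratic blow-up comes from repeating such a test for every ordered pair of series, which is precisely the cost this project aims to reduce.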
In a classification task, some instances are classified more robustly than others. Namely, even with a large modification of the training set, these instances (in the test set) will be assigned to the same class. Other instances are non-robust in the sense that a …
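A minimal sketch of how per-instance robustness could be measured empirically (bootstrap retraining plus prediction agreement; an illustrative protocol, not this project's definition):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
preds = []
for _ in range(50):
    # "Large modification of the training set": a bootstrap resample.
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    preds.append(clf.predict(X_te))

preds = np.stack(preds)                          # (runs, test instances)
majority = (preds.mean(axis=0) > 0.5).astype(int)
robustness = (preds == majority).mean(axis=0)    # per-instance agreement
print("least robust test instances:", np.argsort(robustness)[:5])
```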