
Project: Feature selection, sparse neural networks, truly sparse implementations, and societal challenges


Context of the work:

Deep Learning (DL) is a central area of machine learning and has proven to be a successful tool across all machine learning paradigms, i.e., supervised learning, unsupervised learning, and reinforcement learning. Still, the scalability of DL models is limited by the many redundant connections in densely connected artificial neural networks. Stemming from our previous work [1], the emerging field of sparse training suggests that training sparse neural networks directly from scratch can lead to better performance than dense training, while having reduced computational costs [2] and, implicitly, reduced energy consumption [4] and a lower environmental impact.
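To give a feel for what sparse training means in practice, the sketch below shows a SET-style prune-and-regrow step in the spirit of [1]: the smallest-magnitude active weights are dropped and the same number of connections are regrown at random elsewhere. The function name, layer dimensions, and drop fraction are illustrative assumptions, not the exact schedule from [1].

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_and_regrow(weights, mask, drop_fraction=0.3):
    """One illustrative topology update: drop the smallest-magnitude active
    weights, then regrow the same number of connections at random positions.
    (A sketch in the spirit of [1]; the actual method differs in detail.)"""
    active = np.flatnonzero(mask.ravel())
    n_drop = int(drop_fraction * active.size)
    # Prune: zero out the n_drop active weights with the smallest magnitude.
    magnitudes = np.abs(weights.ravel()[active])
    dropped = active[np.argsort(magnitudes)[:n_drop]]
    mask.ravel()[dropped] = 0
    weights.ravel()[dropped] = 0.0
    # Regrow: activate n_drop currently inactive positions, chosen at random.
    inactive = np.flatnonzero(mask.ravel() == 0)
    regrown = rng.choice(inactive, size=n_drop, replace=False)
    mask.ravel()[regrown] = 1
    weights.ravel()[regrown] = rng.normal(0.0, 0.01, size=n_drop)
    return weights, mask

# Toy usage: a 4x5 layer initialized at roughly 40% density.
w = rng.normal(size=(4, 5))
m = (rng.random((4, 5)) < 0.4).astype(int)
w *= m
w, m = prune_and_regrow(w, m)
```

Note that the update preserves the overall number of connections, so the memory and compute budget stays fixed while the topology evolves during training.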

Short description of the assignment:

As a young field, sparse training has many open research questions and many research directions that have not been explored yet. The goal of this assignment is to implement the feature selection method WAST (Where to pay Attention during Sparse Training) proposed in [3] in a truly sparse manner; algorithmic adjustments will also have to be made. The implementations from [4,5,6] can serve as a starting point. The findings will be tested on very high-dimensional data in the biological domain (e.g., genomic data).
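"Truly sparse" means the weights are stored and used in a sparse data structure (as in [5,6]), so cost scales with the number of connections rather than the full layer size. The sketch below shows a sparse forward pass with SciPy's CSR format and a rough feature-importance proxy (summed absolute incoming weights per input); the dimensions, density, and scoring rule are illustrative assumptions, not the exact criterion of WAST [3].

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)

# "Truly sparse": the weight matrix is stored in CSR format, so memory and
# compute scale with the number of connections (nnz), not with
# n_hidden * n_features. Dimensions/density are placeholders, not from [3].
n_features, n_hidden, density = 20_000, 200, 0.01
W = sparse_random(n_hidden, n_features, density=density,
                  random_state=0, format="csr")

def forward(x):
    """Sparse-dense product with ReLU: cost is O(nnz) per sample."""
    return np.maximum(W @ x, 0.0)

x = rng.normal(size=n_features)
h = forward(x)

# A WAST-style selection signal would rank input features by the importance
# of their incoming connections; summed |weight| per input neuron is used
# here as a rough proxy, not the exact attention criterion of [3].
feature_scores = np.asarray(np.abs(W).sum(axis=0)).ravel()
top_features = np.argsort(feature_scores)[::-1][:50]
```

With 20,000 input features at 1% density, the CSR matrix holds only a few hundred thousand values instead of four million, which is exactly why the truly sparse implementation matters on high-dimensional genomic data.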

Possible expected outcomes:

Algorithmic novelty, open-source software, and publishable results.


Prerequisites:
Basic Calculus and Optimization

Very good programming skills

Good understanding of artificial neural networks

Learning Objectives:

Upon successful completion of this project, the student will have learnt:

How to address a basic research question

Fundamental concepts behind sparse neural networks

Practical skills to implement artificial neural networks and to create an open-source software product 

Examples of previous MSc theses on the topic:


[1] D.C. Mocanu, E. Mocanu, P. Stone, P.H. Nguyen, M. Gibescu, A. Liotta: "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", Nature Communications, 2018.

[2] D.C. Mocanu, E. Mocanu, T. Pinto, S. Curci, P.H. Nguyen, M. Gibescu, D. Ernst, Z.A. Vale: "Sparse Training Theory for Scalable and Efficient Agents", AAMAS 2021.

[3] G. Sokar, Z. Atashgahi, M. Pechenizkiy, D.C. Mocanu: "Where to Pay Attention in Sparse Training for Feature Selection?", NeurIPS 2022; the camera-ready version can be provided by email on request.

[4] Z. Atashgahi, G. Sokar, T. van der Lee, E. Mocanu, D.C. Mocanu, R. Veldhuis, M. Pechenizkiy: "Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders", Machine Learning (ECMLPKDD 2022 journal track).

[5] S. Liu, D.C. Mocanu, A.R.R. Matavalam, Y. Pei, M. Pechenizkiy: "Sparse evolutionary Deep Learning with over one million artificial neurons on commodity hardware", NCAA journal, 2021.

[6] S. Curci, D.C. Mocanu, M. Pechenizkiy: "Truly Sparse Neural Networks at Scale", 2021.

Supervisor: Mykola Pechenizkiy

Secondary supervisor: Ghada Sokar