
Project: Generalizable, fair and explainable default predictors

Description

Context:

The financial sector is a tightly regulated environment. All models used in the financial sector are studied under the microscope of developers, validators, regulators, and eventually the end users (the clients) before these models can be deployed and used.


To assess whether a customer should be allowed to lease assets, the characteristics of the customer have to be considered. According to the recent European regulations on Artificial Intelligence, the use of Machine Learning (ML) models for scoring individuals (for example, to decide whether or not to grant them a loan) is considered a high-risk application.


At DLL, several ML models are deployed in different countries; the outputs (predictions) of these models are used to automatically decide whether financing should be granted to a customer, based on the information provided by the customer at the moment of the credit application. The decisions taken by these models must be explained to the customer upon request, and they must also be available for audit purposes to ensure that the model is still functioning as expected.


The Risk Analytics team of DLL will be offering one or more MSc thesis assignments focused on studying:

  • generalisability of ML models to new cases and application settings, including distribution shift, out-of-distribution (OOD) data, missing data, etc. (see the sketch after this list for a minimal illustration),
  • interpretability of ML models,
  • auditing and performance profiling of ML models.
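
As a rough, hypothetical illustration of the first and last topics above, the sketch below trains a default predictor on synthetic data (not DLL data or DLL's actual pipeline), runs a simple domain-classifier check for distribution shift, and computes permutation feature importances as a basic auditing and interpretability signal. It assumes Python with scikit-learn; all data, feature indices, and settings are made up for illustration.

    # Minimal, hypothetical sketch: distribution-shift check and audit signals for a default predictor.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict, train_test_split

    rng = np.random.RandomState(0)

    # Synthetic "credit applications": X holds applicant features, y is default (1) / no default (0).
    X, y = make_classification(n_samples=4000, n_features=8, n_informative=5,
                               weights=[0.9, 0.1], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    # 1) Default predictor and its holdout discrimination (a basic audit metric).
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"holdout ROC-AUC of the default predictor: {auc:.3f}")

    # 2) Distribution-shift check: a "domain classifier" tries to separate the training data
    #    from newly arriving applications. An AUC near 0.5 means the new data resembles the
    #    training data; a clearly higher AUC signals shift worth investigating.
    X_new = X_test + rng.normal(scale=0.5, size=X_test.shape)  # simulate a shifted market
    X_domain = np.vstack([X_train, X_new])
    y_domain = np.concatenate([np.zeros(len(X_train), dtype=int), np.ones(len(X_new), dtype=int)])
    shift_proba = cross_val_predict(GradientBoostingClassifier(random_state=0),
                                    X_domain, y_domain, cv=5, method="predict_proba")[:, 1]
    print(f"domain-classifier AUC (shift indicator): {roc_auc_score(y_domain, shift_proba):.3f}")

    # 3) Simple global interpretability/audit signal: permutation importance on the holdout set.
    imp = permutation_importance(model, X_test, y_test, n_repeats=10,
                                 scoring="roc_auc", random_state=0)
    for i in np.argsort(imp.importances_mean)[::-1][:3]:
        print(f"feature_{i}: mean importance {imp.importances_mean[i]:.4f}")
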
Details
Supervisor: Mykola Pechenizkiy
Secondary supervisor: DD (DLL)
External location: DLL