--- update --- These projects are no longer available. Theonymfi Anogeianaki will work on FairML.
1. Bayesian inference
We have been doing ‘traditional’ machine learning for years now at Floryn but have never investigated Bayesian modeling. We currently make use of probability estimates that come from our (frequentist) machine learning models, but we are very interested in seeing whether a Bayesian approach would be suited to the problem as well.
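As a rough illustration of what a Bayesian version of such a model could look like, here is a minimal sketch of Bayesian logistic regression in PyMC on made-up data; the features, priors and sample sizes are placeholders, not our actual model.

```python
import numpy as np
import pymc as pm

# Toy features and binary labels standing in for loan applications (illustrative only)
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

with pm.Model() as model:
    # Priors over the regression weights and intercept
    w = pm.Normal("w", mu=0.0, sigma=1.0, shape=3)
    b = pm.Normal("b", mu=0.0, sigma=1.0)
    # Bernoulli likelihood with a logistic link
    p = pm.Deterministic("p", pm.math.sigmoid(pm.math.dot(X, w) + b))
    pm.Bernoulli("obs", p=p, observed=y)
    # Draw posterior samples instead of fitting a single point estimate
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=42)

# The posterior over p gives a distribution around each predicted probability
print(trace.posterior["p"].mean(dim=("chain", "draw")).values[:5])
```

The main difference from our frequentist models is that each prediction comes with a full posterior distribution rather than a single number, which makes it possible to quantify how uncertain the model is about a given application.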
2. Weak supervision
Our prediction model that runs when a company requests a loan from us is currently trained on target labels from our domain experts (risk underwriters). The experts judge the financial health of a company and decide whether we can provide them with a loan or not. We then try to reproduce this decision using a machine learning model. We consider our target labels noisy, since humans can also make mistakes: are we learning the right thing, or are the labels we use ‘weak’ labels?
A potential solution for this might be Weak Supervision, for example using Snorkel, and we are eager to explore this with a graduate student.
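To make the idea concrete, here is a minimal sketch of the Snorkel workflow on invented data; the labeling functions and features are hypothetical placeholders, whereas in practice they would encode our underwriters’ heuristics.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

APPROVE, REJECT, ABSTAIN = 1, 0, -1

# Hypothetical heuristics over application features
@labeling_function()
def lf_negative_balance(x):
    return REJECT if x.avg_balance < 0 else ABSTAIN

@labeling_function()
def lf_long_track_record(x):
    return APPROVE if x.years_active >= 5 else ABSTAIN

# Toy applications (illustrative only)
df = pd.DataFrame({
    "avg_balance": [12000, -300, 4500],
    "years_active": [7, 1, 6],
})

# Apply all labeling functions to get a matrix of noisy votes
lfs = [lf_negative_balance, lf_long_track_record]
L = PandasLFApplier(lfs=lfs).apply(df)

# Combine the noisy votes into probabilistic training labels
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L, n_epochs=500, seed=42)
print(label_model.predict_proba(L))
```

The probabilistic labels produced by the label model could then replace (or complement) the underwriters’ decisions as training targets for the downstream model.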
3. Money laundering / fraud / payment problem detection
Similar to banks, Floryn has compliance duties, e.g. we need to be certain that we don’t support money laundering, fraud or other sketchy business. We have tools in place for this already, but alternatives using machine learning are definitely worth exploring.
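One possible direction (not something we have settled on) is unsupervised anomaly detection, which can flag unusual payments for human review. Below is a minimal sketch using scikit-learn’s IsolationForest on made-up transaction features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up transaction features: amount, hour of day, days since previous payment
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.lognormal(mean=6, sigma=0.5, size=500),
    rng.integers(8, 18, size=500),
    rng.integers(1, 30, size=500),
])
suspicious = np.array([[250000, 3, 0], [180000, 2, 0]])  # large night-time payments
X = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector; predictions of -1 mark outliers
clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = clf.predict(X)
print(np.where(flags == -1)[0])  # indices of transactions to hand to compliance
```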
4. Ethical and trustworthy AI
On 8 April 2019, the High-Level Expert Group on AI of the European Union presented Ethics Guidelines for Trustworthy Artificial Intelligence. The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy.
What can we do at Floryn to ensure that our use of machine learning is ethical and trustworthy? What tools can we use to evaluate and monitor our decisions for bias?
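One concrete starting point for the monitoring question is to compute fairness metrics per group. Below is a minimal sketch using the Fairlearn library; the decisions and the sensitive attribute are hypothetical and only serve to show the mechanics.

```python
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference, selection_rate

# Hypothetical true outcomes, model decisions, and a sensitive attribute
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Approval (selection) rate per group, plus an aggregate disparity measure
frame = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred,
                    sensitive_features=group)
print(frame.by_group)
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```

Metrics like these could be tracked over time alongside our existing model monitoring to spot when decisions start to diverge between groups.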