
Navigating the Crossroads: Striking a Balance between Fintech Innovation and EU Consumer Protection in the Era of AI

Fintech, or financial technology, refers to the innovative solutions developed for various financial services such as online banking, mobile payments, and cryptocurrency. These technologies are becoming increasingly integrated into consumers’ daily lives. They support the growth of niche markets in Europe, including alternative finance, crowdfunding, peer-to-peer lending, automated loans, and investment management. None of these services would work so efficiently, or even at all, were it not for the Artificial Intelligence (AI) systems and Machine-Learning (ML) enabled devices that underpin and facilitate the billions of daily computations these services require.

These automated systems are designed with self-learning mechanisms and are thus developed and trained to adapt and evolve continually, making their decision-making processes difficult to understand. This in turn makes it difficult to see how and why financial decisions are being made: why, for instance, some people are approved for loans while others are rejected. The machines decide… and we trust them.

Just one question: is this automated decision-making, with no human oversight, legal?

The global fintech industry is growing, due to a surge in startups[1]. In 2022, 22% of European unicorns were fintech, raising $22.2 billion[2]. However, by late 2022, the sector saw restructuring and layoffs. In the first half of 2023, European fintech funding dropped to €4.6 billion, down from €15.3 billion, due to tighter financial markets and a shift away from high-risk investments[3].

In tightening markets, banks and fintech companies look to reduce costs. An increasingly popular and cost-effective business model employed by fintech companies is the so-called alternative credit scoring model[4]. This model uses easily available data, such as digital footprints, to determine creditworthiness[4].

Using alternative credit scoring models, fintech companies and financial institutions evaluate people based on a multitude of non-traditional parameters like their mobile spending history, their utility bill payments, their social media environments and their mobile in-app purchases.

Lenders use the phone number or email provided on loan applications to look up customers’ social media profiles on Meta, Twitter, Telegram, Snapchat and their accounts on other platforms like Airbnb, LinkedIn, Pinterest, Microsoft 365, and Discord.

Manually reviewing massive amounts of alternative credit data in all its various formats is where AI algorithms and ML models become indispensable. An AI system in finance can, very cheaply and efficiently, scan vast arrays of publicly available data and identify patterns, even in unstructured data.

Combining traditional and alternative data sources for lending helps enrich conventional banking data. Details like behavioural analyses of customers' different social accounts and their connections with other people are collected and used. Their interactions with websites, and even text and audio data from credit applications and previously recorded customer service conversations, are collected, and inferences are made about the creditworthiness of individuals. All personal data that can be found on the Internet about an individual may be used and analysed.
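To make the idea concrete, the scoring approach described above can be sketched as a weighted combination of traditional and alternative signals squashed into a probability-like score. This is a purely illustrative toy: the feature names, weights, and threshold below are invented for demonstration and are not taken from any real scoring model, which would be trained on large datasets rather than hand-tuned.

```python
import math

def alternative_credit_score(applicant: dict) -> float:
    """Combine traditional and alternative signals into a 0-1 score.

    All weights are hypothetical stand-ins for what a trained model
    might assign to each normalised (0-1) feature.
    """
    weights = {
        "utility_bills_paid_on_time": 0.35,   # alternative data
        "mobile_spend_stability": 0.25,       # alternative data
        "social_graph_risk": -0.20,           # alternative data (penalty)
        "traditional_credit_history": 0.40,   # conventional banking data
    }
    # Weighted sum of the applicant's feature values.
    raw = sum(weights[k] * applicant.get(k, 0.0) for k in weights)
    # Logistic squash into the 0-1 range around an invented operating point.
    return 1 / (1 + math.exp(-4 * (raw - 0.4)))

applicant = {
    "utility_bills_paid_on_time": 0.9,
    "mobile_spend_stability": 0.7,
    "social_graph_risk": 0.2,
    "traditional_credit_history": 0.8,
}
score = alternative_credit_score(applicant)
approved = score >= 0.5  # a fully automated decision, no human in the loop
```

The final line is the crux of the legal question discussed later: the approve/reject outcome is produced entirely by the computation, with no human review of the score.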

Many fintech companies are already using alternative credit scoring models in their financial products, either by developing credit applications themselves or by buying credit reports or credit scores from credit reference agencies (CRAs). These credit scores automate creditworthiness calculations and reduce costs.

As already pointed out above, there is, however, limited scope for human oversight in these AI-fuelled models as the models learn and adapt by themselves. CRAs use AI to generate credit scores for individuals, which they then sell to fintech or financial institutions, who in turn use these scores to decide whether the individual is creditworthy or not, placing 100% faith in the machines’ calculations.  

The whole aim of these models is to build algorithms that can process vast amounts of personal data, and make predictions, from that data, i.e. replacing the human aspect in the approach. This means that the algorithm operates dynamically, adapting itself to changes in the data, relying not only on statistics, but also on mathematical optimisation. The best part about these models is that they reduce costs and time by eliminating the intervention of humans in the equation.

These AI and ML models, however, are not perfect. When creating these models, developers must use large quantities of data to train them. If this data contains biases, the models can learn and replicate those biases, leading to skewed or prejudiced outcomes. Similarly, errors can be introduced if the initial programming or the training data is of low quality and contains mistakes or inaccuracies. The performance and fairness of AI and ML models therefore depend heavily on the quality and representativeness of the data upon which they are trained, and on the soundness of the algorithms used by the system. Hence the need for human vigilance: someone must check that these systems are making sound and fair choices before they are used to make life-changing decisions about individuals.
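The bias-replication problem can be demonstrated with a deliberately simplified, synthetic example. Here the "historical" lending data is generated with a built-in bias (one group historically needed a higher ability level to be approved), and a trivial stand-in for a trained model simply learns the historical approval bar per group. Everything below, including the group labels and thresholds, is invented for illustration.

```python
import random

random.seed(0)

# Synthetic historical lending data with a built-in bias:
# group "B" applicants were historically held to a higher bar
# than group "A" applicants at the same true repayment ability.
def make_biased_dataset(n=1000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        ability = random.random()                 # true repayment ability, 0-1
        threshold = 0.5 if group == "A" else 0.7  # the historical bias
        approved = ability > threshold
        data.append((group, ability, approved))
    return data

train = make_biased_dataset()

# A toy "model" that learns the historical approval bar per group --
# a stand-in for what a flexible ML model would pick up from the labels.
def learn_thresholds(data):
    thresholds = {}
    for g in ("A", "B"):
        approved_abilities = [a for grp, a, ok in data if grp == g and ok]
        thresholds[g] = min(approved_abilities)  # lowest ability ever approved
    return thresholds

model = learn_thresholds(train)
# The learned model reproduces the historical bias: the bar it applies
# to group "B" is higher than for group "A", even though repayment
# ability is generated identically for both groups.
```

Nothing in the training step "decides" to discriminate; the model faithfully reproduces whatever pattern the historical labels contain, which is exactly why biased training data yields biased outcomes.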

The European Digital Regulation Landscape

In light of the above, the obligations of fintech companies and financial institutions under the General Data Protection Regulation (GDPR) are problematic when using alternative credit scoring models. Article 22 of the GDPR requires human oversight when decisions are automated by technology:

‘The data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.’

Until recently, it was never clear whether fintech companies or traditional financial institutions employing alternative credit scoring models were compliant with these data protection obligations in their AI-fuelled business models. That changed in December 2023, when the situation was clarified by the European Court of Justice (ECJ). The case, known as Case C-634/21, was the first time the ECJ was asked to interpret Article 22 of the GDPR[5].

Article 22 grants data subjects the right "not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her"[4]. This case primarily dealt with whether the automated establishment of a credit score amounts to an automated decision within the meaning of Article 22(1) of the GDPR.

The ECJ ruled that a credit reference agency engages in automated individual decision-making when it creates credit repayment probability scores from automated processing and where lenders use that probability value[6] to make financial decisions about the individual.

The implications of this case are significant not only for credit scoring companies but also for fintech companies and financial institutions. Fintech companies will now need to provide more transparency about their scoring methods. They will need to implement safeguards for individuals, such as the right to obtain human intervention, to express their point of view, and to contest the automated decision.

December 2023 was a bad month for fintech companies relying on AI-driven business models for another reason as well. In addition to Case C-634/21, a provisional agreement on the forthcoming AI Act was reached on 9 December. The EU AI Act also sets out six general principles for the use of AI systems in the EU, the first of which stipulates human agency and oversight for all AI systems.

Moving into 2024, observing how fintech and traditional financial institutions navigate the intricacies of compliance with both Article 22 of the GDPR and the forthcoming EU AI Act promises to be a fascinating and complex journey, one that may last not months but years as Europe seeks to keep pace with AI innovation while also ensuring regulatory compliance. Undoubtedly, the spotlight will be on research and cross-disciplinary collaboration between legal experts and technologists in the fintech landscape for the foreseeable future.
