The question of how the finance industry can best manage data, and maximise its value, is nothing new. As a speaker at the FIMA Europe event in London outlined yesterday morning, regulators have now been considering it for almost thirty years.
As far back as 1995, the industry grappled with the first piece of regulation designed to protect and regulate personal data, the European Data Protection Directive, which laid the groundwork for a period of sustained growth in how banks and other companies deployed data, particularly data sourced online.
Perhaps the next major event was the global financial crisis of 2008, which prompted the industry to re-examine its operations across practically every function. Banks increasingly sought to work out how their use of data affected risk management, and how their use – or indeed misuse – of data could impact trading positions and other areas of the business.
This led to the emergence of the landmark BCBS 239 rules from the Basel Committee, a set of 14 principles that established a banking standard for risk data aggregation and reporting. For the first time, there was a heavy emphasis on the internal procedures banks and financial institutions had to follow in order to ensure they were using high-quality data in a safe and transparent way.
In more recent years, the focus has shifted to GDPR and the rights individuals have over their personal data – the power consumers should have to own that data and to understand exactly how their financial institutions are using it.
The Covid-19 pandemic, which shifted life and business almost entirely online for several years, prompted businesses and individuals alike to think about their data far more carefully.
As several FIMA delegates remarked, questions over how financial institutions collect and deploy data are therefore nothing new. But they have arguably become more prominent recently, particularly given their centrality to practices such as quantitative trading, which are overwhelmingly focused on gathering, cleaning, and normalising data from disparate sources to power the models that, it is hoped, will generate alpha.
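To make that pipeline a little more concrete, below is a minimal sketch of the gather-clean-normalise step such desks describe, written in Python with pandas. The vendor file names, column labels, and the choice of daily returns as the model input are illustrative assumptions, not details taken from the FIMA discussion.

```python
import pandas as pd

# Hypothetical price feeds from two vendors with different conventions
# (file names, column labels, and schemas are illustrative assumptions).
vendor_a = pd.read_csv("vendor_a_prices.csv", parse_dates=["trade_date"])
vendor_b = pd.read_csv("vendor_b_prices.csv", parse_dates=["Date"])

# Map each feed onto a common schema: one row per (date, ticker) with a close price.
vendor_a = vendor_a.rename(columns={"trade_date": "date", "symbol": "ticker", "close_px": "close"})
vendor_b = vendor_b.rename(columns={"Date": "date", "RIC": "ticker", "ClosePrice": "close"})

combined = pd.concat([vendor_a, vendor_b], ignore_index=True)

# Basic cleaning: drop missing prices, remove duplicate (date, ticker) rows, enforce types.
combined = (
    combined.dropna(subset=["close"])
            .drop_duplicates(subset=["date", "ticker"], keep="last")
            .astype({"ticker": "string", "close": "float64"})
            .sort_values(["ticker", "date"])
)

# Normalise prices into daily returns, a typical input for downstream models.
combined["return"] = combined.groupby("ticker")["close"].pct_change()
```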
The fractured regulatory and geopolitical climate is another element of the data issue that has become more prominent, one FIMA speaker noted. “There are questions around data sovereignty – how do we have information and processes installed in a compliant way as a result of country-specific laws concerning data access, data sharing, and data transfers?”
“When we get into the world of how we move data from application to application, we’ve got a question from a geopolitical standpoint: what data do we need to send? What data should we send? And to which countries?”
The rise of artificial intelligence and machine learning – technologies able to analyse vast swathes of data at speed and with a high degree of accuracy – means that such thinking about how finance approaches data will be critical in the years ahead. As another speaker argued, “AI is powered by data, and you cannot deliver AI at all without having high-quality, well-governed data. But with AI and other technology, we can bring together disparate parts of information to enable acceleration and digital transformation.”
But perhaps the rise of AI, and the questions around data it raises, boil down to the same fundamental issue the industry has faced for decades – how best to leverage data to achieve optimal commercial outcomes while protecting consumers and managing risk. As one speaker put it: “The key is thinking about the commercial opportunity, having joined-up thinking in terms of the data, and using that to find new understanding.”
Author: Harry Clynch
#FIMA #Data #Finance #Regulation #AI