
7 things for firms to consider before taking the plunge into AI


Globally ranked influencer and AI expert Mark Lynd runs through the issues that financial organizations should weigh up before rushing into the technology – and the potential pitfalls for those that don’t.

AI is a buzzword, and not without cause; the potential of such a leap in how we use and leverage data is mind-boggling. The power to streamline, cut costs, sharpen insight and generate new revenue for the foreseeable future is very real. For firms struggling under the weight of legacy structures, it could make the difference between closing down and moving with the times. We have been discussing, researching and testing AI in larger firms for years, but now is the point at which banks and financial firms of all sizes are commissioning pilots of their own and deciding how they, too, will reinvent their systems, structures and services. That is why the AI acronym is suddenly everywhere, and why I think firms need to think hard and concretely about it. The question is not ‘Do we jump in?’; it is ‘Should we, and how?’.

Is using AI the best way to solve some of our strategic problems?

It is important to truly understand the financial services use-cases before wading into the AI arena. Artificial intelligence carries a lot of hype, so selecting it when other existing technologies could reasonably solve the problem faster or more cheaply is wasteful, and will likely put a big target on the project should strategic or internal priorities change – and they will. You can often solve simpler challenges with some combination of automation, plain code and/or analytics, as in the sketch below.
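To illustrate, a check like the one below needs nothing more than a few lines of plain code and a policy threshold – no model training required. The file name, column names and limits here are purely illustrative assumptions.

```python
# Hypothetical sketch: a "simpler challenge" solved with plain code and
# analytics rather than AI. File, column names and thresholds are assumptions.
import pandas as pd

payments = pd.read_csv("daily_payments.csv")   # assumed historical extract

# Flag payments that breach a fixed policy limit or hit a blocked country.
blocked_countries = {"XX", "YY"}   # placeholder country codes
flagged = payments[
    (payments["amount"] > 10_000) |
    (payments["country_code"].isin(blocked_countries))
]

print(f"{len(flagged)} of {len(payments)} payments need manual review")
```

If a fixed rule like this captures the business need, the cost and fragility of a learned model is hard to justify.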

In the financial services arena, AI-powered solutions are often used to detect problems predictively before they occur, drive efficiency in the environment, help customers select and use the right services, identify patterns that humans cannot, and even perform algorithmic trading with greater accuracy and speed – to name just a few. If you can determine that there is a strong use-case that genuinely calls for AI – in the form of machine learning, deep learning or neural networks, for example – then the next step is to ensure the availability of, and access to, the proper data, so the system can consume it, learn and help your organization meet its defined goals.
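To make one of those use-cases concrete, the hypothetical sketch below trains an off-the-shelf anomaly detector on past transactions and flags unusual new activity for review. The feature names and sample values are assumptions for illustration, not a production design.

```python
# Hypothetical sketch: flag unusual transactions with an off-the-shelf model.
# Feature names and sample values are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount, hour_of_day, days_since_last_txn, merchant_risk_score]
historical_txns = np.array([
    [42.10,  9, 1, 0.1],
    [18.75, 12, 2, 0.2],
    [55.00, 18, 1, 0.1],
    [23.40, 10, 3, 0.3],
    [61.20, 20, 1, 0.2],
])

# Train an unsupervised anomaly detector on past behaviour.
model = IsolationForest(contamination=0.05, random_state=0)
model.fit(historical_txns)

# Score new activity: -1 means "unusual, review it", 1 means "looks normal".
new_txns = np.array([
    [47.90,   11, 2, 0.2],   # ordinary purchase
    [9800.00,  3, 0, 0.9],   # large, late-night, high-risk merchant
])
print(model.predict(new_txns))
```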

It is important to ensure that the results derived from AI-powered projects are explainable and expressed in terms the business can use to make or support big decisions. Unfortunately, this is not always the case, and that can seriously affect the return on investment of a given project or, at a minimum, how its success is perceived within the business.
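One common way to keep results explainable is to report which inputs drive the model’s decisions, in business language rather than model internals. The sketch below assumes a tree-based classifier and illustrative feature names; it is an example of the idea, not a prescription.

```python
# Hypothetical sketch: translate a model's behaviour into business terms
# by ranking which inputs drive its decisions. Feature names are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["credit_utilisation", "missed_payments",
                 "account_age_months", "income_band"]

# Stand-in data; in practice this would be the firm's own labelled history.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Present the result as "which factors matter most", not raw model internals.
for name, weight in sorted(zip(feature_names, model.feature_importances_),
                           key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {weight:.2f}")
```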

Does our organization have the data needed to be successful with AI?

AI takes a lot of data to build revealing models and to train them properly. Not just the historical data on hand, but future collections or streams of data are important to consider at, or before, the start of an AI project. A common rule of thumb is that AI is only as good as its data, and it is one that should be fully vetted before you begin.

I would contend that AI models, like humans, use data to learn, and that their overall performance correlates more with the quality and quantity of that data than with the specific algorithm the model uses to learn. In fact, in the wider AI arena there have been substantive arguments over the past several years about whether the data or the algorithms matter more for learning.

Many contend that to tune models for greater precision and reduce the effect of noise, you need more and more data. While there are many theories and opinions on whether the data or the algorithm is more important, both are required, and it is usually the use-case that determines the answer.

The majority of top vendors in this space (IBM, Microsoft and Google) provide integrated development options that assist in collecting, cleaning and consuming the data for the model’s benefit. The more proper data you have, and the more finely your algorithms are tuned on that data, the better your model will perform.
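Before any of that tooling is engaged, a quick vetting pass over the candidate data can confirm whether the “AI is only as good as its data” rule is being honoured. The sketch below is hypothetical; the file, column names and the 20% threshold are assumptions for illustration.

```python
# Hypothetical sketch: a quick vetting pass over candidate training data
# before committing to an AI project. File and column names are assumptions.
import pandas as pd

df = pd.read_csv("transactions.csv")   # assumed historical extract

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_by_column": df.isna().mean().round(3).to_dict(),
    "date_range": (df["txn_date"].min(), df["txn_date"].max()),
}
print(report)

# A simple go/no-go rule of thumb: too many gaps and the model will
# learn the gaps, not the business.
if df.isna().mean().max() > 0.2:
    print("More than 20% missing in at least one column - fix collection first.")
```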


Is our existing technology environment and infrastructure capable of supporting AI?

How will AI be supported, and by whom – IT, finance, or a new group? Technology environments today are usually hybrid, with a mix of cloud and on-premise infrastructure in enterprises both large and small. These environments are in a constant state of flux and often have competing projects and agendas that change the resources available to any given project, so securing the appropriate resources up front and planning ahead for future needs is critical to success.

Because of the large data sets needed to train models to greater precision, there is a strong trend towards cloud environments that can scale dynamically and support large and growing AI models.

Moving the data into the cloud often has a useful side effect: it tends to bring greater knowledge of and care for the data, and it allows the data to be fine-tuned for its ultimate use within your model(s).

Another thing to keep in mind is that many of these algorithms can take considerable time to run through the data. For example, if you were building a model of seismic activity in and around the San Andreas fault, you would want to use seismic, survey, residential, geographic information systems, satellite and GPS data as available, to drive stronger learning and, ultimately, greater precision and insight.

This would add up to many millions of data points for the project, so having an environment that can support these kinds of activities is fundamentally important.

Another good example is a model supporting low-latency trading of stocks on a platform where decisions must be made in microseconds, based on an ever-changing set of streaming data. These kinds of applications require lots of data very fast and can be challenging, but the payoff can be substantial for firms willing to assume some of the risk of automated trading driven by artificial intelligence algorithms. Obviously, a low-latency environment is required, so the technology environment becomes even more important in these particular uses.

Many technology projects fail, so what are the risks of failure for AI in our organization?

It is well known that many technology projects fail for a variety of reasons – unclear goals, constrained resources, juggled priorities – and AI could easily become just another blip on the failure radar, so identifying the risks before moving forward with AI is crucial.

When assessing the risks around artificial intelligence projects, it is important to determine:
» Whether the goals are achievable
» Whether leadership supports the effort
» Whether the organization can support the project, resource-wise
» Whether we have the expertise to support the project
» Whether we can decipher the results so the business can take action
» What our remediation or backup plan is should the project go off course

Obviously, there are other considerations as well, but if these cannot be adequately answered then you should reconsider and seek better conditions.

How will using AI help the organization identify more customers and/or meet our strategic goals?

Clearly understanding the potential gains in utilizing artificial intelligence is one of the most important first steps before moving forward. If there are limited or unclear gains then leadership should be involved in determining whether the use of often scarce resources – such as time, money and people – is justified at this time.

One example: Hade Technologies uses its AI-based SaaS platform, Hadeplatform.com, to work through huge amounts of trading and company data, applying machine learning to better predict stock performance. It does this by going deeper in its data collection than some of its larger competitors.

Hade uses its own machine-learning algorithms, which are finely tuned but still learning and improving, and showcases the results in its MatriX Portfolio. This is an actively managed fund whose holdings are influenced by top-rated MatriX stocks. Since its inception in 2015, this AI-powered portfolio has outperformed the S&P 500 by more than 200%, is 63% more accurate than Wall Street and is 85% more accurate on risk alerts.

Hade’s artificial intelligence-based offering is extensive enough that its AI can also forecast product, regional and segment categories for top companies several years out – more than 200 long-term predictions – so its clients know what to expect in the future.

In the ultra-competitive financial services industry, it is often these kinds of successful use-cases that can make a difference between growth and failure.


Does the organization truly understand the ethics and bias issues when considering AI?

Ethics and bias are only now coming to the forefront of AI use. Because humans make the decisions about use-cases, program the models and select the data, a great deal of bias can creep into these projects. Several notable projects have been called out for bias in their models. Additionally, several high-profile figures, such as Elon Musk and the late Professor Stephen Hawking, have questioned whether we as a species appreciate that our use of AI could have sinister outcomes in the future.

Interestingly, there are several other important, but less visible, issues with implementing and utilizing AI in a broader sense. These include:

» Distribution of wealth – will greater use of AI by a few cause an even greater disparity in the distribution of wealth for the majority, which could even affect the stability of a country or region?

» Can AI and/or bots be trusted to trade huge blocks of stocks without any human interaction or oversight? Does the organization employing these systems have enough reserves to survive a substantial error by the AI-powered system(s)? We have seen a little of this already, but as more of these projects are deployed we should expect more incidents on a larger scale.

» Is there racial, geographical or ethnic bias in the systems that financial services firms offer and deliver to their clients? What testing, safeguards and compliance are in place to avoid these issues?

» As we interact with even more artificial systems, will it blur the line between behaviours and outcomes? What unintended outcomes will arise? Is the organization prepared to act responsibly should these issues arise?

Unfortunately, many organizations treat these systems as information technology projects rather than broader business initiatives, which often leads to mismatched goals, minimal oversight and under-resourced projects. So it is critical for leadership to be involved and to weigh ethics and bias, along with the other considerations, before green-lighting a given AI-based project.

Does our organization have a strategic plan to utilize, monitor and evaluate AI?

It is often the thing you did not think would be a problem that ultimately damages you. Because of this, the organization should strongly consider setting up an effective method or group to actively monitor and evaluate the questions above. This will ensure that the surrounding issues, challenges and outcomes are appropriately considered, giving the organization the best chance of success with AI.

AI-powered technologies are dramatically changing the way financial institutions carry out transactions and improving how they do business with their customers. However, there is still much to be done because, at present, most AI-powered technologies are not free from human intervention. Therefore, financial institutions must take practical steps when introducing AI to various areas across the firm. In the long run, the organization will benefit from the increased efficiency and competitive advantage that AI-powered technologies offer to the financial industry.


Mark Lynd, Relevant Track

Managing Partner, IBM Watson Influencer  

Named a finalist for Ernst & Young’s “Entrepreneur of the Year – Southwest Region” and consistently ranked among the world’s most influential figures on emerging technology, Mark Lynd is a thought leader dedicated to advising C-level executives on AI, blockchain, fintech, cybersecurity and due diligence.

Mark and his firm, Relevant Track, are a regular source of expertise for firms such as IBM Watson and for numerous publications, including the Wall Street Journal, InformationWeek, eWeek, CSO, Forbes and more.
