AI expertise for financial professionals

With the spread of AI, it is no longer only data scientists who need to learn about it. AI literacy is quickly becoming a requirement for professionals across all industries. I recently attended an overview of AI for finance professionals in Asia, organized by SLASSCOM Sri Lanka. Here are the main points that were covered:

  • AI can be intimidating. Until recently (and sometimes even now!), many people believed that AI was only accessible to those with PhDs and a solid math background. This is not true. That level of knowledge is required if you want to create new types of AI, but not if you want to use AI in a field where you already have relevant expertise. In that case, all that is required is that you understand enough about AI to know how to apply it effectively in your domain, know what tools and services are available to you, and know what AI regulations your domain must follow in order to use AI safely and securely.
  • The rest of this article answers these three questions for the financial industry in general.

The AI Lifecycle

Although there are thousands of AI techniques and tools available, the enterprise AI lifecycle tends to follow a predictable pattern, shown in Figure 1. The lifecycle begins with identifying the business need. Next, relevant data is collected and processed. Once the data is available, an AI algorithm is selected through experimentation and evaluation. A model that works well at the experimental level can then be deployed (put into production) and integrated into the enterprise. Once integrated into the business use case, the AI is monitored to determine whether it actually helped meet the business need. This cycle often repeats many times, with the AI improving in each iteration based on lessons learned from previous ones.
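To make the lifecycle concrete, here is a minimal Python sketch of its middle stages: experimentation, evaluation, and the decision to deploy. It assumes scikit-learn is available, and the loan-default framing, synthetic data, and acceptance threshold are hypothetical stand-ins for a real business problem, not a prescribed approach.

```python
# A minimal sketch of the "experiment, evaluate, decide to deploy" portion of the
# AI lifecycle. The business need, data, and threshold below are hypothetical.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# 1. Business need (assumed): predict which loan applications are likely to default.
# 2. Data collection/processing: replaced here with a synthetic dataset.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 3. Experimentation and evaluation: fit a candidate model and measure it on held-out data.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")

# 4. Deployment decision: promote the model only if it clears an agreed threshold.
DEPLOYMENT_THRESHOLD = 0.75  # hypothetical acceptance criterion
if auc >= DEPLOYMENT_THRESHOLD:
    print("Model meets the bar; package, deploy, and monitor it.")
else:
    print("Model falls short; iterate on data and algorithms.")
```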

While the lifecycle itself is broadly similar across industries, the specifics within each stage are determined by the industry and its requirements. For example, heavily regulated industries like finance are likely to enforce security requirements at every stage of data handling and AI development, and to require extensive documentation before AI that can affect people's livelihoods goes into production. As an example, here is an SEC requirement for model risk management.

Lots of tools!

The good news is that there are now many tools available to help carry out the AI lifecycle outlined in Figure 1. These tools range from turnkey services to infrastructure software, allowing you and your organization to choose those that match your (desired) level of expertise. For example:

  • If your goal is to have AI created and used by finance professionals with minimal or no data science experience, there are a number of SaaS (Software as a Service) options with pre-trained AIs that can be customized to your needs. These are typically for more general services (e.g., customer-facing chatbots, marketing insights, etc.) that don't require sensitive custom data from your organization.
  • If you need to create custom AI that learns from your data, there are still many tools available, ranging from no-code to low-code to code. You can find some examples here, and there are many more. Additionally, the trend toward AutoML has enabled many professionals to access a wide range of AI algorithms without a deep understanding of how they are constructed (or the coding expertise needed to program them); a rough sketch of what such automated model selection does is shown after this list. However, it helps to understand which algorithms are appropriate for different use cases, especially if your organization or use case is subject to industry regulations.
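To give a feel for what AutoML-style tools automate, here is a simplified Python sketch (the use of scikit-learn and synthetic data are my own assumptions): several candidate algorithms are evaluated with cross-validation and the best-scoring one is selected. Real AutoML products automate far more than this, including feature preprocessing and hyperparameter search.

```python
# A rough sketch of what AutoML-style tools automate: trying several candidate
# algorithms and keeping the one that scores best under cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

# Three illustrative candidates; a real tool would search many more options.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1_000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each candidate with 5-fold cross-validation and pick the best.
scores = {name: cross_val_score(est, X, y, cv=5, scoring="roc_auc").mean()
          for name, est in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print(f"Selected model: {best}")
```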

Risk management

As mentioned several times, finance is one of the most regulated industries, not just with respect to AI but in general. Unlike some industries where AI regulation is only just beginning, finance already has regulations on data protection and model risk. In addition, newer general rules on consumer data protection and the right to explanation, in laws such as the GDPR and the CCPA, also apply. Some additional areas of risk management to consider when applying AI include:

  • Privacy (and Best Privacy Practices). Are you allowed to use the data you want to use to train your AI? Do you handle the data carefully to minimize risk? Here are some guidelines for good data practices.
  • Fairness and bias (AI trust). What are you doing in your AI lifecycle to ensure your AI is not biased against a subset of the population?
  • AI correctness in production. Once your AI is in production, what steps do you take to ensure it is still making reasonable predictions? One common step is monitoring for data drift; see the sketch after this list. Check out a reference here for an overview of AI integrity.
  • AI security. What steps have you taken to ensure your AI cannot be hacked or to determine if your AI has been hacked?
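As a concrete example of the "AI correctness in production" point above, below is a minimal Python sketch of one widely used monitoring check, the Population Stability Index (PSI), which flags when the data a model sees in production has drifted away from the data it was trained on. The synthetic data and the 0.1/0.25 alert thresholds are illustrative assumptions (a common rule of thumb), not a prescription.

```python
# A minimal sketch of one production-monitoring check: the Population Stability
# Index (PSI), comparing a feature's current distribution against its
# training-time baseline. Data here is synthetic.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compute PSI between a baseline sample and a current sample of one feature."""
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Clip both samples into the baseline range so outliers land in the end bins.
    expected = np.clip(expected, edges[0], edges[-1])
    actual = np.clip(actual, edges[0], edges[-1])
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)  # feature at training time
current = rng.normal(loc=0.3, scale=1.1, size=10_000)   # feature in production

psi = population_stability_index(baseline, current)
if psi < 0.1:
    status = "stable"
elif psi < 0.25:
    status = "moderate shift: investigate"
else:
    status = "major shift: consider retraining"
print(f"PSI = {psi:.3f} ({status})")
```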

AI has already proven to be tremendously valuable to finance, and we are probably only at the beginning of what AI can achieve. The three areas above will hopefully help finance professionals develop the necessary AI skills to bring this value to their organization.
