The advent of generative artificial intelligence (AI) and large language models (LLMs), like OpenAI’s ChatGPT, is significantly disrupting the financial services landscape. Previous applications of AI in banking focused on purpose-built use cases such as fraud detection, loan decisions and marketing strategies. The emergence of generative AI, however, represents a significant shift, enabling financial institutions to complete hundreds of disparate tasks with a single class of tools.
Generative AI and LLMs have democratized the field, making AI widely available, cost-effective and intuitive to apply across many domains. This growing pervasiveness and accessibility make it necessary for financial institutions to implement strategic policies and tools to control AI usage.
AI Integration Brings Risks
While generative AI offers many benefits, financial institutions must be wary of its inherent risks. An estimated three in four employees use AI tools secretly, which can expose sensitive data.
Although AI assistants are versatile tools, they often lack specialized context. Financial institution employees across all roles therefore need additional training to ensure that outputs are accurate and free of bias.
AI is also changing the fraud landscape, and data leakage poses a significant threat to financial institutions of all sizes. Although banks and credit unions have combated fraud effectively in the past, AI allows malicious actors to commit crimes faster and more effectively.
Since generative AI tools are easily accessible, any data entered by employees and vendors can be extracted by criminals or exposed in a data breach. Account holders are particularly vulnerable, often lacking the means and experience to identify fraudulent schemes. This is prompting financial institutions to invest in customer education and process design to reduce exposure to known fraud methods.
Vendor Selection Becomes a Priority
Selecting the appropriate generative AI vendor can be overwhelming, considering how saturated the market has become. There are countless variations and productizations of LLMs, and staying informed on the vendor space can be resource-intensive for both banks and credit unions.
The vendor selection process may seem daunting, but financial institutions can simplify it in several ways. Although thousands of product offerings are on the market, most are merely superficial configurations of a select few foundational models.
Banks and credit unions can operate on the assumption that most vendor solutions share over 80% of their DNA with the foundational models they were built upon. This knowledge makes sifting through vendors more manageable and helps financial institutions make better vendor-selection decisions.
Additionally, financial institutions should know how their vendors use AI. Banks commonly use AI unknowingly through vendor tools such as Adobe Acrobat and Salesforce, which now embed generative AI features. By understanding how their vendors incorporate AI and which foundational models those solutions are built on, banks can secure their data across multiple contracts.
Many vendors have begun acquiring, building or partnering to add new AI capabilities and increase efficiency; notable examples include ServiceNow, Alteryx and Adobe Acrobat. Without proper protective clauses in existing contracts, compliance can be compromised as a result. This concern has led some financial institutions to delay adoption and miss out on operational efficiencies and cost savings.
Considerations for Safe Adoption
Taking foundational steps toward safe AI adoption enables financial institutions to implement the technology risk-consciously. A sound AI policy that addresses data privacy, risk management and regulatory compliance helps curb internal misuse and data leakage.
Moreover, implementing internal LLM assistants reduces the incentive for employees to sidestep AI restrictions. Financial institutions should provide secure internal tools (e.g., Azure OpenAI, Microsoft Copilot) that give employees safe alternatives for working with proprietary data, as the sketch below illustrates.
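As a concrete illustration, the following Python sketch shows one way an internal assistant might route employee prompts to a private Azure OpenAI deployment, so proprietary data stays inside the institution's own cloud tenant rather than passing through a public chatbot. The endpoint, deployment name ("internal-gpt4o") and prompts are hypothetical placeholders under assumed configuration, not a prescribed implementation.

```python
# Minimal sketch: routing an employee prompt to a private Azure OpenAI
# deployment instead of a public consumer chatbot. The endpoint,
# deployment name, and environment variable names are illustrative.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # institution-controlled resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # kept in a secrets manager, not in code
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="internal-gpt4o",  # hypothetical name of the bank's own Azure deployment
    messages=[
        {"role": "system", "content": "You are an internal assistant for bank staff."},
        {"role": "user", "content": "Summarize the attached loan policy memo."},
    ],
)
print(response.choices[0].message.content)
```

Pairing an internal endpoint like this with logging and access controls gives compliance teams visibility into AI usage that public consumer tools cannot offer.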
The banking ecosystem will be shaped by AI. As more financial institutions explore and eventually adopt these technologies, they must maintain security and compliance through a structured and proactive strategy.
Connor Heaton is the director of artificial intelligence at SRM, an advisory firm serving financial institutions in North America and across the globe. He leads client engagements focused on artificial intelligence and helps organizations understand and adopt disruptive technologies.
To learn more about SRM’s expertise in technology, payments and strategic sourcing, contact Colorado Representative Phillip Foster at pfoster@srmcorp.com or (303) 588-1484.