Generative AI: Balancing security with innovation
The speed at which artificial intelligence (AI), and particularly generative AI (GenAI), is upending everyday life and entire industries is staggering. Slowing the progression of AI may be impossible, but approaching it in a thoughtful, intentional, and security-focused manner is imperative for fintech companies to mitigate potential threats and maintain customer trust while still taking advantage of its power.
AI threats to fintech companies
When I think about possible AI threats, top of mind for me is how AI can be weaponized:
- Threats to identity. Whether through deepfakes or simply more sophisticated phishing attempts, AI is making it easier to steal identities, ramping up the need for faster, more accurate authentication.
- Misinformation and manipulation of data. As AI becomes more powerful, its ability to manipulate data grows, making it harder to stem the tide of misinformation. Related risks that arise during use include hallucinations and malicious prompt engineering.
- Exploiting technology vulnerabilities. Bad actors can train AI to spot and exploit vulnerabilities in tech stacks or business systems.
While we can’t plan for every new threat that AI poses, it’s imperative to have the right AI usage guardrails in place at Discover® Financial Services and to know how to address any vulnerabilities quickly.
Our approach to securing against AI threats and ensuring Responsible AI
At Discover, we’ve established an AI Governance Council: a cross-functional team of data scientists, cybersecurity experts, audit and compliance personnel, legal representatives, technologists, and decision-makers who collaborate to set standards and establish a framework for adopting AI responsibly.
By including a wide range of participants who represent different facets of how AI is being used, unique use cases, and differing perspectives, we can create AI guardrails applicable across business units within Discover. Within the financial services sector, it’s also paramount to ensure responsible AI and adherence to regulatory guidance on model risk, so keeping our AI approach interpretable and managing bias are crucial.
At a high level, these guardrails relate to:
- Limits on access to public large language models, and a prohibition on using customer data in any public generative AI model
- A clear intake process that teams complete when they want to use public, vendor, or homegrown AI tools and models
- An established risk management framework to evaluate use cases and validate the controls that manage the relevant risks
- Continuous authentication and authorization to maintain least-privilege access in the context of each user’s entitlements
- Proper data labeling and logging to maintain confidentiality
- Human-in-the-loop validation, so that a subject matter expert reviews and approves each AI use case and confirms the output is accurate and fit for purpose
- Recording of the inputs we send to any language model, and of the outcomes, to ensure the integrity of the process (a minimal logging sketch follows this list)
- Established feedback loops so that we quickly gather and act on feedback about how the models are used
- Required training for any employee who uses AI models in their work, so their work adheres to our standards for AI trust and transparency
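To make the recording guardrail concrete, here is a minimal sketch of what prompt-and-response audit logging around a model call could look like. It is illustrative only: the stand-in model function, the `user_id`, and the `use_case_id` fields are hypothetical placeholders, not a description of Discover’s actual implementation.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Callable

AUDIT_LOG = "llm_audit.jsonl"  # append-only log; a real system would use tamper-evident storage


def audited_completion(
    model_call: Callable[[str], str],  # any function that maps a prompt to a response
    prompt: str,
    user_id: str,      # who issued the request (ties back to authentication)
    use_case_id: str,  # hypothetical ID assigned during the intake/risk-review process
) -> str:
    """Run a model call and record both the input and the outcome."""
    response = model_call(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "use_case_id": use_case_id,
        "prompt": prompt,
        "response": response,
    }
    # Hash the record so later tampering with the log entry is detectable.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
    return response


if __name__ == "__main__":
    # Stand-in model for the sketch; in practice model_call would wrap an approved, closed model.
    echo_model = lambda p: f"(stub response to: {p})"
    print(audited_completion(echo_model, "Summarize our refund policy.", "u12345", "UC-042"))
```

Hashing each entry is one simple way to make after-the-fact tampering detectable; a production system would also control access to the log itself and apply retention policies.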
As we deploy our guardrails, we also evangelize them across teams at Discover through our internal learning platform, Discover Technology Academy, as well as through events, emails, and required security training.
Managing GenAI testing and access with trusted partners
We don’t have the luxury of waiting to see how AI evolves before it affects our everyday lives. We must deal with the threats it poses in real time while capturing the competitive advantages it offers.
For us, that takes the shape of using closed language models, with AI partners we trust, to run proofs of concept and other tests that help us understand how to use GenAI in a trustworthy and transparent way. We have partnerships with large tech companies to test their AI offerings and tools in controlled, managed experiments.
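As one hedged illustration of what a controlled experiment can mean in code, a gateway can refuse any model that has not cleared the intake process. The model names and endpoints below are invented for the sketch; they are not real partner infrastructure.

```python
# Hypothetical allowlist of closed-model endpoints cleared through the intake process.
APPROVED_MODELS = {
    "partner-a-closed-llm": "https://llm.partner-a.example/v1/complete",
    "partner-b-sandbox": "https://sandbox.partner-b.example/v1/complete",
}


def resolve_endpoint(model_name: str) -> str:
    """Return the endpoint for an approved model, refusing anything off-list."""
    try:
        return APPROVED_MODELS[model_name]
    except KeyError:
        raise PermissionError(
            f"Model '{model_name}' has not been cleared for experimentation."
        )


print(resolve_endpoint("partner-a-closed-llm"))  # OK
print(resolve_endpoint("random-public-model"))   # raises PermissionError
```

Centralizing model access behind one resolver like this keeps the allowlist, and therefore the governance decision, in a single auditable place rather than scattered across teams.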
Conclusion
As the Chief Information Security Officer (CISO) at Discover, I am both excited and sober about how generative AI will change the fintech landscape in the coming years. The trust we build with our customers is our most important asset, and we don’t take it for granted. Clear guidelines for how employees can engage with AI models, along with mechanisms to enforce those guidelines, will help us enable innovation while ensuring the security of our customers, their data, and their assets.
Visit Discover Technology to learn more about Discover’s approach to security, AI, reliability, and more.
Author
Shaun currently serves as Senior Vice President and Chief Information Security Officer for Discover Financial Services. In this role, he is responsible for implementing the information security strategy, enabling the business, and securing customer data, digital assets, and payments, with a focus on supporting digital transformation.
Shaun has over 20 years of IT experience, specializing in information security and risk management. He held roles of increasing responsibility at the Department of Defense, culminating in the role of Chief Information Security Officer for the Department of Homeland Security’s US Customs and Border Protection. He was Vice President and Chief Information Security Officer at Freddie Mac, and most recently he served as Managing Director and Chief Information Security Officer at Barclays International.
He serves on the board of the Kohl Children’s Museum, is an adjunct professor at Carnegie Mellon University, and is an independent director at Valimail, a venture-backed email security company. Shaun is also a Certified Information Systems Security Professional (CISSP), a Certified Ethical Hacker (CEH), and a graduate of the Department of Defense Executive Leadership Development Program.