Financial institutions have for years collected data on customers’ habits and routines, and used the information to pinpoint cards that may have been compromised. That’s the reason your card may be declined if you make an unusually large purchase, shop at a store for the first time, or buy gas in a place far from home. The system is essentially deciding whether that behavior seems normal according to a set of predetermined rules, then accepting or declining the purchase based on its decision.
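A rules-based check of the kind described above can be sketched roughly as follows. Every name and threshold here is invented for illustration, not any issuer’s actual logic; the point is that each rule is a hard cutoff, so a single trigger declines the card.

```python
# Hypothetical sketch of a rules-based fraud check: fixed thresholds,
# and any single rule firing declines the transaction outright.

def rules_based_decision(amount, merchant_seen_before, miles_from_home):
    """Return 'decline' if any predetermined rule fires, else 'accept'."""
    if amount > 1000:              # unusually large purchase
        return "decline"
    if not merchant_seen_before:   # first visit to this store
        return "decline"
    if miles_from_home > 500:      # buying gas far from home
        return "decline"
    return "accept"

# A legitimate traveler trips the distance rule -- a false positive,
# the problem Marlin describes with rules-based systems.
print(rules_based_decision(40.0, True, 800))   # -> decline
print(rules_based_decision(40.0, True, 5))     # -> accept
```

Because the rules cannot weigh context against each other, perfectly ordinary behavior, like buying gas on a road trip, gets flagged just as readily as real fraud.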
“The old method was using tests and thresholds and other sorts of rules. With a rules-based approach, you get a tremendous amount of false positives,” said Todd Marlin, a principal in Ernst & Young’s forensic technology and discovery services practice.
“What we’re seeing is sort of a gold rush into artificial intelligence,” Marlin said.
Machine learning can help fraud-detection systems become smarter about what fraud actually looks like, both across the network and on an individual level. For example, the system might detect that you haven’t shopped at a particular merchant in the past, but still accept the purchase because customers with a similar spending history shop there often. Or perhaps you travel to a certain state or country often enough that the system learns purchases there are likely to be legitimate.
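The “customers with a similar spending history shop there often” idea can be sketched in miniature. This is an illustrative toy, not any network’s actual model: the customer profiles, the similarity measure, and the thresholds are all made up.

```python
# Toy sketch of a similarity-based decision: rather than a hard
# "never shopped here before" rule, compare the cardholder's spending
# profile to other customers and accept a first-time merchant if
# enough similar customers use it.

def similarity(profile_a, profile_b):
    """Crude similarity: fraction of spending categories two customers share."""
    shared = profile_a & profile_b
    return len(shared) / max(len(profile_a | profile_b), 1)

def ml_style_decision(customer, merchant, all_customers, threshold=0.5):
    """Accept a first-time merchant if similar customers often shop there."""
    similar = [c for c in all_customers
               if similarity(customer["categories"], c["categories"]) >= threshold]
    if not similar:
        return "decline"
    uses = sum(1 for c in similar if merchant in c["merchants"])
    return "accept" if uses / len(similar) >= 0.5 else "decline"

customer = {"categories": {"travel", "dining", "gas"}, "merchants": set()}
peers = [
    {"categories": {"travel", "dining"}, "merchants": {"AirportShop"}},
    {"categories": {"travel", "gas"},    "merchants": {"AirportShop"}},
]
# First visit to this merchant, but similar customers shop there often.
print(ml_style_decision(customer, "AirportShop", peers))  # -> accept
```

The same transaction that a hard rule would decline is accepted here because the decision is made relative to comparable customers rather than against a fixed cutoff.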
Citing a survey from Javelin Advisory Services, MasterCard estimates that $118 billion in U.S. sales were declined in 2014 because of falsely identified fraud, far more than the $9 billion lost to actual fraud. That’s a large sum of money that retailers, and card networks like MasterCard, could be pocketing.
MasterCard’s new Decision Intelligence software pulls in data, sometimes hundreds of pieces of data, about a specific transaction at the moment a customer swipes his or her credit card. The system then combines all of that information to yield a score indicating how likely the transaction is to be fraudulent. Each score builds on the one before it and informs the one after it such that the computer’s algorithm gets better at detecting fraud without a programmer having to engineer every change. The company called it “the first use of AI being implemented on a global scale directly on the MasterCard network.”
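The two ingredients described above, combining many signals into one score and letting each outcome inform the next, can be illustrated with a toy online-learning loop. The features, weights, and learning rate below are invented; MasterCard’s Decision Intelligence is far more sophisticated, and this is only a sketch of the general technique.

```python
import math

# Toy version of the scoring idea: weighted transaction signals are
# squashed into a 0-1 fraud score, and the weights are nudged after
# each labeled outcome so the next score benefits from the last,
# with no programmer re-engineering the rules.

def fraud_score(weights, features):
    """Combine weighted signals into a 0-1 score via the logistic function."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def update(weights, features, label, lr=0.1):
    """One online-learning step: shift weights toward the observed outcome."""
    error = fraud_score(weights, features) - label   # label: 1 = fraud
    return [w - lr * error * x for w, x in zip(weights, features)]

weights = [0.0, 0.0, 0.0]
txn = [1.0, 0.2, 0.8]   # e.g. new-merchant flag, amount ratio, distance signal
before = fraud_score(weights, txn)
weights = update(weights, txn, label=1)   # transaction turned out fraudulent
after = fraud_score(weights, txn)
print(before < after)   # -> True: a similar transaction now scores riskier
```

Each confirmed outcome adjusts the weights slightly, which is the sense in which “each score builds on the one before it.”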
“For this to work we need all kinds of data coming in,” said Ajay Bhalla, the president of enterprise risk and security at MasterCard. “We’ve been working on a strategy of getting more data points into the network.
“We see this as a critical component of our future strategy,” Bhalla added. “We are embedding AI in everything we’re looking at as a company.”
For the last decade, Visa has deployed its Visa Advanced Authorization system to detect fraud. The volume of data and the speed at which it’s processed have increased considerably over that time, said Mark Nelsen, the company’s senior vice president of risk products and business intelligence. Visa parses those large sets of data to discern what characteristics distinguish legitimate from fraudulent spending, and then uses those characteristics to assess the legitimacy of future purchases.
“We can look at that transaction and, knowing what we’ve seen in the past, we can predict if this is going to be good or bad,” Nelsen said.
Digital payment platforms are also embracing machine learning, though fraud remains more difficult to detect in online or mobile commerce. PayPal has developed its own artificial intelligence software to combat fraud, allowing the company to move from a system that can analyze tens of data points to one that can analyze thousands. “There’s a magnitude of difference — you’ll be able to analyze a lot more information and identify patterns that are a lot more sophisticated,” PayPal’s senior director of global risk and data sciences, Hui Wang, told American Banker.
But Visa, MasterCard and PayPal have the financial heft and risk appetite to explore artificial intelligence in ways that smaller and more regional players in the financial services industry do not, said Celent analyst Arin Ray. While most in the industry think AI holds promise, there remain questions about how it will affect regulatory compliance, and that makes some hesitant to embrace the technology, according to a report Ray co-authored in August.
“The concept of AI has been there for 50 years, but it really has had a rebirth of sorts in the last three or four years. The volume of data has grown exponentially in recent times, so there is definitely the need for tools to [analyze it],” Ray said in an interview.
“One [concern] is how this technology will impact their existing operations. Since it’s still unproven technology, they don’t want to take that risk if they have to make major changes to their existing infrastructure,” he said.