AI, financial crime, and learning

Be warned. This is a long blog post.

The question of learning is a deeply fascinating one.

What does it mean to learn? What are the processes for learning? When we are learning, what is happening to us and for us? What can we learn? What can’t we? Can some people learn more than others? What does it mean to have learning difficulties? How do we enhance learning? Is there a limit to how much we can learn?

It’s these kinds of questions that many people are trying to tackle in discrete form with artificial intelligence. “Artificial” in the sense that it doesn’t happen naturally: a system can be created and programming written that allows elements of learning to take place.

Probably the best known example of this currently is in chess, when a machine defeated a grandmaster of the game, someone deeply skilled and knowledgeable about it. Arguably, though, this wasn’t genuine AI so much as computation at massive scale. IBM’s engineers gave Deep Blue a library of openings and hand-tuned rules for evaluating positions, and the machine searched through hundreds of millions of positions per second to find the moves calculated to create a winning outcome.

Whether we’re talking AI or machine learning, what we’re discussing here is the handling of data at massive scale, optimising an outcome, and having the capability to share that with other connected systems so you end up with a vastly improved and more efficient whole. This is one of the ways Tesla has been so successful. Their self-driving cars take a hive-mind approach: if one car is involved in an accident, the engineers interrogate the evidence, rewrite that part of the software and deploy the fix to the whole fleet.

Ok. With me so far?

$2 trillion. That’s how extensive financial crime is globally. It is a staggering number. Just immense.

1%. That’s how much is stopped/caught. Just 1%.* It feels like nothing. Not even a dent.

One of the problems is that criminals carrying out financial crime in the digital age are using the same kinds of tools, AI and systems operating at scale, to coordinate their efforts. Not just hacking banks, but creating fake bank accounts, funnelling money, committing identity fraud. It’s proper scary stuff.

Banks, financial institutions and credit agencies are trying to make things secure, and they have good processes in place. But what they can’t cope with is the scale of these coordinated, multi-pronged attacks.

If Criminal 1 decides to create a fake account at Bank A, there’s a chance they’ll get caught out. But if Criminal 1 uses Fake IDs 1 to 2,000 and submits them all to Banks A to H, you’ve got thousands of permutations of how that data can be submitted and checked. There’s a high likelihood most of those applications will simply be accepted.

The human processes we have in place can’t operate at that scale. We are restricted by our individual capacity. Even with a group of people, you’re still limited in what you can achieve.

There are also limits on how the data these companies receive can be shared amongst one another. The simple answer is that it can’t easily be shared, due to GDPR. These institutions have to find other ways to collaborate to reduce the scale of the attacks they face.

This is one of those examples where AI can be used most effectively, but we’re at proper early doors with much of the collaboration happening in this space. We know the kinds of markers to look for: account activity which looks normal but isn’t, fake IDs, abnormal transactions. We also have plenty of people data: why people open bank accounts, how they use them, what kind of activity they carry out, demographic data, personal data. You can quite accurately predict the lifestyle and demographics of an individual from their online activity.

Now take that individual data and put it at scale. Hundreds of thousands of people’s data available to be monitored. You can learn so much about normal behaviour that identifying outliers becomes far more sophisticated. Banks et al. are then better able to work with law enforcement, giving them insight into what problematic behaviour looks like and how to detect it.
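
To make that a bit more concrete, here’s a minimal sketch of what that kind of outlier detection could look like, using an Isolation Forest over some invented account features. To be clear, this is an illustration of the idea rather than how any real bank does it: the feature names, the numbers and the choice of model are all assumptions on my part.

```python
# A minimal sketch of anomaly detection over account activity.
# Feature values and the model choice are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend features per account: monthly transaction count, average
# transaction value, share of transactions made at night, number of
# new payees added this month. Real systems would use far richer data.
normal = rng.normal(loc=[40, 65.0, 0.05, 1.0],
                    scale=[10, 20.0, 0.03, 0.8],
                    size=(100_000, 4))
suspicious = rng.normal(loc=[300, 900.0, 0.6, 25.0],
                        scale=[50, 200.0, 0.1, 5.0],
                        size=(200, 4))
accounts = np.vstack([normal, suspicious])

# "contamination" is a guess at the share of abnormal accounts.
model = IsolationForest(contamination=0.005, random_state=0)
labels = model.fit_predict(accounts)  # -1 means flagged as an outlier

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} accounts flagged for human review")
```

The interesting bit isn’t the model; it’s that with hundreds of thousands of accounts, “normal” becomes well enough defined that the unusual ones stand out.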

It’s all very impressive stuff, and worryingly, we’re a long way off reducing the level of financial crime we already face. The scale of financial crime is only set to increase over the coming years. If we can’t counter $2 trillion worth of financial crime today, how will we detect and stop even bigger activity in the future?

So, being the L&D-minded guy I am, I started thinking about the application of AI and machine learning to the common learning problems we face.

If AI can spot patterns of behaviour and create models of behaviour for discrete instances, how could we use that for learning scenarios?

One way, I think, is by identifying learning behaviour at work. At what point does a worker need to know something? What kind of behaviour do they engage in which prompts them to search for a solution? If they go online, what are they searching for? If they get answers, where are those answers coming from? If they’re getting answers which are immediately helpful, what’s the impact on their productivity?

I don’t think we genuinely understand what learning behaviour at work looks like. We know from surveys and interviews what people do, but we’ve not really got a way to focus on actual learning behaviour. If someone reads something, how long are they reading for? If they’re watching a video, how long do they watch for? If they’re doing multiple searches for a problem, when do they give up and try alternative options like talking to someone?
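
As a thought experiment, here’s what “actual learning behaviour data” might start to look like if we captured it: raw activity events turned into per-person features such as how many searches someone ran, how long they spent on content, and whether they gave up and asked a colleague. The event log format and column names below are entirely made up.

```python
# A hypothetical sketch: turn raw workplace activity events into
# per-person learning-behaviour features. Not a real system.
import pandas as pd

events = pd.DataFrame([
    # person, event type, seconds spent, what they were trying to solve
    {"person": "A", "event": "search",      "seconds": 15,  "topic": "pivot tables"},
    {"person": "A", "event": "read_doc",    "seconds": 240, "topic": "pivot tables"},
    {"person": "A", "event": "search",      "seconds": 20,  "topic": "pivot tables"},
    {"person": "A", "event": "ask_person",  "seconds": 600, "topic": "pivot tables"},
    {"person": "B", "event": "watch_video", "seconds": 95,  "topic": "expense policy"},
])

def count_searches(event_series):
    # How many searches before they stopped looking?
    return (event_series == "search").sum()

def asked_a_person(event_series):
    # Did they give up on self-serve and ask a colleague?
    return (event_series == "ask_person").any()

features = events.groupby("person").agg(
    searches=("event", count_searches),
    total_seconds_spent=("seconds", "sum"),
    asked_a_person=("event", asked_a_person),
)
print(features)
```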

We make a lot of assumptions about patterns of workplace learning and draw our own conclusions from them. What we lack in nearly every instance is actual, data-led evidence of that behaviour.

Once we understand those patterns of behaviour, we can start to use the same techniques to provide solutions at the point of need.
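
And a toy illustration of what a solution at the point of need could be: if someone keeps searching for the same topic without getting anywhere, nudge them towards a relevant resource. The threshold, the topic and the intranet URL here are all hypothetical.

```python
# A toy "point of need" nudge: repeated searches on one topic trigger
# a suggestion. Threshold, topics and URL are invented for illustration.
from collections import Counter

RESOURCES = {
    "pivot tables": "https://intranet.example/learning/pivot-tables",  # hypothetical
}

def suggest_resource(recent_searches, threshold=3):
    """Return a suggested resource if any topic was searched `threshold` or more times."""
    counts = Counter(recent_searches)
    for topic, n in counts.most_common():
        if n >= threshold and topic in RESOURCES:
            return f"Struggling with {topic}? Try {RESOURCES[topic]}"
    return None

print(suggest_resource(["pivot tables", "vlookup", "pivot tables", "pivot tables"]))
```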

Now let me be clear: I believe we’re a long way from being anywhere close to this kind of intervention. L&D isn’t ready for it. We’ve only just accepted virtual training as a credible method of delivering learning solutions, and there are thousands of L&D/trainer types who are heavily invested in their own models and interventions. Most couldn’t work with actual business data, let alone patterns of behaviour, or derive computational models for improving performance.

Also, I’m not talking about our companies using AI. I remember being at a conference 2 years ago with heads of L&D claiming they were using AI. They weren’t. Their companies had started introducing AI engineering into their processes, but the L&D teams themselves weren’t even close to it.

And I’m also not talking about smart “suggested content” algorithms driven by machine learning. Yes, it’s very clever to detect patterns of content consumption within a particular system, but that’s a discrete use case.

What I’d love to see is workplace learning behaviour understood at scale. Not just what it looks like for tens of thousands in a single workforce, but for hundreds of thousands of people across sectors and industries. That would be genuinely fascinating data to work with, and it would let us create targeted learning solutions for each pattern and identifiable need.

The clever thing about the crossover of data science, AI and machine learning is that many companies already have a good understanding of these things. That learning can be taken and applied to many more situations, enabling some really impressive learning solution design.

*This was quoted at a CogX panel I attended last week. I didn’t catch the reference point for it.

I could be very wrong about the examples and terminology I have used above. I am not studied in AI at all, and this piece was written as my reflection from attending CogX 2019 as a way to increase and improve my own learning. I am very happy to be corrected on anything I have stated incorrectly.

Published by

Sukh Pabial

I'm an occupational psychologist by profession and am passionate about all things learning and development, creating holistic learning solutions and using positive psychology in the workforce.
