
In this blog, we’re going to talk about some of the uses of AI in risk management. With ever-evolving threats, incredible amounts of data to handle, and ever-increasing legal and compliance complexity around that data, AI is an important part of the risk and threat mitigation toolkit for the modern enterprise.
More Confidence in Your Contracts
Contracts are one of the pillars of business. Unfortunately for large enterprises, managing contracts becomes a significant burden on time and human capital. Not only does a larger company deal with a volume of contracts that requires a sizable internal legal department, but those contracts are individually often incredibly long and wide in scope.
Mistakes made in the negotiation of a contract can cost a company millions, or in the worst case, the business itself. This is where AI can assist.
Any good lawyer will tell you they’re not ready to leave important documents like this to AI any more than they would to monkeys with typewriters, and they’re absolutely right. But that’s not what risk management is about- it’s about supplementing human review and minimizing error. We’re not using AI to write documents or make arguments.
For contracts and other binding legal documents, you can make use of AI to:
- Identify vague language
- Compare key points to your internal legal document frameworks
- Find omitted areas, using other documents as a framework
- Interpret individual revisions/redlines more easily among substantial changes
- Assess risks and oversights
- Highlight clauses similar to those at issue in past legal cases, which may need to be changed
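To make the first bullet concrete, here is a deliberately naive sketch: a hard-coded phrase list and simple string matching. A real system would use an NLP model or an LLM rather than keyword lookup, and the phrase list and function name below are invented for illustration:

```python
# Naive illustration only: real contract review would use an NLP model or
# LLM, not string matching. The phrase list is an invented sample, not a
# legal standard.
VAGUE_PHRASES = [
    "reasonable efforts",
    "as appropriate",
    "from time to time",
    "in a timely manner",
    "sole discretion",
]

def flag_vague_language(clause: str) -> list[str]:
    """Return the vague phrases found in a single contract clause."""
    text = clause.lower()
    return [phrase for phrase in VAGUE_PHRASES if phrase in text]

clause = "Vendor shall use reasonable efforts to patch defects in a timely manner."
print(flag_vague_language(clause))  # ['reasonable efforts', 'in a timely manner']
```

Even this toy version shows the shape of the workflow: the tool surfaces candidate clauses, and a lawyer decides what actually needs tightening.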
Overall, it’s likely that you’re not going to be able to slim down your legal department, at least not yet. The real power of AI with contracts is in risk management, where it can assist your legal team. The more eyes on a document, the more likely oversights are to be spotted- so why not add a pair of robot eyes as well? Yes, AI is famous for legal mistakes like hallucinating case law, but if you use it as another layer of checks- flagging things you may have missed so a human is prompted to investigate- rather than on its own, this risk is minimal.
Physical threat assessment
AI has been used for threat assessment since its inception. You might have even seen it make news for how poorly it worked in certain situations, or how governments use it to identify dissenters. Consider this- demand for facial recognition in threat assessment and even law enforcement is so high that some localities knowingly deployed poor early systems with failure rates of 80% or worse. London’s Metropolitan Police ran a system with a 98% failure rate in 2018! Facial recognition has led to wrongful scrutiny and even wrongful arrests in the US that took months to fight. Worse, many more of these cases will come to light in the years ahead, as many police departments have not been straightforward about using facial recognition as a determining factor- sometimes the sole factor- in these cases.
This is important to talk about because it will affect the perception of AI threat assessment for years to come. Bad training and immature early products, along with the association with despotic governments, have already colored the perception of facial recognition. Remember that 98% failure rate? That was back in 2018, and with the rapid advancement of AI, that might as well be a lifetime ago- those systems have since improved significantly. These products are getting better, and it’s all about how they are used and how the users are trained.
How does this relate to a private company? Consider the famous case back in 2022 of the lawyer who was escorted out of Radio City Music Hall based on facial recognition. The attorney in question was removed for being involved with litigation against MSG Entertainment, the venue’s owner. Here’s where it gets interesting- she wasn’t involved in the case directly. Instead, she simply worked for the law firm that was representing the other party in the case.
Perhaps you might think MSG Entertainment’s policy was a step too far, and that we shouldn’t be barring people from businesses because they work for a competitor or litigant, but that’s not what’s important here. What’s important is how well the AI worked to mitigate risk according to the company’s policy. The facial data could have come from anywhere- perhaps it was already part of a dataset they had purchased access to and they simply fed the name to the AI, or perhaps they pulled it from the law firm’s leadership page or a LinkedIn profile. There are plenty of possible sources, but all they had to do was feed a name and face to their system, and security was alerted when this person appeared in an incredibly crowded venue.
This will, without a doubt, be used widely. It has been common practice for years to ban people from private businesses- sometimes from a single location, sometimes from an entire chain. Businesses generally have the right to refuse entry, and returning after being asked not to can lead to an arrest for trespassing. Walmart and other department stores are famous for this practice, but they have usually restricted bans to a single location for most offenders to keep their security’s “be on the lookout” list small. Actually enforcing a ban is hit-or-miss, depending on the vigilance of employees and security- a banned person is unlikely to even be noticed unless someone who works for the business says, “Hey, I know that guy!”
This is all changing thanks to AI. A company like Walmart will be able to issue and enforce nationwide bans on shoplifters, vandals, and people who disturb the peace. Even if your company isn’t a department store, if you have larger offices, multiple locations, or anything publicly accessible, this is worth considering. You can protect your property and employees against the former employee who has made threats, or the laid-off IT worker who shouldn’t be near any equipment any longer. For a secure location, you can use AI to ensure anyone who’s not on the whitelist is removed, adding another layer of security beyond keycards and door locks. As for the horror stories of misuse discussed above, your business isn’t a police department, and your decisions won’t be directly responsible for landing people in jail. In any case, it’s about training, and about verifying what the AI warns you about instead of accepting it as the final answer.
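As a sketch of that whitelist idea: face recognition systems typically reduce each face to an embedding (a numeric vector), then compare it against stored vectors for approved people. Everything below is invented for illustration- the names, vectors, and threshold- and real embeddings have hundreds of dimensions, produced by a trained model:

```python
import math

# Invented example data: in practice these vectors come from a face
# recognition model, and are much longer than three dimensions.
WHITELIST = {
    "alice": [0.12, 0.85, 0.51],
    "bob":   [0.90, 0.10, 0.42],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_whitelisted(embedding: list[float], threshold: float = 0.95) -> bool:
    """True if the observed face matches any approved person closely enough."""
    return any(cosine_similarity(embedding, ref) >= threshold
               for ref in WHITELIST.values())
```

The threshold is the policy knob: set it too loose and strangers match employees; too strict and employees get flagged- which is exactly why a human should verify every alert rather than treat the match as final.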
Better risk mitigation by making better data-driven decisions
All modern businesses rely on data. In the pre-AI ‘big data’ age, the average enterprise was hoovering up data without much of a plan for what to do with it- vast amounts of possibly useful data points drowned in a sea of noise. The strategy, in many cases, was to collect now and figure it out later.
It’s finally ‘later’.
AI has been helping companies make sense of and draw new insights from years of data in ways humans can’t. It might help an advertiser identify a market trend, or help a popular app design an interface optimized for engagement. Its uses for customer data are obvious, but don’t discount its use for risk assessment. Pulling as much as you can from your mountain of data can help your organization identify:
- Products or batches with abnormally high failure rates caused by a defect, surfaced from social media posts
- Trouble ahead in your vertical or the entire market
- A trend in negative customer sentiment across the internet
- Potential PR problems before they trend
- Breaches in progress, i.e., intrusion detection
- New attack vectors and security holes, i.e., intrusion prevention
- Merchandise at risk of sitting in a warehouse, via better demand predictions
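A minimal sketch of the first item in the list above: flag batches whose failure rate sits well above the rest. The batch IDs, rates, and z-score threshold are all invented, and a real system would work from far larger samples and richer signals than a single number per batch:

```python
import statistics

# Invented sample data: reported failure rates per product batch, e.g.
# aggregated from support tickets and social media complaints.
failure_rates = {
    "batch-01": 0.010,
    "batch-02": 0.012,
    "batch-03": 0.009,
    "batch-04": 0.011,
    "batch-05": 0.048,
}

def flag_anomalies(rates: dict, z_threshold: float = 1.5) -> list[str]:
    """Flag batches whose rate is well above the mean (a simple z-score)."""
    mean = statistics.mean(rates.values())
    stdev = statistics.stdev(rates.values())
    return [batch for batch, rate in rates.items()
            if (rate - mean) / stdev > z_threshold]

print(flag_anomalies(failure_rates))  # ['batch-05']
```

Note that with only five data points the outlier itself inflates the standard deviation, which is why the threshold here is modest; at scale, robust statistics (median-based measures) or a learned model handle this better.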
AI and fraud prevention
For many businesses, especially those that accept payments online, preventing fraud is critically important. There are many services you can employ to help prevent fraud, and AI will be a big part of fraud detection going forward. Whether you use it directly or through a third party you employ for fraud prevention, it’s likely your business will be sniffing out fraudsters using AI in the near future- if not already!
The banking and payment processing industry has used it since before the modern AI models. Machine learning has long helped build a profile of each customer and identify activity that doesn’t match it, and modern models have made this better. That not only detects more fraud and identity theft, but also reduces the false positives that lead customers to ignore fraud warnings or sign off on them without looking carefully. It’s also in use to help streamline and automate investigations of fraud claims and chargebacks.
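The per-customer profile idea can be sketched in a few lines. Here ‘normal’ is just the mean and spread of past transaction amounts, with anything far outside flagged for review. The numbers are invented, and real systems score many more signals (merchant, location, device, timing) with learned models rather than a single z-score:

```python
import statistics

def is_suspicious(history: list[float], amount: float,
                  z_threshold: float = 3.0) -> bool:
    """Flag a transaction that sits far outside this customer's profile."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (amount - mean) / stdev > z_threshold

# Invented transaction history for one customer's card.
history = [24.0, 31.5, 19.9, 42.0, 27.3, 35.8]

print(is_suspicious(history, 29.0))   # False: in line with the profile
print(is_suspicious(history, 950.0))  # True: flag for manual review
```

The payoff mentioned above- fewer false positives- comes from the profile being per-customer: a $950 charge is routine for some cards and a red flag for others.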
Understanding how AI is used in your field’s risk management is essential
Whatever your business vertical, understanding the ever-changing use of AI in your particular field will be important. AI will almost certainly remain a large part of the enterprise even after this ‘bubble’ deflates, and avoiding it will be impossible, both as a customer and as a business. New threats will emerge to counter the better threat assessment and risk management AI provides, and your business needs to be up to the task. Not using AI as a risk management tool will be, in itself, a risk.
These systems learn from every corrective action, which is why it’s important to get in early. There is, of course, no guarantee that the system you use now will be as good as, or forward-compatible with, whatever vastly improved system comes along in the future. But the longer you spend building your system through tweaks and feedback, the sooner you’ll have accurate results and accumulated organizational knowledge.