Artificial intelligence is a powerful tool – but it comes with risks – Indianapolis Business Journal

Scott Kosnoff, an Indianapolis-based partner at Faegre Drinker Biddle & Reath, co-leads the firm’s multidisciplinary algorithmic decision-making and artificial intelligence team, which the firm calls AI-X.

Scott Kosnoff, a partner in the Indianapolis office of Faegre Drinker Biddle & Reath LLP, had been advising insurers on risk for a few decades when he began reading more about big data and artificial intelligence and how they were beginning to drive decision-making, initially at tech companies but increasingly at companies of all kinds.

He immediately saw the impact AI could have in the insurance industry, where companies are in the business of collecting data and forecasting risk. But the more Kosnoff learned about the field, the more he understood that algorithms — the rules that determine how computers analyze data — carry their own risks.

So about six years ago, he decided that to protect his clients, he had to start talking to them about AI — even if they weren’t ready to hear it. And what he has learned applies to industries of all kinds.

“While AI and algorithmic decision-making are at the forefront of the insurance world, essentially the same set of issues applies to other sectors of the economy,” he told IBJ. “So, I think in the future, I probably won’t be devoted exclusively to insurance, because these AI issues are everywhere.”

Last month, Faegre Drinker announced that Kosnoff (with Washington, D.C., attorney Bennett Borden) will co-lead a multidisciplinary artificial intelligence and algorithmic decision-making team that the firm calls AI-X. IBJ spoke to Kosnoff about the team.

Why has artificial intelligence become so important in the insurance industry?

Insurance companies are in the business of predicting the future. When they take applications from prospective policyholders, they are trying to figure out who is a good risk and who is not: who is likely to live a long life, who may die prematurely, and who is most likely to avoid a car accident.

AI and algorithmic decision-making are really powerful because they allow us to look at an ocean of data points and see previously unknown connections between those data points and the outcomes we care about. So, if you’re an auto insurer trying to figure out who would be a good risk from a driving point of view, you have a wealth of data to look at, and you can start drawing links between different behaviors, different characteristics, and the probability of an accident, for example.

The same applies to life insurance or to lending – who gets a loan and at what interest rate. Who gets into college, and who gets a scholarship. Who gets a job interview, and who gets the job after the interview. The use of AI is so pervasive because it at least opens up the possibility of predicting the future more accurately than we can on our own.

What are the legal implications?

Policymakers and regulators have expressed concerns about fairness: Is the algorithm that generates these predictions of the future fair? Another concern expressed by policymakers and regulators is about interpretability and transparency. People understandably want to know how the decisions that affect their lives are made.

But the issue that has really gotten airtime, at least in the U.S., is that decision-making using AI and algorithms has the potential to perpetuate the discrimination of the past. On the surface, turning decision-making over to a machine seems like a good thing, right? Because the machine will be free of human biases and prejudices, and it will just do what it does.

But the problem… is that the data machines use does not exist in a vacuum. The data is a reflection of society and everything that has happened up to this point. So when a piece of data is used to predict the future, it may contain what we call embedded bias, a kind of reflection of past societal norms. That bias can creep into the decision-making process in ways that no one can really predict.
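The embedded-bias mechanism Kosnoff describes can be illustrated with a toy example. In this hypothetical sketch (every name, ZIP code, and number is invented for illustration), a naive model that learns from historical approval rates simply reproduces whatever pattern, fair or not, the past data contains:

```python
# Illustrative sketch (all data made up): a model trained on historical
# decisions inherits whatever pattern those decisions contain.

# Hypothetical historical loan data: (zip_code, approved). Suppose past
# lending practices approved far fewer applicants from ZIP "46201".
history = [
    ("46201", 0), ("46201", 0), ("46201", 1), ("46201", 0),
    ("46204", 1), ("46204", 1), ("46204", 0), ("46204", 1),
]

def approval_rate(zip_code):
    """A naive 'model' that just learns the historical approval rate per ZIP."""
    matches = [approved for z, approved in history if z == zip_code]
    return sum(matches) / len(matches)

print(approval_rate("46201"))  # 0.25 -- yesterday's pattern becomes today's "prediction"
print(approval_rate("46204"))  # 0.75
```

If ZIP code happens to correlate with a protected class, the model discriminates without ever seeing that class directly, which is exactly how bias creeps in unpredictably.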

How did the AI-X team get together?

When I started doing this work, our firm was still known as Faegre Baker Daniels; we hadn’t yet merged with Drinker Biddle. Only after the merger was approved did I find out that the combined firm had a data consulting subsidiary with a roster of data scientists working on exactly these kinds of things. It was one of those “you complete me” stories.

One of the partners in the law firm, Bennett Borden, is also the founder of Tritura, the firm’s data consulting group. He is wickedly smart, both a lawyer and a data scientist. He and I do a lot of this work together.

So one of your clients can come to you and not only get advice on AI but also have the team look at their algorithms and flag any issues.

That’s absolutely true. We can look at your algorithm, for example, and help you determine whether it is inadvertently discriminating on the basis of race, which is powerful and obviously very important. But of course, there are other protected classes as well, and some of them are much more difficult to test an algorithm for.
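One widely used screening test for this kind of disparity is the “four-fifths rule,” which flags a model when one group’s favorable-outcome rate falls below 80% of another’s. Here is a minimal illustrative sketch; the function name, group labels, and decisions are all hypothetical, not taken from any real insurer or dataset:

```python
# Sketch of a disparate impact check (the "four-fifths rule").
# All group names and outcomes below are made up for illustration.

def disparate_impact_ratio(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions (1 = approved).
    Returns (min_rate / max_rate, per-group approval rates)."""
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% approved
}

ratio, rates = disparate_impact_ratio(decisions)
print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")  # a ratio below 0.8 flags possible adverse impact
```

As Kosnoff notes, a test like this only works when you can reliably observe group membership; for characteristics you cannot collect, there is no denominator to compute.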

Give me an example.

Well, let’s say you’re in a jurisdiction where discrimination based on sexual orientation or preference is illegal. We don’t have a reliable way to collect information about those characteristics, so there is no good way to test for them.

If you could give one piece of advice to any company considering making the jump to using AI for any purpose, what would it be?

I think engagement with artificial intelligence will become a survival skill for most organizations. But you have to be smart about it. You have to think carefully about what could go wrong, and you have to get out in front of it; waiting until a problem appears is too late in the game. So, when you start thinking about AI, it’s important to have a risk management framework that considers not just the benefits of its intended uses but also what can go wrong. Then you need to think about the severity of the potential consequences and what you can do to try to avoid them. •