Publications

Your AI Is Scary: Explainability and Interpretability in Artificial Intelligence

February 10, 2022 - The Legal Intelligencer

“The oldest and strongest emotion is fear, and the oldest and strongest kind of fear is fear of the unknown.” —H.P. Lovecraft

In science fiction, artificial intelligence is often portrayed as something incomprehensible to humans—including the humans who created it. Something, if not to be outright feared, then at least not to be trusted. While we ultimately believe that this "incomprehensibility" will come to fruition, as many of the prognostications of good sci-fi do, there is much that needs to be understood in the here and now. The concepts of interpretability and explainability in AI are at the forefront of discussions of AI applications in medicine, finance, and defense, to name just a few. In many cases, you will find the definitions of interpretability and explainability conflated. Frankly, for many discussions this is not a serious breach—one we will perpetuate here. However, clear and concise initial definitions are necessary to understand just how transparent we are attempting to coax that AI black box into being.

Read the article by Tod Northman and Brad Goldstein, principal at ProCrysAI, here.
