Explainable AI is gaining momentum among VCs and startups

Shaan
8 min read · Oct 24, 2020

The rise of responsible and accountable AI is apparent with the increasing adoption of machine learning models across industries. Many AI applications are stochastic and “black box” in their decision-making, meaning they offer little, if any, discernible insight into how they arrived at an outcome.

An inability to rationalize decisions is acceptable when the impact of an AI decision is relatively trivial, such as a recommendation on Spotify. But as AI expands into areas that affect people’s lives, like medical diagnosis, military applications, autonomous vehicles, recruitment decisions, and policing, it becomes increasingly crucial that AI can explain why it reached a particular decision.

Source: Explainable Artificial Intelligence — Demystifying the Hype by Dipanjan Sarkar

This picture illustrates that machines are not cognitive. Machine learning models have produced undesirable results with serious and costly implications for the companies deploying them. Amazon, for example, recently took down a resume-screening algorithm that was biased against women. Imagine a credit-decision system shown to approve loans only for Caucasians, or a diagnostic AI that discriminates based on race, color, religion, or genome; the ramifications could be serious and life-altering.

Complex machine learning models can be incomprehensible to end-users who are not experts in them. And when the stakes are high, trust becomes a desideratum for organizations, regulators, governments, and customers. While a model may achieve high accuracy on held-out test sets, users may still raise concerns about fairness, bias, and transparency.

Can the need for accuracy outweigh explainability?

As model accuracy increases, so does model complexity, often at the cost of interpretability. Models with fewer parameters, such as linear regression, logistic regression, or a decision tree of modest size, are easier to interpret, while complex models like deep neural networks remain black boxes. It is easy to build a case for weighting model accuracy over interpretability, as the high cost of an error is an incentive for higher performance. For example, IBM’s Watson successfully diagnosed a rare form of leukemia that oncologists had misdiagnosed. But imagine if Watson had been wrong: a patient would have been set on an expensive, time-consuming, and potentially harmful course of treatment.
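
To make the trade-off concrete, here is a minimal sketch, assuming scikit-learn and a toy dataset chosen purely for illustration: an interpretable logistic regression and a more complex gradient-boosted ensemble are trained on the same data. The linear model’s coefficients can be read directly as feature effects, while the ensemble offers no comparably simple readout, whatever its accuracy on this particular dataset.

```python
# Illustrative only: the dataset, models, and split are assumptions, not a benchmark.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Interpretable model: scaled logistic regression with readable coefficients.
simple = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
simple.fit(X_train, y_train)

# More complex model: a gradient-boosted ensemble with no direct coefficient readout.
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", round(simple.score(X_test, y_test), 3))
print("gradient boosting accuracy:  ", round(complex_model.score(X_test, y_test), 3))

# The linear model's largest coefficients can be read directly as feature effects.
coefs = simple.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:5]
for name, coef in top:
    print(f"{name:25s} coefficient = {coef:+.2f}")
```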

For machine learning algorithms, accuracy is paramount, but that accuracy is only as good as the data set on which the model was trained. Data sets often carry inherent bias, such as over-representing images of a particular race. Models also face “concept drift”, where the decision boundary learned at training time diverges from the patterns present in new data. Explainable AI can help identify bias and drift in a few steps, giving data scientists the insight needed to improve datasets and debug model performance.
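
One simple way such drift can surface through explainability signals is by tracking how much the model relies on each feature over time. The sketch below is an assumption-laden illustration, not a production drift monitor: it uses scikit-learn only, simulates “new” data by perturbing one feature, and compares permutation feature importances on the original validation data against that newer batch.

```python
# Illustrative drift check via feature-importance comparison (all choices are assumptions).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=6, n_informative=4, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Simulate drift: in the "new" batch, the first feature's distribution has shifted.
X_new, y_new = X_valid.copy(), y_valid.copy()
X_new[:, 0] += 3.0

old_imp = permutation_importance(model, X_valid, y_valid, n_repeats=10, random_state=0)
new_imp = permutation_importance(model, X_new, y_new, n_repeats=10, random_state=0)

# A large change in how much the model relies on a feature is a signal worth
# investigating before accuracy visibly degrades.
for i, (before, after) in enumerate(zip(old_imp.importances_mean, new_imp.importances_mean)):
    flag = "  <-- inspect" if abs(before - after) > 0.05 else ""
    print(f"feature {i}: importance {before:.3f} -> {after:.3f}{flag}")
```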

Source: Explainable Artificial Intelligence — Demystifying the Hype by Dipanjan Sarkar

What is explainability? Turning a black box into a glass box

Explainable AI (XAI) applications help capture how machine learning models arrive at their predictions, providing actionable intelligence that is auditable and interpretable for human users. With XAI, we can receive a score explaining how much each factor contributed to a model’s prediction, sample predictions from trained models, and compare them against ground-truth labels. We can also investigate model performance across ranges of variables in the dataset, across optimization strategies, and even under manipulations of individual data-point values.
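
As a minimal sketch of those per-factor contribution scores, the snippet below uses the open-source shap package with a scikit-learn random forest on a toy dataset; the model and data are illustrative assumptions, not any specific vendor’s platform.

```python
# Illustrative per-prediction contribution scores with SHAP (pip install shap scikit-learn).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
X, y = data.data, data.target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into additive per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # contributions for a single sample

print("baseline (expected value):", explainer.expected_value)
for name, contribution in zip(data.feature_names, shap_values[0]):
    print(f"{name:8s} contributed {contribution:+.2f} to this prediction")
```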

So, how do we explain a machine learning model?

In simple terms, a layer of interpretability is added on top of a complex machine learning model, such as a random forest, to generate both a prediction and an explanation. Various methods for generating explanations exist, ranging from classic black-box analysis approaches to the latest methods designed for deep neural networks. Available techniques include sensitivity analysis; Local Interpretable Model-Agnostic Explanations (LIME); Shapley Additive Explanations (SHAP); tree interpreters; integrated gradients; DeepLIFT; activation maximization (AM); model evaluation; continuous evaluation; and model transparency.
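
For instance, a minimal LIME sketch might look like the following: it wraps a scikit-learn random forest and asks which feature ranges pushed a single prediction toward the positive class. The dataset, model, and parameters are illustrative assumptions.

```python
# Illustrative LIME example (pip install lime scikit-learn); all choices are assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# The "black box": a random forest classifier.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The interpretability layer: LIME fits a simple local surrogate around one instance.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which feature ranges pushed it toward the positive class?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature_rule, weight in explanation.as_list():
    print(f"{feature_rule:40s} weight = {weight:+.3f}")
```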

Advantages of Explainability

XAI helps trace, understand, and reason about black-box predictions. It exposes a model’s weaknesses so its performance can be maximized, helping build high-quality models that not only clear test-accuracy hurdles but also generalize and perform well in the real world. Explainability also verifies predictions, helping fine-tune models and surface new insights into the problem at hand. The ability to explain AI not only builds confidence among users and regulatory authorities but also improves customer retention and the management of business variables.

How large is the market for Explainable AI?

AI has large potential to contribute to global economic activity: according to PwC, it could deliver over $15 trillion in additional economic output by 2030. As explainability becomes an increasingly vital feature for organizations implementing AI models, its total addressable market is expected to grow substantially. Gartner projected that “30% of government and large enterprise contracts will require XAI solutions by 2025.”

Use cases for explainable AI are far-reaching, encompassing any instance that impacts people’s lives and could be corrupted by bias. Growth in explainable AI is expected to come predominantly from industries that AI is envisaged to revolutionize, including banking, health care, manufacturing, insurance, digital government, smart transportation infrastructure, and autonomous vehicles. In these industries, customers and regulators place a high value on trust and transparency and demand accountability.

VCs and Startups in explainable AI

The third wave of AI investing is in its sunshine-and-rainbows phase, with venture funding in AI companies reaching a mind-blowing $61 billion from 2010 through the first quarter of 2020. Venture investors pumped $12.6 billion into AI and ML companies in Q2 2020 alone. Currently, VCs’ investment strategy appears more focused on mundane applications of AI aimed at transforming existing markets, from retail to manufacturing, than on moonshot projects. According to John Frankel, partner at ff Venture Capital, VCs are shifting toward vertical applications of AI, in which startups take technologies that already exist and build enterprise-specific applications.

With emerging XAI startups, explainable AI has gained reasonable traction among VCs, with a few notable investments in the past few years. Venture capital firms such as UL Ventures, Intel Capital, Lightspeed, and Greylock, among others, have been actively investing in explainable AI.

XAI Startup Landscape: explainable AI companies can be segmented into core AI companies offering XAI as an added capability, such as Kyndi, DarwinAI, and SparkCognition, and purpose-built XAI companies such as Fiddler Labs.

XAI Startups Landscape

Kyndi

Kyndi offers auditable AI solutions to the government, financial services, and life sciences verticals. The Kyndi platform enables organizations to analyze long-form unstructured text in a smarter, faster, and more explainable way. The company has raised $28.5 million in total funding; its latest round was a $20 million Series B led by Intel Capital, with participation from UL Ventures and PivotNorth Capital.

Fiddler Labs

Founded in 2018, Fiddler Labs offers an explainable AI engine that enables data science, product, and business users to understand, analyze, validate, and manage their AI solutions, providing transparent and reliable experiences to their end-users. The company has raised $13.2 million in total funding; its latest round was a $10.2 million Series A led by Lightspeed Venture Partners and Lux Capital, with participation from Haystack Ventures, Bloomberg Beta, the Amazon Alexa Fund, and Lockheed Martin Ventures.

Pachyderm

Pachyderm is an enterprise-grade, open-source data science platform that makes explainable, repeatable, and scalable AI possible. Having closed $28.1 million in total funding, the company recently raised $6 million in Series B financing from M12, Decibel Ventures, Benchmark, Y Combinator, and others.

DataRobot

DataRobot delivers AI technology and ROI-enablement services, along with interpretation techniques, to global enterprises. The company serves the finance, healthcare, sports, retail, marketing, and agriculture sectors. DataRobot has raised $400 million in total funding, including a $206 million Series E led by Sapphire Ventures at a valuation above $1 billion.

KenSci

KenSci provides a risk-prediction platform for healthcare based on explainable machine learning models. KenSci’s platform is engineered to ingest, transform, and integrate healthcare data across clinical, claims, and patient-generated sources. The company has raised $30.5 million in total funding, closing a $22 million Series B led by Polaris Partners and followed by Ignition Partners, Osage University Partners, Mindset Ventures, and UL Ventures.

Sisu Data

Sisu Data is an operational analytics platform that monitors business KPIs and explains changes by surfacing the most relevant data points for the organization. Sisu Data has raised $66.7 million in total funding; the company recently raised $52.5 million in a Series B round from NEA, Andreessen Horowitz, the a16z Cultural Leadership Fund, and Green Bay Ventures.

Factmata

London-based Factmata is developing contextual-understanding algorithms and applying explainability to media and news analysis, tackling the issue of fake news. The company has raised $3 million from eyeo GmbH and angels including Mark Cuban and Lawrence Braitman.

DarwinAI

DarwinAI develops “Generative Synthesis,” which helps developers understand and interpret their models. Its explainability toolkit performs network-performance diagnostics, aiding network debugging, design improvement, and regulatory compliance. The company has raised $3 million from Obvious Ventures, iNovia Capital, and angels from the Creative Destruction Lab in Toronto.

Truera

Truera built a Model Intelligence Platform powered by enterprise-class AI explainability for companies to analyze, improve, and build trust in their machine learning models. Truera raised $5.1 million from Greylock, Wing VC, Conversion Capital, and Aaref Hilaly.

Imandra

Imandra is a cloud-native company that provides an automated reasoning system designed to bring governance to critical algorithms. It offers “Reasoning as a Service” to the financial sector, robotics, autonomous vehicles, and reinforcement learning applications. The company has raised $7.6 million from LiveOak Venture Partners, AlbionVC, Anthemis Group, and IQ Capital.

Kubit AI

Kubit offers analytics software for product teams that reveals how large populations of customers interact with e-commerce and mobile applications, automatically discovering the root causes of anomalies and delivering clear answers. The company has raised $4.5 million from Shasta Ventures.

Genie AI

Genie AI is a legal tech startup developing an intelligent contract editor. The company raised $2.5 million in seed funding from Connect Ventures and Entrepreneur First.

Craft AI

Craft AI has developed plug-and-play APIs that let product and operational teams quickly deploy and run explainable AI.

Untangle AI

Untangle AI is building developer tools for model auditing. The company is seed-funded by Entrepreneur First.

Conclusion

Explainable AI is not without challenges. Its effectiveness depends on the underlying model: an under- or overfitted model will yield misleading interpretations of true feature effects and importance scores, because the model does not match the underlying data. Prediction models should be measured not only on accuracy but also on comprehensibility, succinctness, actionability, reusability, and completeness. Given that these are still early days for AI explainability and bias detection, the segment is ripe for development as more and more organizations want their AI technology to build confidence, align with ethical norms, and operate within publicly acceptable boundaries.

Sources:

https://www.pwc.co.uk/audit-assurance/assets/pdf/explainable-artificial-intelligence-xai.pdf

https://pitchbook.com/newsletter/pandemic-pushes-automation-as-ai-ml-investment-tops-12b-in-q2-UnS

https://insidebigdata.com/2020/07/28/artificial-intelligence-startups-raised-61-6bn-in-total-funding-a-35-jump-in-a-year/

https://www.forbes.com/sites/cognitiveworld/2020/08/09/the-changing-venture-capital-investment-climate-for-ai/#69f8cae5765b

https://www.crn.com/slide-shows/virtualization/gartner-s-top-10-technology-trends-for-2020-that-will-shape-the-future/8
