Truera - ML That You Can Trust
Jake Flomenberg | August 3, 2020
The future of software is powered by Machine Learning (ML).
A new breed of intelligent applications is on the rise. These applications use data and machine learning to minimize data entry, generate real-time predictions, and improve over time. Intelligent applications are becoming the norm; incumbents will adapt or be disrupted. At the same time, many business processes involving decisions powered by more traditional algorithms are now transitioning to ML-based approaches.
As machine learning becomes a core part of software development, numerous solutions have emerged for model building, management, and deployment from companies ranging from startups to public cloud vendors.
However, something is missing from these solutions: model explainability. There are many situations in which the score produced by a model is simply not sufficient, even when high accuracy is achieved. ML models are also far from perfect, and the cost of an error can be quite high, whether economically in the case of financial models or even in terms of human life in the case of medical models. Data scientists are constantly trying to improve the quality of their models, but this task is difficult without tools to measure model quality. There is also a need for explainability to support regulatory frameworks and bias detection. The type of explainability required to create understanding and trust does not consist of lightweight, highly localized explanations; it requires a much deeper and more fundamental causal analysis. Ultimately, trust is required for users to be willing to adopt and rely on intelligent applications for significant decisions.
Intelligent application providers and back-office systems with ML-powered decisions will increasingly need to “show their work” and explain why a model produced the output it did. Unfortunately, most modern machine learning techniques are referred to as “black box” models, a term implying that it is not possible to gain insight into the inner workings of these algorithms. This is incorrect. While it is certainly challenging, it is possible.
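To make the point concrete, here is a minimal sketch of one simple, generic black-box attribution technique: permutation importance, which probes a model purely through its predictions. This is an illustration only, not Truera's method; the toy `predict` function and dataset are invented for the example.

```python
import random

# A toy "black box": we can only call predict(), not inspect its internals.
# (Illustrative stand-in; in practice this would be a trained model.)
def predict(row):
    x1, x2, x3 = row
    return 3.0 * x1 + 0.5 * x2  # x3 is silently ignored by the model

def mse(rows, targets):
    """Mean squared error of the black box on a dataset."""
    return sum((predict(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, seed=0):
    """For each feature, shuffle its column and measure how much the
    error grows. A large increase means the model relies on that feature."""
    rng = random.Random(seed)
    base = mse(rows, targets)
    importances = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        rng.shuffle(col)  # break the feature-target relationship
        permuted = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, col)]
        importances.append(mse(permuted, targets) - base)
    return importances

rng = random.Random(1)
rows = [tuple(rng.uniform(-1, 1) for _ in range(3)) for _ in range(200)]
targets = [predict(r) for r in rows]  # perfect labels, for clarity
imp = permutation_importance(rows, targets)
# imp[0] dominates, imp[2] is ~0: the ignored feature is exposed
```

Techniques like this only scratch the surface: they give local, correlational signals, whereas the deeper causal analysis described above is what builds real trust.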
When I first met the Truera team, I was immediately struck by their depth of thought around machine learning quality and explainability. Professor Anupam Datta, along with one of his top students, Shayak Sen, had spent years together at Carnegie Mellon developing novel explainability techniques. Their research is foundational: most of what is available in open source in this domain draws in one way or another on this body of work.
In our first meeting together they indicated that they had a solution for this black box ML explainability problem. Furthermore, they indicated that explainability should not be thought of as a discrete step in the ML lifecycle. Rather, they posited that explainability is a missing primitive that is part of every step of the ML lifecycle.
With the help of their third co-founder and CEO Will Uppington, who has experience bringing ML-based systems to market, they went on to explain what their model intelligence software could do and how that mapped to the various stages of the ML lifecycle. From a technology perspective, Truera would be capable of exposing causal drivers of model predictions for all popular model types, surfacing trends relevant to addressing model quality challenges like unfair bias and instability, and even providing confidence bounds on model performance. This, in turn, would enable them in the fullness of time to support AI development use cases including analytically driven model development and improvement, model review and validation, machine learning monitoring, and even data set selection. By the end of the meeting, I was convinced that this was the right team with the right technology to build an enterprise-quality explainability product to help companies bring ML into production with confidence and trust.
All of us at Wing are thrilled to partner with the team at Truera alongside Greylock and Conversion Capital. Wing invested behind their vision to improve model quality, address unfair bias and build trust.
Since we partnered with them a year ago, they have gone on to build an initial product that can meet the needs of customers like Standard Chartered, whose data science organization has a sizable ML team and demanding requirements.
We are excited to publicly welcome Truera to the portfolio as they launch their Model Intelligence platform and announce their seed financing. See Venturebeat coverage here or sign up for a demo here.