Hi Ryan,
Thanks for the encouraging words! You're right: Explainability is one of the biggest blockers to accelerating AI adoption in the Enterprise.
Getting natural language explanations is awesome; however, in our experience, even getting to the point of explaining models to the ML practitioners or stakeholders inside a company is a huge challenge. When I worked at Facebook on the News Feed team, Explainability became critical when ML models failed to work. It is often easy to provide explanations for the obvious cases, for example why an image was classified as a dog, but the tricky part is how well the explanations hold up when model performance is poor, because most of the time explanations are needed when someone is debugging a bad prediction from the model.
How Explainability ties into Fairness, Data Privacy, and Model Performance is also very interesting, because the reasons companies want explainability span a wide spectrum, all the way from regulatory and fairness requirements to large-scale performance analysis of ML models.
I am excited to learn more about the symbolic systems approach you're taking to make explanations user-friendly. Happy to chat sometime over coffee :)