The Whys of Transparent Artificial Intelligence (TAI)


"He who has a why to live for can bear almost any how." ― Friedrich Nietzsche


This German philosopher can get quite dramatic, but he does have a point. A strong why helps us deal with the details that the question "how?" entails. And once the how is sorted, everything else gradually falls into place. So, perhaps the best way to start our exploration of the topic of Transparent AI is to figure out the why of it. A similar approach is preached by Simon Sinek, an influential business adviser and a valuable thinker of our times: "People don't buy what you do; they buy why you do it. And what you do simply proves what you believe" (from his book, "Start with Why: How to Inspire Everyone to Take Action").

In this article, I'll explore the most noteworthy whys of TAI, which can make a strong case for pursuing this quite challenging endeavor. Be it from a research or a business perspective, TAI is a tough nut to crack, but maybe it's worth the effort. After all, even the National Institute of Standards and Technology (NIST) has produced a white paper on the topic (PDF link), while large tech companies like IBM and Google have taken a stab at it lately. But looking into the TAI topic just because someone else does so isn't a good enough reason, no matter how much of an authority figure that someone is. It's best to do your own research on the topic and draw your own conclusions. Hopefully, this article can aid you in that. To make sure we are all on the same page, here is the definition of TAI that we use in this article: Any artificial intelligence technology that provides a robust mapping between its inputs and outputs as well as an understanding of what happens in-between, ideally in a mathematical manner that doesn't require specialized AI-related knowledge to understand and follow.


Why 1 – Explainability in terms of features and meta-features

In every data science model that aims to predict something, we have clean, normalized variables, which we call features. Features are the backbone of any model, and having a set of them (especially if we vet them first so that they are more or less independent of each other) can aid any model, AI-based or otherwise. Meta-features, which we'll refer to throughout this article, are higher-level variables derived from combinations or transformations of the original features; in a neural network, the hidden-layer nodes play this role.

The ability to explain how important each feature or meta-feature is may seem obvious, since many people define TAI based on this characteristic (hence the term eXplainable AI, or XAI, to refer to TAI). However, this is just the tip of the iceberg. A TAI system should not merely tell you that feature A is of value or that A is more important than feature B; Random Forests and other tree-based models do that already. A TAI system should offer the same explainability that a statistical model does, namely specific weights attached to each one of the factors involved. Now, a typical AI system (an artificial neural network) does a bit of that already, at least for the outermost layers. However, the meta-features themselves remain a mystery, since the relationships between them and the original features (inputs) are highly non-linear and mathematically complex. For an AI to be worthy of the term "transparent" attached to it, it needs to be understandable to both the user and the AI professional, even if that person isn't a mathematical prodigy! The TAI system may still be stochastic, but it must be a glass box rather than a black one when it comes to the variables involved.
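To make the contrast concrete, here is a minimal sketch of the kind of explainability a statistical model offers: an ordinary least-squares fit computed from scratch (no ML libraries), which yields one explicit, inspectable weight per feature. The feature names and data are invented for illustration.

```python
# Minimal sketch: the explicit per-feature weights a statistical model
# exposes, fit by ordinary least squares via the normal equations.
# Feature names and data are hypothetical, chosen so the fit is exact.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def solve(a, b):
    # Gauss-Jordan elimination on the augmented matrix [a | b].
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[pivot] = m[pivot], m[i]
        for r in range(n):
            if r != i and m[r][i]:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ols_weights(X, y):
    # w = (X^T X)^{-1} X^T y, solved as a linear system.
    xt = transpose(X)
    xtx = matmul(xt, X)
    xty = [row[0] for row in matmul(xt, [[v] for v in y])]
    return solve(xtx, xty)

# Hypothetical features: [income, debt_ratio]; the target happens to be
# exactly 10*income + 1*debt_ratio, so OLS recovers those weights.
X = [[1.0, 0.2], [2.0, 0.4], [3.0, 0.1], [4.0, 0.5]]
y = [10.2, 20.4, 30.1, 40.5]
weights = ols_weights(X, y)
for name, w in zip(["income", "debt_ratio"], weights):
    print(f"{name}: weight = {w:.2f}")
```

Each printed weight says exactly how much a unit change in that feature moves the prediction; that is the level of transparency the paragraph above asks a TAI system to extend to meta-features as well.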


Why 2 – Mitigation of biases and the risks these carry

AI can be quite biased if the data we use to train it is biased. This fact may seem like common sense, yet in many cases you still see this mistake being made. If it's a minor bias that is never noticed afterward, that may not be so bad, but certain products don't work well because the AIs behind them often aren't equipped to handle certain skin tones or voice patterns (accents) once these are translated into ones and zeros (see this article for more details on different bias scenarios in the real world). A TAI system should be able to handle this issue by pinpointing the correlations among the variables at hand, particularly when it comes to meta-features. If a particular variable, for example, draws on data from one part of the frequency spectrum and not others, this can raise a red flag for the engineer in charge of that system. This way, she can be warned and take the necessary steps before the AI system is put into production, thereby preventing complaints from customers whose voices have a different kind of pitch.
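A simple version of such a red-flag check can be sketched as follows: measure how strongly each model variable correlates with a sensitive attribute. The data, the feature names, and the 0.8 threshold are all illustrative assumptions, not established values.

```python
# Sketch of an automated bias red flag: compute the Pearson correlation
# between each model feature and a sensitive attribute (here, membership
# in a hypothetical voice-pitch group). Data and threshold are made up.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sensitive = [0, 0, 1, 1, 0, 1]  # hypothetical pitch-group membership
features = {
    "spectral_band_3": [0.1, 0.2, 0.9, 0.8, 0.15, 0.95],  # tracks the group
    "utterance_len":   [5.0, 7.0, 6.0, 5.5, 8.0, 6.5],    # does not
}

THRESHOLD = 0.8  # illustrative cutoff for "suspiciously correlated"

for name, values in features.items():
    r = pearson(values, sensitive)
    flag = "RED FLAG" if abs(r) > THRESHOLD else "ok"
    print(f"{name}: r = {r:+.2f} ({flag})")
```

In a real pipeline the same check would run over meta-features as well, which is precisely where a transparent model helps: with a black box, those intermediate variables aren't available to audit in the first place.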


Why 3 – Potential mitigation of costs

This statement may seem paradoxical, considering that a TAI may require more work and, therefore, more computational resources to function. However, if you can attain the same result as a conventional AI (let's call it CAI) with fewer layers, or fewer nodes in certain layers, because certain meta-features are found to be redundant, why not prune the ANN further? Such pruning not only makes the model lighter (and therefore computationally cheaper) but also more robust, since the final model ends up somewhat simpler. All this translates into a lower overall cost, in terms of both development and maintenance.
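The pruning idea can be sketched in a few lines: once a transparent model reveals which hidden nodes (meta-features) carry negligible weight, those nodes can be dropped wholesale. The toy weight matrix and the 0.05 threshold below are illustrative assumptions.

```python
# Sketch of magnitude-based node pruning: drop hidden nodes whose weights
# are all near zero. The toy network and threshold are hypothetical.

# Hidden-layer weight matrix: rows = hidden nodes, cols = input features.
hidden_weights = [
    [0.90, -0.40, 0.30],   # node 0: clearly useful
    [0.01, 0.02, -0.01],   # node 1: near-zero everywhere -> redundant
    [-0.70, 0.55, 0.10],   # node 2: useful
]
output_weights = [0.8, 0.03, -0.6]  # one weight per hidden node

THRESHOLD = 0.05  # illustrative cutoff for "negligible"

def prune(hidden, output, threshold):
    """Keep only hidden nodes whose largest-magnitude weight exceeds the threshold."""
    kept = [i for i, row in enumerate(hidden)
            if max(abs(w) for w in row) > threshold]
    return [hidden[i] for i in kept], [output[i] for i in kept]

pruned_hidden, pruned_output = prune(hidden_weights, output_weights, THRESHOLD)
before = sum(len(r) for r in hidden_weights) + len(output_weights)
after = sum(len(r) for r in pruned_hidden) + len(pruned_output)
print(f"parameters: {before} -> {after}")  # 12 -> 8
```

A third of the parameters disappear here at no cost to the model's behavior; in a real network, transparency is what makes it safe to decide which nodes are truly redundant rather than guessing from weight magnitudes alone.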


Why 4 – Facilitation of communication

Communication is key in today's world, be it between people, between AIs, or between AIs and people. However, this is easier said than done, since some AI-related expertise is often a requirement for having a shot at such communication. This may be acceptable now, but how many companies do you know that want to have an AI expert on their payroll for the duration of the life cycle of their data product? Also, as an AI professional, do you want to be on call for a product you helped develop a few years ago? Wouldn't it be better if the AI could communicate with its stakeholders directly? Well, TAI should be able to do that, making things easier for everyone. Don't get the wrong idea, though; humans are still essential in both the development of the AI and any refinements it may need in the future. However, we don't need to undertake the role of interpreter for the AI if the latter is transparent.

On a more practical level, the communication that a TAI enables would facilitate compliance with the obligations of a modern organization in the finance industry. A bank, for example, may be required to explain why a particular loan was declined or to justify a specific mortgage rate. Additionally, credit institutions such as credit card companies may need clear-cut rationales as to whether a particular group of potential customers is worth the risk of having a card issued to them. This kind of communication would be much less challenging with a TAI system in place.


Why 5 – Expansion of our own communication abilities

A TAI system could also help us expand our own communication abilities. Not in the sense that it will make us good communicators if we have no skill in that area; it's more of an increase in our communication potential, much like a chess AI can help someone hone their chess-playing skills. Also, for someone new to data analytics, a TAI can provide data-driven insights on the fly, making all the work easier. Then, it's up to the analyst to showcase these insights in a way that will drive home the points she wants to make. Perhaps, as a person versed in the business side of things, she can link these insights to actions that the organization can take. So, the TAI can alleviate the project's workload, enabling the analyst to focus on its higher-level aspects. At the same time, digging into the lower levels of the analysis becomes a more efficient process, since she would merely need to glance inside the glass box that is the TAI system and interpret what she sees.


Why 6 – Mitigation of privacy risks

Privacy is a major concern for many people in data analytics, especially when medical data is involved. However, privacy concerns are evident even in market data, since if sensitive data leaks, it can lead to fraudulent transactions and even extortion. Naturally, an organization that can't manage privacy issues is liable to litigation and irreversible damage to its reputation. The privacy matter is even more pronounced in companies doing business with EU citizens, where the GDPR legislation is in force. Here, a TAI system can sift through the data and identify which variables are more useful for the end result (usually a prediction of sorts). Then, the AI professional involved can jettison all variables containing sensitive data (since they don't add much value anyway) and apply pseudonymization to the variables that remain. Although this will not solve the problem completely, it can mitigate the risks involved and help manage them better.
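The workflow just described can be sketched concretely: drop the sensitive columns that the analysis showed add little value, and pseudonymize the identifier that must remain, here via a salted one-way hash as a simple stand-in for a full pseudonymization process. The column names, records, and salt are all hypothetical.

```python
# Sketch of the privacy workflow: jettison low-value sensitive columns,
# then pseudonymize the remaining identifier. All names/data are made up.
import hashlib

records = [
    {"patient_name": "A. Doe", "ssn": "123-45-6789",
     "patient_id": "P001", "blood_pressure": 120},
    {"patient_name": "B. Roe", "ssn": "987-65-4321",
     "patient_id": "P002", "blood_pressure": 135},
]

# Suppose the TAI analysis showed these add little predictive value:
DROP = {"patient_name", "ssn"}
SALT = b"example-salt"  # in practice, a secret kept outside the dataset

def pseudonymize(value, salt=SALT):
    # Salted one-way hash, truncated to a short stable token.
    return hashlib.sha256(salt + value.encode()).hexdigest()[:12]

cleaned = []
for rec in records:
    out = {k: v for k, v in rec.items() if k not in DROP}
    out["patient_id"] = pseudonymize(out["patient_id"])
    cleaned.append(out)

for rec in cleaned:
    print(rec)
```

The predictive variable (blood pressure) survives untouched, while nothing directly identifying remains; whether a salted hash suffices as pseudonymization in a given jurisdiction is, of course, a legal question beyond this sketch.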


Why 7 – Potential of new research avenues

One of the least obvious rationales for TAI is the potential for new research avenues, based on the collaboration between a scientific researcher and a TAI system. Particularly in data-driven areas such as particle physics (think of the work done at places like CERN), the potential for discoveries (and refinements of existing ones) is immense. This is partly due to the abundance of data available in that field, though that's only part of the picture. Many people involved in STEM research are already computer-savvy, so they are more open to using the latest and greatest our field has to offer (compare that with areas such as medical research, where computer science has been adopted only fairly recently). All that, combined with the increased complexity of these fields, makes TAI and research a match made in heaven.



I could go on talking about the whys of TAI until the cows come home. However, there comes a time when we need to focus on other questions too in our pursuit of expanding this technology. Hopefully, these reasons are convincing enough to justify further research on the topic. Perhaps it will take years before something complete and refined comes about, but most likely, it's going to be worth the wait. As Laozi said, "a journey of a thousand miles begins with a single step." Are you ready to take that first step now?