Interpretability and Explainability

Shreyas Rana
5 min read · Jun 3, 2021

In my last article, What is Machine Learning?, I went into the different types of machine learning algorithms, and I feel I went a bit deeper than I should have. In this article, I want to step back a bit and talk about what machine learning is in layman’s terms and how it shows up in our day-to-day lives. After that, I will delve deeper into the explanatory side of machine learning: what happens to the data before a model is trained, and how its decisions are explained and interpreted afterwards.

Going back to what I said in my previous article — “Machine Learning is the science of getting computers to learn and act as humans do”. The question that comes up immediately is — what do you mean? How can computers learn and act as humans do? What you are talking about is straight out of a science fiction magazine or a movie.

No, it is not. In fact, we live in a world where machine learning is so much part of our daily lives, in some cases, we don’t even know about it. Let’s take some examples.

Video Recommendations

You probably watch at least one of YouTube, Amazon Prime, Netflix, or Disney+. Have you noticed that each of these apps gives you recommendations on what you might like to watch? It may not have occurred to you, but these recommendations are fairly accurate. In fact, the more you watch the recommended videos, the more accurate the recommendations become. These apps use machine learning to understand your likes and dislikes. They also use what they know about you to group you with people who have similar profiles, and draw on that group's viewing habits to make recommendations.
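To make the "group you with similar people" idea a little more concrete, here is a minimal sketch with made-up ratings (this is an illustration of the general idea, not how any of these services actually work): measure how alike two viewers' ratings are, then recommend what the most similar viewer enjoyed.

```python
import numpy as np

# Hypothetical ratings (rows = viewers, columns = shows); 0 means "not watched yet".
ratings = np.array([
    [5, 4, 0, 1],   # you
    [5, 5, 4, 1],   # viewer A, taste very similar to yours
    [1, 0, 5, 5],   # viewer B, quite different taste
], dtype=float)
shows = ["Show 1", "Show 2", "Show 3", "Show 4"]

def cosine_similarity(a, b):
    """How alike two rating vectors are (closer to 1 = more similar taste)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

you = ratings[0]
# Find the viewer whose ratings look most like yours.
similarities = [cosine_similarity(you, other) for other in ratings[1:]]
most_similar = ratings[1 + int(np.argmax(similarities))]

# Recommend shows you haven't watched but your "taste twin" rated highly.
for i, show in enumerate(shows):
    if you[i] == 0 and most_similar[i] >= 4:
        print("Recommended:", show)
```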

Virtual Personal Assistant

Have you used a voice assistant like Siri, Alexa, or Google Home? How do you think they are able to understand what you say? This is yet another application of machine learning. Companies like Google, Apple, and Amazon employ scientists who are experts in what's known as Natural Language Processing, or NLP for short. When you ask Alexa a question, machine learning algorithms interpret what you are trying to say, partly on the basis of your previous interactions. The output of those algorithms is processed and the appropriate action is taken, all within the blink of an eye.

Traffic and Travel Time

I am sure that you or someone you know has used the Maps application on your phone. Have you noticed that the app tells you the approximate time it will take to reach your destination? You must also have noticed that it is fairly accurate. How does the app do that? There are many factors to consider: not just the current traffic along your route, but also the traffic that will be at each point by the time you get there. This is all done using machine learning. The app takes data from thousands of sources, learns patterns by day of week, time of day, and so on, determines how cars are moving along your route, and comes up with a prediction of how long your trip will take.
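As a toy illustration of learning from those patterns (all numbers are invented, and a real routing service does far more than this), a model can be as simple as remembering the average trip time for each day-of-week and hour-of-day combination and using that as the prediction:

```python
from collections import defaultdict

# Hypothetical past trips on one route: (day_of_week, hour_of_day, minutes_taken).
past_trips = [
    ("Mon", 8, 42), ("Mon", 8, 45), ("Mon", 14, 25),
    ("Sat", 8, 20), ("Sat", 8, 22), ("Sat", 14, 24),
]

# "Learning" here is just averaging past trips in each (day, hour) time slot.
totals = defaultdict(lambda: [0, 0])  # (day, hour) -> [sum of minutes, trip count]
for day, hour, minutes in past_trips:
    totals[(day, hour)][0] += minutes
    totals[(day, hour)][1] += 1

def predict_travel_time(day, hour):
    """Predict trip length as the average of past trips in the same time slot."""
    total, count = totals[(day, hour)]
    return total / count if count else None

print(predict_travel_time("Mon", 8))  # ~43.5 minutes: weekday rush hour is slow
print(predict_travel_time("Sat", 8))  # ~21.0 minutes: weekend morning is quick
```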

I can go on and on about applications of machine learning in our day-to-day lives, but I am sure you understand the gist by now. As you can see, people are constantly programming machines to learn from our past behavior to predict our future behavior. When it comes to future applications of machine learning, the sky’s the limit.

Anyway, as I said before, let's delve deeper into the explanatory side of machine learning: what happens to the data beforehand, and how the model's decisions are explained afterwards.

Machine learning, more than any other process that relies on data, needs proper data preprocessing. Before data is fed into an algorithm, it must first be cleaned and rid of biases. Cleaning data is usually the easier, though often more manual, part. For example, a data scientist might have to sift through spreadsheets to find empty or corrupted entries and delete them. Ridding the data of bias is trickier, because it has to be addressed before the data is actually collected: those collecting the data design their studies to introduce as little bias as possible. Although there are several ways of sampling data, four simple methods data scientists use are simple random samples, systematic random samples, stratified random samples, and cluster samples, chosen based on what is appropriate for the population.

Image Credit: scribbr.com
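As a rough sketch of what the cleaning and sampling steps can look like in practice (the file name and column names below are made up for illustration), here is how one might drop empty or corrupted rows and then draw a stratified random sample with pandas:

```python
import pandas as pd

# Hypothetical dataset; "patients.csv", "age" and "region" are made-up examples.
df = pd.read_csv("patients.csv")

# Cleaning: drop rows with missing values and obviously corrupted entries
# (for example, negative or impossible ages).
df = df.dropna()
df = df[(df["age"] >= 0) & (df["age"] <= 120)]

# Stratified random sample: take 10% of the rows within each region,
# so every group is represented in proportion to its size.
sample = (
    df.groupby("region", group_keys=False)
      .apply(lambda g: g.sample(frac=0.1, random_state=42))
)
print(sample.shape)
```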

Explainability and interpretability are also extremely important aspects of machine learning and AI. Suppose a data scientist has created a breakthrough algorithm that could have massive positive externalities in medicine (for example, accurately detecting the presence of Parkinson's using just motion profiling). Because this is such a high-stakes application, the algorithm must be explainable: others need to be able to see why it made the decision it did. Another famous dilemma concerns self-driving cars: if autonomous driving becomes normalized, will the car decide to protect the driver or the people around it in an emergency? In this case the issue is not only the explainability of the algorithm itself; it is a moral dilemma, and the data scientists behind it have to convince society why the self-driving car's algorithm works the way it does.
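One simple and widely used way to get at this kind of explainability is to look at which inputs a model leans on when it decides. Here is a minimal sketch with scikit-learn and invented data (it is not an actual Parkinson's detector, and the feature names are made up): the learned coefficients act as a crude explanation of the model's decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented "motion profile" features: [tremor_frequency, walking_speed, grip_variation].
X = rng.normal(size=(200, 3))
# Invented labels that mostly depend on the first feature (tremor_frequency).
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The coefficients show which features push a prediction up or down,
# which is one basic way to explain why the model decided what it did.
for name, coef in zip(["tremor_frequency", "walking_speed", "grip_variation"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```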

Interpretability is on the algorithm's side: given a change in the input data, the algorithm should show a corresponding, understandable change in its output. Essentially, you should be able to anticipate how the prediction shifts as the data shifts. Going back to the self-driving car example, the inputs would include the many camera and distance sensors and the speed of the car, to name a few. As the car gets closer to an obstruction on the road, the algorithm should put more and more priority on avoiding it.
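To make that concrete, here is a tiny self-contained sketch (a made-up priority function, not a real driving system) of the behaviour described above: as the distance input shrinks, the avoidance output rises in a way a human can follow.

```python
# A toy "obstacle avoidance" score standing in for a real driving model:
# the closer the obstruction, the higher the priority of avoiding it.
def avoidance_priority(distance_m, speed_mps):
    # Time until the car would reach the obstruction; less time = more urgent.
    time_to_obstacle = distance_m / max(speed_mps, 0.1)
    return min(1.0, 1.0 / time_to_obstacle)

# Interpretability in the sense used here: a change in the input
# produces an understandable change in the output.
for distance in (100, 50, 20, 5):
    print(distance, "metres ->", round(avoidance_priority(distance, 15.0), 2))
# Priority rises smoothly as the distance shrinks, which matches our intuition.
```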

Machine learning is used in nearly every aspect of our lives, including social media, transportation, and communication, yet none of it would be possible without the assurance that the algorithm makes an informed and accurate prediction and can show why: its explainability. As algorithms have advanced, their interpretability has improved as well. In the end, the transparency of an algorithm is extremely important for the progression of machine learning in the future.


Shreyas Rana

High school junior in California who loves building intelligent mobile apps, doing robotics, drawing and playing tennis!