This is How Machines Learn! Machine Learning and Society (Part 5)

Stefan Seegerer
3 min read · Dec 9, 2020

Our goal with this series is to enable everyone to understand AI phenomena in their daily lives and to actively shape the growing influence of AI on our society. Therefore, we do not go into technical details or introduce specific machine learning frameworks. Instead, we focus on explaining the underlying ideas of machine learning that empower everyone to understand and shape the digital world around us.

Machine learning is becoming increasingly important in ever more areas of life and achieves remarkable results. This is all the more impressive given that computers have not developed a real understanding of the problem; they have merely identified patterns and rules in data.

A machine learning model is trained for a specific application.

However, this also means that a machine learning model is trained for one specific application. If, for example, our robot has to label pictures of dogs and cats instead of bricks, it will be completely lost (without new training data).
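For readers who would like to see this in a few lines of code, here is a tiny sketch. Everything about it is invented for illustration (the "brick features" and labels are made up, and we use scikit-learn just as an example): a trained model can only ever answer with the classes it has seen during training.

```python
# A minimal, made-up sketch: a model only knows the classes it was trained on.
from sklearn.neighbors import KNeighborsClassifier

# Invented "brick" features: [length, height] in studs.
X_train = [[4, 2], [2, 2], [8, 1], [6, 1]]
y_train = ["brick", "brick", "plate", "plate"]

model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# Even for an input that is not a brick at all (say, two numbers derived
# from a cat photo), the model can only ever answer "brick" or "plate".
# It has no concept of "cat".
print(model.predict([[3, 2]]))  # e.g. ['brick']
```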

Furthermore, the success of machine learning depends heavily on the available data. Since machine learning algorithms learn from a large but limited amount of data, any biases in this input data are carried over into the model. In the case of our robot, we assume that the distribution of the bricks used in training matches the distribution of building blocks in general, an assumption that often does not hold in reality. So we need to be careful: feeding biased data into machine learning will most likely result in a biased model.

Feeding biased data will result in a biased model.
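A tiny sketch can make this concrete. The data below is deliberately distorted (and entirely invented): in our toy training set, every red brick happens to be a 2x4. A model trained on this data may pick up that spurious pattern and carry the bias forward.

```python
# A minimal, made-up sketch: bias in the data becomes bias in the model.
from sklearn.tree import DecisionTreeClassifier

# Invented features: [is_red, length]. In this distorted training set,
# *every* red brick is a 2x4, so color perfectly "explains" the label,
# while length does not separate the classes cleanly.
X_train = [[1, 4], [1, 3], [1, 4], [0, 2], [0, 3]]
y_train = ["2x4", "2x4", "2x4", "2x2", "2x2"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# A red 2x2 brick never appeared in training. The model leans on the
# spurious "red means 2x4" pattern it learned from the biased data:
print(model.predict([[1, 2]]))  # ['2x4'], judged by color, not by shape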

Another issue with machine learning is that the decisions made are often difficult to comprehend: it is frequently hard or even impossible to identify which patterns in the data the algorithm relies on. If, for example, a machine learning model refuses a loan, it is not always possible for humans to reconstruct how that result was obtained. This can have far-reaching consequences, especially in the case of an error. Approaches to dealing with such problems go by the name of Explainable AI, which tries to make the results of machine learning comprehensible to humans and to stop AI models from being black boxes.

It can be difficult to reconstruct how a machine learning model arrived at its result.
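As one small taste of what explainability can look like, here is a sketch with an invented loan dataset: for a simple linear model, we can inspect the learned weights to get a hint of why it decides the way it does. (Real Explainable AI methods go much further, since complex models do not expose such readable weights.)

```python
# A minimal, made-up sketch of one simple explainability idea:
# inspecting the weights a linear model has learned.
from sklearn.linear_model import LogisticRegression

# Invented loan features: [income_in_thousands, number_of_debts]
X = [[20, 3], [30, 4], [50, 0], [80, 1], [25, 5], [90, 0]]
y = [0, 0, 1, 1, 0, 1]  # 0 = loan refused, 1 = loan granted

model = LogisticRegression().fit(X, y)

# A positive weight pushes toward "granted", a negative one toward
# "refused" -- a small window into the model's decision.
for name, weight in zip(["income", "debts"], model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
```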

When decisions made by computers are difficult to understand or are based on biased data, ethical and legal issues arise as well, and these issues need to be discussed by us as a society. Therefore, not only engineers but as many people as possible should get the chance to understand the ideas and concepts underlying machine learning, so they can assess its possibilities and limitations. So share this article with your kids, your parents and even your grandparents.

Thanks for reading. This was the last part of our series. If you have any questions, feel free to ask them in the comments.

Click here to get to the other articles of this series.

Written by Stefan Seegerer, Tilman Michaeli and Ralf Romeike.
The robot is adapted from https://openclipart.org/detail/191072/blue-robot and licensed under CC0. The article and the derived graphics are licensed under CC-BY-SA.
