How can the game of checkers be used to illustrate specific machine learning functions? What role does the human play in the process? Find out the answers in this blog post.
In 1959, artificial intelligence and computer gaming researcher Arthur L. Samuel popularized the term “machine learning” to describe a field of study that gives computers the ability to learn without being explicitly programmed. The definition rings true today: machine learning remains the subset of AI in which systems learn to perform a specific task without hand-coded, rule-based instructions.
Samuel, in what is now known as the Samuel Checkers-Playing Program, pioneered work in enabling computers to learn from experience through the game of checkers. Checkers was chosen over chess because its simpler rules allowed more emphasis to be placed on the learning techniques themselves.
His learning program replayed the games included in the book “Lee’s Guide to the Game of Draughts or Checkers,” learning to favor the moves that checker-playing experts considered good. In essence, Samuel demonstrated that a computer could be programmed to outperform the person who wrote the program, in this case at winning checkers.
What's more, the computer learned to do this in a brief period of time, eight to ten hours, when given the following: the rules of the game, a sense of direction, and a redundant and incomplete list of parameters thought to have something to do with the game, but whose correct signs and relative weights were unknown and unspecified.
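The idea of scoring a board position as a weighted sum of parameters, where learning consists of adjusting the weights over time, can be sketched in a few lines. The feature names and weight values below are illustrative assumptions, not Samuel's actual parameter set; this is a minimal sketch of such a linear evaluation function, not a reconstruction of his program.

```python
# Minimal sketch of a Samuel-style linear evaluation function.
# Feature names and weights are illustrative, not Samuel's actual parameters.

def evaluate(features, weights):
    """Score a board position as a weighted sum of its feature values."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical features describing a checkers position for the side to move.
position = {"piece_advantage": 2, "king_advantage": 0, "mobility": 3}

# The correct signs and relative weights are initially unknown;
# learning amounts to adjusting these numbers based on game outcomes.
weights = {"piece_advantage": 1.0, "king_advantage": 1.5, "mobility": 0.25}

score = evaluate(position, weights)
print(score)  # 2*1.0 + 0*1.5 + 3*0.25 = 2.75
```

A program using an evaluation like this prefers the legal move leading to the highest-scoring position, and improves simply by tuning the weights rather than by changing any rules in its code.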
By programming computers to recognize patterns, learn from experience, and improve over time, Samuel delivered one of the first demonstrations of self-learning, now a fundamental pillar of machine learning.
Why Does This Matter?
In any data science project, a pivotal step that is often overlooked or omitted is building feedback loops into model workflows to ensure optimal performance. In the checkers example, it was important for Samuel to monitor his model once it was built and deployed so that it could continue to improve and learn from new data.
From initial data collection and cleaning all the way through the data pipeline to feedback loops, AI drives the most value when data has a clear end-to-end path to follow. Without a plan in place to track and measure how well the program performs over time, teams are left with no data-backed baseline against which to compare the model.
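The kind of feedback loop described above can be sketched simply: track the model's accuracy on incoming labeled data and flag it for retraining when performance drifts below a baseline. The baseline and tolerance values here are illustrative assumptions, not prescribed thresholds.

```python
# Minimal sketch of a model feedback loop: compare live performance
# against a baseline and flag the model for retraining when it degrades.
# The baseline accuracy and tolerance are illustrative assumptions.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def needs_retraining(live_accuracy, baseline_accuracy, tolerance=0.05):
    """Flag the model when live accuracy drops too far below the baseline."""
    return live_accuracy < baseline_accuracy - tolerance

# Hypothetical batch of new predictions paired with their eventual true outcomes.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 0, 1, 1, 0]

live = accuracy(preds, truth)  # 6 of 8 correct = 0.75
print(needs_retraining(live, baseline_accuracy=0.90))  # True: 0.75 < 0.85
```

In practice the flag would trigger a retraining job or a human review, which is exactly the "monitor and improve" step the checkers example illustrates.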
When embarking on any data or analytics project, it is critical to leverage human-centered AI to augment — not replace — people. In the checkers example, the game made sense to use in the field of AI research because it is easy to compare computer performance with that of humans.
However, none of the data preparation, cleaning, enrichment, visualization, deployment, or feedback would be possible without human oversight and intervention to optimize for efficiency and ensure that each phase connects back to the greater business objective. In the video below, hear how Walid Mehanna, Head of Data and Analytics at Mercedes-Benz, and his team use technology to augment human intelligence and close the gap between data and action.
It is when human knowledge and machine learning come together that we are able to truly push the boundaries of data science capabilities. By using computers to process data and provide us with an output, we can then use those outputs to influence the most critical business decisions. With a human-in-the-loop methodology, organizations across all industries and use cases can effectively create organizational change with scalable AI systems.