Author: saqibkhan

  • 2015 – Present day

    Amazon launched its own machine learning platform in 2015. Microsoft also created the Distributed Machine Learning Toolkit, which enabled the efficient distribution of machine learning problems across multiple computers.

    Then more than 3,000 AI and robotics researchers, joined by Stephen Hawking, Elon Musk, and Steve Wozniak (among many others), signed an open letter warning of the danger of autonomous weapons that select and engage targets without human intervention.

    In 2016, Google’s artificial intelligence algorithm beat a professional player at the Chinese board game Go, which is considered the world’s most complex board game and many times harder than chess. The AlphaGo algorithm, developed by Google DeepMind, won all five games of its match against European champion Fan Hui, and went on to defeat world-class professional Lee Sedol four games to one.

    Waymo started testing autonomous cars in the US in 2017, with backup drivers riding only in the back seat of the car. Later the same year, it introduced completely autonomous taxis in the city of Phoenix.

    In 2020, while the rest of the world was in the grip of the pandemic, OpenAI announced GPT-3, a ground-breaking natural language processing model with a remarkable ability to generate human-like text when given a prompt. At its release, GPT-3 was the largest and most advanced language model in the world, using 175 billion parameters and Microsoft Azure’s AI supercomputer for training.

  • Big steps forward

    In the 1990s, work on machine learning shifted from a knowledge-driven approach to a data-driven approach. Scientists began creating programs that let computers analyze large amounts of data and draw conclusions, or “learn”, from the results.

    And in 1997, IBM’s Deep Blue shocked the world by beating world chess champion Garry Kasparov.

    The term “deep learning” was popularized in 2006 by Geoffrey Hinton, who used it to describe new algorithms that let computers “see” and distinguish objects and text in images and videos.

    Four years later, in 2010, Microsoft revealed that its Kinect technology could track 20 human features at a rate of 30 times per second, allowing people to interact with the computer via movements and gestures. The following year, IBM’s Watson beat its human competitors at Jeopardy!

    Google Brain was developed in 2011, and its deep neural network could learn to discover and categorize objects much the way a human brain does. The following year, the tech giant’s X Lab developed a machine learning algorithm able to browse YouTube videos autonomously and identify the videos that contain cats.

    In 2014, Facebook developed DeepFace, a software algorithm able to recognize or verify individuals in photos about as accurately as humans can.

  • Playing games and plotting routes

    The first ever computer learning program was written in 1952 by Arthur Samuel. The program played the game of checkers, and the IBM computer improved at the game the more it played, studying which moves made up winning strategies and incorporating those moves into its program.

    Then in 1957 Frank Rosenblatt designed the first neural network for computers – the perceptron – which simulated the thought processes of the human brain.
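    To make the idea concrete, here is a minimal Python sketch of a perceptron trained with Rosenblatt’s update rule; the toy dataset (the logical AND function), learning rate, and epoch count are illustrative choices, not details from the original work.

      def train_perceptron(samples, labels, epochs=20, lr=0.1):
          """Learn weights with the classic perceptron update rule."""
          w = [0.0] * len(samples[0])
          b = 0.0
          for _ in range(epochs):
              for x, y in zip(samples, labels):
                  # Step activation: output 1 if the weighted sum crosses 0.
                  pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                  error = y - pred
                  # Nudge the weights toward the correct answer.
                  w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                  b += lr * error
          return w, b

      # Toy data: the logical AND function, which a single perceptron can learn.
      X = [(0, 0), (0, 1), (1, 0), (1, 1)]
      y = [0, 0, 0, 1]
      print(train_perceptron(X, y))

    Notice that the weights change only when a prediction is wrong, which is what made the perceptron a learning machine rather than a fixed circuit.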

    The next significant step forward in ML wasn’t until 1967 when the “nearest neighbor” algorithm was written, allowing computers to begin using very basic pattern recognition. This could be used to map a route for traveling salesmen, starting at a random city but ensuring they visit all cities during a short tour.
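    As a rough sketch of that route-planning use, the Python snippet below implements the greedy nearest-neighbor heuristic; the city coordinates are invented for the example, and the heuristic yields a short tour rather than a guaranteed-shortest one.

      import math

      def nearest_neighbor_tour(cities, start=0):
          """Greedy tour: always travel to the closest unvisited city."""
          unvisited = set(range(len(cities)))
          unvisited.remove(start)
          tour = [start]
          while unvisited:
              here = cities[tour[-1]]
              # Pick the unvisited city closest to the current position.
              nearest = min(unvisited, key=lambda i: math.dist(here, cities[i]))
              unvisited.remove(nearest)
              tour.append(nearest)
          return tour

      # Invented coordinates for five cities.
      cities = [(0, 0), (2, 1), (5, 0), (1, 4), (4, 3)]
      print(nearest_neighbor_tour(cities))  # every city visited exactly once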

    Twelve years later, in 1979, students at Stanford University invented the ‘Stanford Cart’, which could navigate obstacles in a room on its own. And in 1981, Gerald DeJong introduced the concept of Explanation Based Learning (EBL), in which a computer analyzes training data and creates a general rule it can follow by discarding unimportant data.

  • The early days

    The history of machine learning starts in 1943, when the first mathematical model of neural networks was presented in the scientific paper “A Logical Calculus of the Ideas Immanent in Nervous Activity” by Walter Pitts and Warren McCulloch.

    Then, in 1949, Donald Hebb published The Organization of Behavior, a book of theories on how behavior relates to neural networks and brain activity; it would go on to become one of the monumental pillars of machine learning development.

    In 1950, Alan Turing proposed the Turing Test to determine whether a computer has real intelligence. To pass the test, a computer must be able to fool a human into believing it is also human. He presented the principle in his paper Computing Machinery and Intelligence while working at the University of Manchester. It opens with the words: “I propose to consider the question, ‘Can machines think?’”

  • Job Displacement

    Automation through machine learning can lead to job losses in certain sectors. Roles involving repetitive tasks, such as data entry or assembly line work, are particularly vulnerable. While ML creates new opportunities, reskilling the workforce remains a significant challenge.

  • Risk of Bias

    If the training data contains biases, the ML model may perpetuate or amplify these biases, leading to unfair or unethical outcomes, especially in sensitive applications like hiring or lending.
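    A toy Python demonstration of the mechanism, using entirely synthetic “hiring” data and a scikit-learn logistic regression (both illustrative assumptions, not a real case study):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      group = rng.integers(0, 2, n)  # synthetic protected attribute, 0 or 1
      skill = rng.normal(0, 1, n)    # genuinely job-relevant signal
      # Biased history: group 1 needed a higher skill bar to be hired.
      hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

      model = LogisticRegression().fit(np.column_stack([group, skill]), hired)
      for g in (0, 1):
          X_new = np.column_stack([np.full(500, g), rng.normal(0, 1, 500)])
          print(f"group {g}: predicted hire rate {model.predict(X_new).mean():.2f}")
      # The model reproduces the historical double standard it was trained on.

    Because the historical labels encoded a double standard, the fitted model predicts a systematically lower hire rate for group 1, even though skill was drawn from the same distribution for both groups.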

  • Complexity and Interpretability

    Many machine learning models, especially deep learning systems, operate as “black boxes”: their decision-making processes are difficult to interpret or explain, which raises ethical concerns in critical fields like healthcare and finance.

  • High Computational Costs

    Developing and deploying machine learning models requires significant investment in infrastructure, computational resources, and skilled professionals. Small businesses may find these costs prohibitive.

  • Data Dependency

    The performance of ML models heavily depends on the quality and quantity of training data. Poor-quality or biased data can lead to inaccurate predictions or unfair outcomes. For example, biased datasets in hiring algorithms can perpetuate discrimination.

  • Innovation Enablement

    Machine learning drives innovation across industries. Technologies like virtual assistants, facial recognition systems, and autonomous vehicles are all powered by ML algorithms. This innovation gives businesses a competitive edge by enabling them to offer cutting-edge solutions.