![](https://crypto4nerd.com/wp-content/uploads/2023/12/1x3DUXrQ_txGbBFKjvmPIlQ.png)
Machine Learning (ML) has become an alluring field of study. With ChatGPT and Google Bard reshaping our everyday routines, ML phrases and terms appear everywhere, and it is natural to feel overwhelmed by complex jargon without understanding what it means. This guide is designed to give you the knowledge needed to grasp the foundational concepts of ML, offering a unique advantage through visual representation.
While there are numerous articles and tutorials on ML, a common drawback is the lack of visual representation. Traditional resources often bombard learners with abstract concepts, making it challenging to visualize and retain information. This guide seeks to address this gap by incorporating visuals to enhance understanding and promote better retention.
ML is a subset of Artificial Intelligence (AI), which in turn is a subset of Computer Science. Before diving in, it helps to understand why ML matters: it allows computers to learn from data and perform various tasks. For example, a simple ML model can learn from data on players' World Cup performances and predict how they will perform in the future.
ML is usually carried out with one of three learning approaches, which define the basic workings of the model. The approach is selected based on the task; depending on the application, we can use one approach or a combination of them (applied sequentially or not).
1. Supervised Learning:
In supervised learning, we give the machine labeled data to train on. By labeled data, we mean data that includes pre-defined outputs. For example, an image might be labeled 1 if it contains a cat and 0 if it does not. Supervised learning is like having a mentor guide you through a task. After being trained, the model makes predictions on new data.
Two main uses of Supervised Learning are:
1) Regression:
Regression is used when we need to predict a continuous numerical value after training on a labeled set, for example, predicting an employee's salary from their job description.
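The idea can be sketched in a few lines of plain Python: fit a line to labeled (input, output) pairs, then predict a value for a new input. The years-of-experience and salary figures below are made-up illustration data, not from any real dataset.

```python
# A minimal linear-regression sketch: fit y = w*x + b to labeled
# (years_experience, salary) pairs using the closed-form
# least-squares solution, then predict for an unseen input.

def fit_line(xs, ys):
    """Return slope w and intercept b minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Labeled training data: years of experience -> salary (in $1000s)
years  = [1, 2, 3, 4, 5]
salary = [45, 50, 60, 65, 75]

w, b = fit_line(years, salary)
print(round(w * 6 + b))  # predict salary for 6 years of experience
```

The "learning" here is just solving for the line that best explains the labeled examples; real regression models follow the same recipe with more features and more flexible functions.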
2) Classification:
Classification is when we need to put data into categories. With two classes, we call it binary classification; with more than two classes, we call it multiclass classification.
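A binary classifier can be sketched with one of the simplest possible rules: assign a new point to whichever class's average (centroid) it is closest to. The 2-D feature values below are invented purely for illustration.

```python
# A minimal binary-classification sketch: a nearest-centroid rule.
# Each class's labeled examples are averaged into a centroid; a new
# point gets the label of the nearer centroid.

def centroid(points):
    # Average each coordinate across the class's examples
    return [sum(c) / len(points) for c in zip(*points)]

def classify(point, c0, c1):
    d0 = sum((p - c) ** 2 for p, c in zip(point, c0))
    d1 = sum((p - c) ** 2 for p, c in zip(point, c1))
    return 0 if d0 < d1 else 1

class0 = [[1.0, 1.0], [1.5, 2.0], [2.0, 1.5]]   # examples labeled 0
class1 = [[6.0, 6.5], [7.0, 6.0], [6.5, 7.0]]   # examples labeled 1
c0, c1 = centroid(class0), centroid(class1)

print(classify([1.2, 1.8], c0, c1))  # → 0
print(classify([6.8, 6.2], c0, c1))  # → 1
```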
2. Unsupervised Learning:
Unsupervised learning is like exploration without a guide. The algorithm discovers patterns or structures in unlabeled data, allowing it to categorize or group similar elements. For example, given many photos of cats, a model can learn their common patterns and even generate a new cat image.
Unsupervised learning has many uses; two common ones are:
1) Clustering:
The algorithm identifies natural groupings within the data without prior knowledge of class labels. For example, grouping apples with apples and oranges with oranges based on their features, without ever being told the word "fruit".
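Clustering can be sketched with k-means, one of the classic algorithms: points are repeatedly assigned to their nearest center, and each center moves to the mean of its group. The 1-D data below is a toy example with two obvious groups that the algorithm recovers with no labels given.

```python
# A minimal k-means clustering sketch on 1-D unlabeled data.

def kmeans_1d(data, centers, iters=10):
    clusters = [[] for _ in centers]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        clusters = [[] for _ in centers]
        for x in data:
            idx = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[idx].append(x)
        # Update step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]      # two natural groups
centers, clusters = kmeans_1d(data, centers=[0.0, 10.0])
print(sorted(round(c, 1) for c in centers))  # → [1.0, 9.1]
```

Note that the algorithm was never told which points belong together; the grouping emerges from the data itself.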
2) Generative Modeling:
In generative modeling, the model learns the different patterns in a dataset and, once trained, is asked to produce new data that resembles it.
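A very small example of this "learn patterns, then replicate" loop is a first-order Markov chain over text: it learns which word tends to follow which, then generates new sequences with similar structure. The mini corpus below is made up, and real generative models are vastly more sophisticated, but the principle is the same.

```python
import random

# A minimal generative-modeling sketch: learn word-transition
# patterns from toy text, then sample new text with that structure.

random.seed(1)
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Learn: record which words follow each word in the training data
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

# Generate: start from a word, repeatedly sample a learned successor
word, output = "the", ["the"]
for _ in range(4):
    word = random.choice(transitions[word])
    output.append(word)

print(" ".join(output))  # a new sentence built from learned patterns
```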
3. Reinforcement Learning:
Reinforcement learning is comparable to a reward-based system. An agent learns to make decisions by receiving rewards or penalties based on its actions in an environment. It is like giving a dog a treat when it does a trick and scolding it for unwanted behavior. A well-known use is teaching ML models to play games.
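The reward loop can be sketched with Q-learning, a classic reinforcement-learning algorithm, on a toy environment: an agent on a 5-cell corridor earns a reward only at the rightmost cell, and learns to walk right. The environment, reward, and hyperparameters are all invented for illustration.

```python
import random

# A minimal Q-learning sketch. States 0..4 on a corridor; actions
# 0 = left, 1 = right; reward 1 only for reaching state 4.

random.seed(0)
N_STATES, ACTIONS = 5, [0, 1]
q = [[0.0, 0.0] for _ in range(N_STATES)]   # value of each (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: q[s][a])
        s2 = max(0, s - 1) if a == 0 else s + 1
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Update: reward now plus discounted best future value
        q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
        s = s2

# The learned policy prefers "right" (action 1) in every state
print([max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)])
```

Early episodes are mostly random wandering; the reward signal gradually propagates backward through the Q-values until walking right becomes the preferred action everywhere.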
Each of these approaches can use different algorithms; an algorithm is a set of instructions performed in a certain order. In the famous paper “A Few Useful Things to Know About Machine Learning” [1], ML algorithms are divided into three components:
Representation:
There exist many representations, but the most famous is the Neural Network (NN), which is the one covered in this guide. A NN is a simple representation that behaves like the neurons in a brain.
The input layer takes the data, the hidden layers apply functions to it, and the output layer gives us the result, 0 or 1 in the case of binary classification.
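That input-hidden-output flow can be sketched as a tiny forward pass. The weights below are hand-picked for illustration only (a real network learns them during training), and the layer sizes are arbitrary.

```python
import math

# A minimal forward pass: 2 inputs -> 2 hidden units (ReLU)
# -> 1 output unit (sigmoid), suitable for binary classification.

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: weighted sum of the inputs, then ReLU
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    # Output layer: weighted sum of hidden values, then sigmoid
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

out = forward(inputs=[1.0, 0.5],
              w_hidden=[[0.4, -0.2], [0.3, 0.8]],
              b_hidden=[0.0, 0.1],
              w_out=[1.2, -0.6],
              b_out=0.05)
print(0.0 < out < 1.0)  # sigmoid output can be read as P(class = 1)
```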
Evaluation:
Evaluation is the metric by which we judge the performance of our ML model. For example, accuracy is commonly used for classification tasks, while mean squared error is common when predicting the price of a house.
Optimization:
After the first pass of training on our data, we optimize the model so it performs better on the next iteration; there are different methods of optimization. Broadly, optimizing means adjusting the parameters of the functions performed in the hidden layers to get better results.
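The most common optimization method is gradient descent: repeatedly nudge a parameter in the direction that reduces the error. The sketch below tunes a single weight on toy data generated from the rule y = 3x, so the weight should converge to 3.

```python
# A minimal gradient-descent sketch: iteratively adjust one weight w
# so that the prediction w*x matches y on toy data (true rule: y = 3x).

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w, lr = 0.0, 0.01  # initial weight, learning rate

for step in range(500):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step "downhill", reducing the error

print(round(w, 3))  # → 3.0
```

Each iteration reduces the error a little; deep-learning optimizers like SGD and Adam are elaborations of exactly this loop, applied to millions of weights at once.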
Deep learning is a subset of ML, we use deep neural networks as representation in this field. Unlike ML, Deep Learning is based mainly on neural networks with multiple layers and gives much of the decision to the computer while we only tweak the initial model. Deep Learning is being used in various fields like Computer Vision and Natural Language Processing (NLP).
Convolutional Neural Network:
It is used to perform functions on grid-like graphical data, such as photos or videos. For example, the figure below shows how an image is classified into three categories. CNNs are also known for exploiting parallel computing power.
Small features like dots and edges are detected in the initial layers, while the later layers detect higher-level features like “dog” or “bird”.
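The core operation behind this is convolution: sliding a small filter (kernel) over the image and recording how strongly each patch matches it. The sketch below uses a hand-written vertical-edge kernel on a tiny made-up "image" that is dark on the left and bright on the right; the response peaks exactly at the edge.

```python
# A minimal sketch of the convolution operation inside a CNN:
# slide a 3x3 edge-detecting kernel over a tiny grayscale image.

def convolve(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Dot product of the kernel with the image patch at (i, j)
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

image = [[0, 0, 0, 1, 1, 1]] * 6          # dark left half, bright right half
kernel = [[-1, 0, 1],                     # vertical-edge detector
          [-1, 0, 1],
          [-1, 0, 1]]

response = convolve(image, kernel)
print(response[0])  # → [0, 3, 3, 0] — strongest where the edge sits
```

In a trained CNN, the kernels are not hand-written: the network learns them, with early layers discovering edge detectors much like this one on their own.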
Recurrent Neural Networks:
It is used to perform functions on sequential data, like audio or text. Recurrent Neural Networks are widely known for their use in Natural Language Processing, which we will get into in a second. Their special feature is that the network's output is fed back into it at the next step, giving it a memory of earlier inputs.
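That feedback loop can be sketched as a single recurrent unit: a hidden state that mixes each new input with what came before. The weights below are illustrative, not trained; the point is that two sequences ending in the same input produce different states because their histories differ.

```python
import math

# A minimal recurrent-step sketch: the hidden state h is carried
# across time steps, so earlier inputs influence later outputs.

def rnn(sequence, w_in=0.5, w_rec=0.9):
    h = 0.0  # hidden state, the network's "memory"
    for x in sequence:
        # New state depends on the current input AND the previous state
        h = math.tanh(w_in * x + w_rec * h)
    return h

# Same final input (1.0), different histories -> different final states
print(rnn([0.0, 0.0, 1.0]))
print(rnn([1.0, 1.0, 1.0]))
```

A feed-forward network seeing only the last input would give identical outputs here; the recurrence is what lets the model use context, which is why RNNs suit text and audio.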
Natural Language Processing is widely used in today’s world; Siri and ChatGPT are examples of it. Processing human language with computers is known as natural language processing. It consists of NLU (Natural Language Understanding) and NLG (Natural Language Generation). NLP has traditionally been performed with RNNs, but recently another architecture, known as the Transformer, has overtaken RNNs on NLP tasks. These models can understand human language; Google uses them in its translation app.
In conclusion, the realm of ML is reshaping our technological landscape, and understanding its key concepts is crucial in our AI-driven world. This guide, enriched with visual aids, tackles the complexity of ML terms, offering a unique learning advantage. Deep Learning, a subset of ML, employs neural networks for decision-making. Convolutional Neural Networks (CNNs) excel in computer vision, while Recurrent Neural Networks (RNNs) shine in sequential data handling, especially in Natural Language Processing (NLP).
NLP transforms how machines understand and generate human language, as seen in virtual assistants like Siri and language models like ChatGPT. Generative AI, as in DALL-E, and CNNs in self-driving cars exemplify ML’s innovative applications.
1. “Pattern Recognition and Machine Learning” by Christopher M. Bishop
A comprehensive introduction to machine learning concepts and pattern recognition.
2. “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow” by Aurélien Géron
Practical guide with hands-on examples using popular machine learning libraries.
3. “Machine Learning Yearning” by Andrew Ng
A practical guide for machine learning practitioners, focusing on best practices and strategies.
[1] Pedro Domingos, “A Few Useful Things to Know About Machine Learning”