## Machine Learning Algorithms

There are four main types of machine learning algorithm, and the choice between them depends on the type of data in the use case. The first two below are the main types of supervised learning:

• Regression (linear, polynomial): Applied when the output is a continuous number.
1. Linear Regression: Linear regression is one of the best-known algorithms in both statistics and machine learning. It fits an equation for a line that describes the relationship between the input variables (x) and the output variable (y) by finding specific coefficients, or weights (b), for the input variables. The main goal of predictive modelling is to minimise the model's error, that is, to make the most accurate predictions possible, even at the expense of explainability.
2. Decision Tree: The decision tree algorithm is a popular and simple predictive model. The tree is drawn upside down, with its root at the top; it splits into branches at each internal node based on a condition, and a branch that does not split ends in a decision leaf. It is a graphical representation of all the options for a decision under a given set of conditions, and building one is also referred to as learning a decision tree from data. For regression, the leaves predict constant values. The algorithm is also known as CART (Classification and Regression Tree). Each internal node represents a single input variable (x) and a split point on that variable.
• Classification (logistic regression, SVM, KNN, decision trees): Applied when the output takes finite, discrete values. For example, social media sentiment analysis has three possible outcomes: Positive, Negative, and Neutral.
• Clustering/Dimensionality Reduction (k-means, SVD, PCA)
• Association Analysis (Apriori, FP-Growth)
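To make the regression idea concrete, a minimal sketch of ordinary least squares for a line y = b0 + b1·x can be written in plain Python. The data points below are made up for illustration only.

```python
# Minimal ordinary-least-squares fit of y = b0 + b1 * x.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope b1 = covariance(x, y) / variance(x).
    b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
         sum((x - mean_x) ** 2 for x in xs)
    # Intercept b0 follows from the means.
    b0 = mean_y - b1 * mean_x
    return b0, b1

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]          # exactly y = 2x, so b0 = 0, b1 = 2
b0, b1 = fit_line(xs, ys)
print(b0, b1)                  # 0.0 2.0
```

Real libraries (e.g. scikit-learn's `LinearRegression`) handle multiple input variables and numerical stability, but the coefficients they find play the same role as b0 and b1 here.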

### Naive Bayes

• Is a simple but surprisingly powerful algorithm for predictive modeling.
• Comprises two types of probabilities: –
• The probability of each class.
• The conditional probability of each class given each x value.
• Can be used to make predictions for new data.
• Can easily estimate these probabilities for real-valued data by assuming a bell curve (Gaussian distribution).
• Is called naive because it assumes that each input variable is independent.
• Is based on prior experience.

#### Naive Bayes Classification

Consider a toy example in which naive Bayes classifies objects as either green or red, categorising fresh cases as they arrive. A basic understanding of Bayes' theorem is needed to get the most out of the naive Bayes algorithm. Bayesian analysis uses the concept of prior probability, which in this case refers to the likelihood of green and red objects.

Prior probability is based on prior knowledge: if green objects are more prevalent than red, prior experience suggests that a new case is also more likely to be green. To produce the final classification, Bayes' rule combines the prior and the likelihood into a posterior probability.
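The green/red example can be sketched numerically. The object counts and likelihood values below are made up for illustration; the point is only that the posterior is the normalised product of prior and likelihood.

```python
# Posterior probability via Bayes' rule: posterior ∝ prior × likelihood.
def posterior(counts, likelihoods):
    total = sum(counts.values())
    # Prior: relative frequency of each class among known objects.
    priors = {c: n / total for c, n in counts.items()}
    # Unnormalised posterior for each class.
    unnorm = {c: priors[c] * likelihoods[c] for c in counts}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# 40 green and 20 red objects overall (prior knowledge) ...
counts = {"green": 40, "red": 20}
# ... but red objects are far more likely near the new case.
likelihoods = {"green": 0.05, "red": 0.25}
post = posterior(counts, likelihoods)
print(max(post, key=post.get))   # red
```

Even though the prior favours green, the likelihood evidence near the new case outweighs it, so the posterior classifies the case as red.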

### K-means Clustering

• A versatile algorithm that can be used for many kinds of grouping, for example:
  • Behavioral Segmentation:
    • Segment by purchase history.
    • Segment by activities on an application, website, or platform.
    • Define personas based on interests.
    • Create profiles based on activity monitoring.
  • Inventory Categorization:
    • Group inventory by sales activity.
    • Group inventory by manufacturing metrics.
  • Sorting sensor measurements:
    • Detect activity types in motion sensors.
    • Group images.
    • Separate audio.
    • Identify groups in health monitoring.
  • Detecting bots and anomalies:
    • Separate valid activity groups from bots.
    • Group valid activity to clean up outlier detection.
The algorithm starts by randomly initialising k points called cluster centroids — for example, three centroids when the data is to be grouped into three clusters.

K-Means involves two steps:

1. Cluster Assignment – the algorithm goes through the data points and assigns each one to the cluster whose centroid is closest.
2. Move Centroid – the algorithm calculates the average of all points in each cluster and moves that cluster's centroid to the average location.
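The two steps above can be sketched directly in plain Python. For brevity this sketch uses one-dimensional points and made-up data; real k-means works the same way on vectors, with Euclidean distance in place of the squared difference.

```python
import random

def kmeans(points, k, iters=10, seed=0):
    random.seed(seed)
    # Randomly initialise k cluster centroids from the data.
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Step 1: Cluster Assignment — each point joins its closest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: (p - centroids[j]) ** 2)
            clusters[j].append(p)
        # Step 2: Move Centroid — each centroid moves to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.0, 9.1, 8.9]
print(kmeans(data, k=2))   # centroids settle near 1.0 and 9.0
```

Repeating the two steps moves the centroids until each sits at the centre of its own group of points.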

## Deep Learning Algorithms

Deep learning uses supervised, unsupervised, and semi-supervised techniques to learn data representations. It is comparable in structure and operation to the human nervous system, which processes complex information using a complex network of interconnected computing units.

Deep learning is a subset of machine learning. It refers to deep artificial neural networks and, somewhat less frequently, deep reinforcement learning. Deep neural networks (DNNs) are a set of algorithms that have set new accuracy records on many important problems, such as image recognition and recommender systems. DNN algorithms are arranged in layers, and they learn patterns of patterns.

Neural networks are inspired by the human brain, which contains approximately 86 billion interconnected neurons. These neurons process and transmit chemical and electrical signals: each takes inputs, responds to certain stimuli, and passes its output on to other neurons.

The human brain learns to identify objects from photos, and artificial networks behave similarly: the more data you feed them, the better their recognition capability becomes.

### Artificial Neural Network

An artificial neural network is a computer system made up of several simple and highly interconnected processing elements which process information by their dynamic state response to external inputs.

• A mathematical function designed as a model of biological neurons.
• Modeled loosely on the human brain.
• Designed for recognizing patterns.
• Interprets sensory data through machine perception, labelling, or grouping.

#### Features of Neural Networks

• Cluster and classify the raw input.
• Group unlabeled datasets based on the similarities in the inputs.
• Classify labeled datasets according to expected outcomes.
• Extract features that are fed to other algorithms.

“Artificial Neural Network (ANN) is a computing system made up of a number of simple, highly interconnected processing elements which process information by their dynamic state response to external inputs.”

by Robert Hecht-Nielsen

### Definition of the Perceptron

A perceptron is a neural network unit (an artificial neuron) that performs certain computations to detect features or business intelligence in the input data. The perceptron is a single-neuron model that is a precursor to larger neural networks. It investigates how simple models of biological brains can solve difficult computational tasks, such as predictive modeling in machine learning.

The goal is to develop robust algorithms and data structures that can model difficult problems.
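A single perceptron can be sketched in a few lines of plain Python. The task (learning the logical AND function), learning rate, and epoch count below are illustrative choices, not part of the original text.

```python
# A single perceptron learning the logical AND function.
def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire only if the weighted sum is positive.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge weights toward the correct output.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), pred)   # matches the AND truth table
```

AND is linearly separable, so the perceptron convergence theorem guarantees this loop settles on a correct set of weights; a single perceptron cannot learn non-separable functions such as XOR, which is what motivates the multilayer networks below.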

### Structure of the Multilayer Perceptron

A row of neurons is called a layer, and a network can have more than one layer. The arrangement of the neurons in a network is commonly referred to as the network topology, or architecture. Layers that come after the input layer are known as hidden layers because they are not directly exposed to the input.

The simplest network structure is a single neuron in a hidden layer that outputs the value directly. The final layer is called the output layer, and it is responsible for producing the value, or vector of values, in the format required by the problem.
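A forward pass through a small multilayer perceptron can be sketched in plain Python. The weights below are hand-picked (not learned) so that the hidden layer computes OR and NAND and the output layer combines them into XOR — a function a single perceptron cannot represent; the specific values are illustrative assumptions.

```python
import math

def sigmoid(z):
    # Smooth activation squashing any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_w, hidden_b, out_w, out_b):
    # Hidden layer: one activation per hidden neuron.
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(hidden_w, hidden_b)]
    # Output layer: turns the hidden vector into the final value.
    return sigmoid(sum(w * hi for w, hi in zip(out_w, h)) + out_b)

# Hand-picked weights: hidden neuron 1 ≈ OR, hidden neuron 2 ≈ NAND,
# output neuron ≈ AND of the two, which together give XOR.
hidden_w = [[6.0, 6.0], [-6.0, -6.0]]
hidden_b = [-3.0, 9.0]
out_w = [10.0, 10.0]
out_b = -15.0

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, round(forward(x, hidden_w, hidden_b, out_w, out_b)))
```

In a trained network these weights would be found automatically (e.g. by backpropagation); the sketch only shows how values flow through the layers.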

## Machine Learning and Deep Learning Fundamentals: A Practical Approach for Non-Technical Professionals

Machine learning can be defined as an approach to achieving AI through systems or software models that can learn from experience to find patterns in a set of data.