# Machine Learning Algorithm and Deep Learning

## Machine Learning Algorithm

There are four main types of machine learning algorithms, and the choice of algorithm depends on the type of data in the use case. The two main types of supervised learning are regression and classification: –

• Regression (Polynomial): – applied when the output is a continuous number.
• Linear Regression is one of the most well-known algorithms in statistics and machine learning. It is an equation that describes a line representing the relationship between the input (x) and output (y) variables, found by learning specific weights for the input variables, called coefficients (b). Predictive modeling is primarily concerned with minimizing the model's error, that is, making the most accurate predictions possible, even at the expense of explainability.
• Decision Tree – Another popular, easy-to-understand algorithm. It is a graphical representation of all possible solutions to a decision based on a few conditions. The tree is drawn upside down, with its root at the top; it splits into branches at each condition, or internal node, and the end of a branch that does not split further is a decision leaf. This is commonly known as learning a decision tree from data.
Decision tree algorithms are often referred to as CART (Classification and Regression Trees): classification trees predict discrete classes, while regression trees predict continuous values. Each internal node represents a single input variable (x) and a split point on that variable.
• Classification (SVM, KNN, trees, logistic): – applied when the output has finite, discrete values. For example, social media sentiment analysis has three possible outcomes: Positive, Negative, and Neutral.
• Clustering/Dimensionality Reduction (K-means, SVD, PCA)
• Association Analysis (Apriori, FP-Growth)
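As a minimal sketch of the linear regression idea above, the coefficients (b) for a single input variable can be estimated with ordinary least squares; the toy data here is assumed purely for illustration:

```python
import numpy as np

# Assumed toy data that lies exactly on the line y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1

# Ordinary least squares for one input: slope b1 and intercept b0
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

print(b1, b0)  # → 2.0 1.0
```

The fitted coefficients recover the slope and intercept of the line, which is exactly the "specific weights for the input variables" the text describes.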

### Naive Bayes

• Is a simple but surprisingly powerful algorithm for predictive modeling.
• Comprises two types of probabilities: –
• The prior probability of each class.
• The conditional probability of the data x given each class.
• Can be used to make predictions for new data.
• Can easily estimate probabilities for real-valued data by assuming a bell curve (Gaussian distribution).
• Is called naive because it assumes that the input variables are independent of one another.
• Bases its priors on previous experience.

#### Naive Bayes Classification

Consider a Naive Bayes classifier that labels objects as either green or red and must classify new cases as they arrive. To get the most out of the algorithm, you need a basic knowledge of Bayes' theorem. The probability of green and red objects, based on previous experience, is known in Bayesian analysis as the prior probability. If previous experience shows more green objects than red, a new case is more likely to belong to green, the color with more objects.
Calculation of the posterior probability: – The final classification is produced by combining both sources of information, the prior and the likelihood, to form a posterior probability using Bayes' rule.
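The prior-times-likelihood combination can be sketched with assumed counts (40 green and 20 red objects overall, with 1 green and 3 red objects in the vicinity of the new case; all numbers are illustrative, not from the text):

```python
# Assumed counts: 40 green and 20 red objects seen previously
n_green, n_red = 40, 20
total = n_green + n_red

# Prior probabilities, based on previous experience
prior_green = n_green / total  # 2/3
prior_red = n_red / total      # 1/3

# Assumed likelihoods: near the new case sit 1 of the 40 greens
# and 3 of the 20 reds
like_green = 1 / n_green
like_red = 3 / n_red

# Posterior ∝ prior × likelihood (Bayes' rule)
post_green = prior_green * like_green
post_red = prior_red * like_red

print("red" if post_red > post_green else "green")  # → red
```

Even though green has the larger prior, the likelihood around the new case favors red strongly enough that the posterior classifies it as red, which is why both sources of information matter.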

### K-means cluster

• A versatile algorithm that can be used for many types of grouping, for example:
• Behavioral Segmentation
• Segment by purchase history.
• Segment by activities on an application, website, or platform.
• Define personas based on interests.
• Create profiles based on activity monitoring.
• Inventory Categorization
• Group inventory by sales activity.
• Group inventory by manufacturing metrics.
• Sorting sensor measurements
• Detect activity types in motion sensors.
• Group images
• Separate audio
• Identify groups in health monitoring.
• Detecting bots and anomalies
• Separate valid activity groups from bots.
• Group valid activity to clean up outlier detection.
The algorithm begins by randomly initializing points called cluster centroids, one per desired cluster – for example, three centroids when the data is to be grouped into three clusters.

K-Means involves two steps:

1. Cluster Assignment – The algorithm goes through the data points and assigns each point to the cluster whose centroid is closest.
2. Move Centroid Step – The algorithm calculates the average of all points in each cluster and moves that cluster's centroid to the average location. The two steps repeat until the centroids stop moving.
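The two steps above can be sketched in NumPy on assumed toy data (two well-separated blobs and two clusters, for brevity, rather than the three described):

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed toy data: two well-separated 2-D blobs of 20 points each,
# centred near (0, 0) and (5, 5)
data = np.vstack([rng.normal(0.0, 0.5, size=(20, 2)),
                  rng.normal(5.0, 0.5, size=(20, 2))])

k = 2
# Randomly initialize k cluster centroids from the data points
centroids = data[rng.choice(len(data), size=k, replace=False)]

for _ in range(10):
    # Step 1: cluster assignment – each point joins its nearest centroid
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Step 2: move centroid – each centroid moves to the mean of its points
    centroids = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print(np.sort(centroids[:, 0]))  # roughly the two blob centres, 0 and 5
```

With blobs this well separated, the centroids converge to the blob means within a few iterations; in practice, libraries also handle edge cases such as empty clusters, which this sketch omits.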

## Deep Learning

Deep Learning is a specialized form of machine learning which utilizes supervised, unsupervised, and semi-supervised learning to learn data representations. Its design is inspired by the structure and function of the human nervous system, where a complex network of interconnected computing units works in a coordinated way to process complex information.

Deep Learning is a subset of machine learning. It most frequently refers to deep artificial neural networks and sometimes to deep reinforcement learning. Deep Neural Networks (DNNs) are the set of algorithms that have set new accuracy records for many important problems, such as image recognition and recommender systems. DNN algorithms are arranged in layers, and they learn patterns of patterns.

Neural networks of the human brain: our brain contains approximately 86 billion interconnected neurons, which process and transmit chemical and electrical signals. Each neuron takes inputs, responds to certain stimuli, and passes its output along to other neurons.

Our brain learns to identify objects from photos; the more data it is fed, the better its recognition capability becomes.

### Artificial Neural Network

An artificial neural network is a computer system made up of several simple and highly interconnected processing elements which process information by their dynamic state response to external inputs.

• A mathematical function designed as a model of biological neurons.
• Modeled loosely on the human brain.
• Designed for recognizing patterns.
• Interprets sensory data through machine perception, labeling, or grouping.

#### Features of Neural Networks

• Cluster and classify raw input.
• Group unlabeled datasets based on similarities in the inputs.
• Classify labeled datasets according to expected outcomes.
• Extract features that are fed to other algorithms.

“Artificial Neural Network (ANN) is a computing system made up of a number of simple, highly interconnected processing elements which process information by their dynamic state response to external inputs.”

by Robert Hecht-Nielsen

### Definition of a Perceptron

A perceptron is a neural network unit (an artificial neuron) that performs certain computations to detect features or business intelligence in the input data. The perceptron is a single-neuron model that is a precursor to larger neural networks. It shows how simple models of biological brains can solve difficult computational tasks, such as predictive modeling in machine learning.

The goal is to develop robust algorithms and data structures that can model difficult problems.
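A single perceptron can be sketched in a few lines. Here it learns the logical AND function with the classic perceptron learning rule; the learning rate of 1 and integer weights are assumptions made for simplicity:

```python
# Training data for logical AND: inputs and target outputs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0, 0]  # one weight per input
b = 0       # bias

def predict(x):
    # Fire (output 1) only when the weighted sum exceeds zero
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights by the prediction error
for _ in range(10):  # epochs
    for x, target in data:
        error = target - predict(x)  # 0 when correct, +/-1 when wrong
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron convergence theorem guarantees this loop finds a separating line; a single perceptron cannot learn non-separable functions such as XOR, which is one motivation for the multilayer networks below.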

### Structure of a Multilayer Perceptron

A row of neurons is called a layer, and one network can have multiple layers. The arrangement of the neurons in a network is often called the network topology. Layers after the input layer are called hidden layers because they are not directly exposed to the input. The simplest structure is a single hidden layer with one neuron that directly outputs the value. The final layer is called the output layer; it is responsible for outputting a value, or vector of values, in the format required by the problem.
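The layered topology described above can be sketched as a forward pass. The layer sizes, tanh activation, and random weights are assumptions for illustration; training would adjust the weights:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed topology: 3 inputs -> 4-neuron hidden layer -> 2-neuron output layer
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)  # hidden -> output

def forward(x):
    hidden = np.tanh(x @ W1 + b1)    # hidden layer: not directly exposed
    return np.tanh(hidden @ W2 + b2) # output layer: one value per output neuron

out = forward(np.array([0.5, -1.0, 2.0]))
print(out.shape)  # → (2,)
```

Each layer is just a weight matrix and a nonlinearity; stacking more `W, b` pairs deepens the network without changing the shape of the forward pass.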
