Machine Learning Classification Algorithms for the Complete Novice.

Ugwu Arinze Christopher
4 min read · Dec 9, 2019

Experience over time informs the way we make decisions as humans. Our senses are the gateways through which we gather most of the experience we base our decisions on. We are simply learning from the past: when new and similar situations arise, we go down memory lane. Most of the time, the way you treat people is based on experiences you have had in the past or things you have heard about them. In machine learning, classification is a supervised learning technique. The machine learns from the supplied data, which already has a target class (the result of experiences). Other attributes (life occurrences) are used to determine this target class, which could be binary (yes/no) or multi-class (yes/no/maybe).
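To make this concrete, here is a rough sketch in Python of what such labelled data could look like; the phone-shopping features, column names and values are all invented purely for illustration:

```python
# A tiny, made-up example of classification data: each row is a past
# "experience" (the features), and the last column is the known outcome
# (the target class) that a classifier learns to predict.
import pandas as pd

data = pd.DataFrame({
    "price":        [300, 150, 800, 120, 500],          # feature: price in dollars
    "battery_mah":  [4000, 3000, 4500, 2500, 3500],     # feature: battery capacity
    "has_warranty": [1, 0, 1, 0, 1],                    # feature: 1 = yes, 0 = no
    "bought":       ["yes", "no", "yes", "no", "yes"],  # target class (binary)
})

X = data.drop(columns="bought")  # attributes (life occurrences)
y = data["bought"]               # target class (result of experiences)
print(X.shape, y.unique())
```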

Below are some of the classification algorithms in machine learning:

  1. Decision Trees
  2. Random Forest
  3. Linear Classifiers: Naive Bayes Classifier, Logistic Regression
  4. Artificial Neural Networks
  5. K-Nearest Neighbor

Decision Trees

Fig. 1: Phone Purchase Decision Tree

In a decision tree, classification and regression models are built in the form of a tree structure. The tree is made up of decision nodes, which test attributes from the data set, and leaf nodes, which hold the class labels. The root node is usually the best predictor and appears at the top; to choose it, information gain or the Gini index can be used. The tree is constructed top-down and recursively, starting from the root node, through the decision nodes, and down to the leaves. In figure 1 above, the buyer considers various specifications of the phone using a top-down approach before deciding whether or not to buy.
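If you want to try this out, below is a minimal sketch using scikit-learn's DecisionTreeClassifier; the tiny phone dataset and feature names are made up, and the criterion parameter lets you choose between the Gini index and information gain (entropy):

```python
# Minimal decision-tree sketch with scikit-learn; the phone dataset below
# is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [price, battery_mah, storage_gb]
X = [[300, 4000, 64], [150, 3000, 32], [800, 4500, 128],
     [120, 2500, 32], [500, 3500, 64], [700, 4000, 128]]
y = ["buy", "skip", "buy", "skip", "buy", "buy"]

# criterion can be "gini" (Gini index) or "entropy" (information gain)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X, y)

# Inspect the top-down structure: the root split is the strongest predictor.
print(export_text(tree, feature_names=["price", "battery_mah", "storage_gb"]))
print(tree.predict([[650, 4200, 128]]))  # classify a new phone
```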

Random Forest

A tree cannot make a forest

Just as the name implies, it takes a lot of trees to make a forest. These trees form an ensemble, and random forest is often used to control the problem of overfitting. Overfitting is when the machine learns the training data too well and starts to fit the noise. Each decision tree is fit on a random sub-sample of the data, and the final class is determined by combining the predictions of the individual trees, typically by majority vote.
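As a rough illustration, here is a short random forest sketch on synthetic data generated with scikit-learn; the parameter values are just reasonable defaults, not a recommendation:

```python
# Random forest = an ensemble of decision trees, each fit on a random
# sub-sample of rows and features; the final class is a majority vote.
# The data below is synthetic, generated only to make the sketch runnable.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(
    n_estimators=100,  # number of trees in the forest
    max_depth=None,    # let individual trees grow; the ensemble limits overfitting
    random_state=0,
)
forest.fit(X_train, y_train)
print("test accuracy:", forest.score(X_test, y_test))
```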

Linear Classifiers: Naive Bayes Classifier, Logistic Regression

A linear classifier makes its classification based on the value of a linear combination of the features. In Naive Bayes, predictors/features are considered in isolation rather than as dependent on each other; each contributes independently to the resulting probability. Naive Bayes is often applied in spam filtering, recommendation systems and sentiment analysis. Its fast and easy implementation is an advantage it has over other classifiers. However, treating the features as independent is also a disadvantage, because in practice those predictors often depend on one another.
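A quick sketch of the spam-filtering use case with scikit-learn's MultinomialNB is shown below; the example messages and labels are made up:

```python
# Naive Bayes sketch for spam filtering: each word is treated as an
# independent feature. The four example messages are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",         # spam
    "claim your free reward",       # spam
    "meeting moved to 3pm",         # not spam
    "lunch tomorrow with the team", # not spam
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)
print(model.predict(["free prize meeting"]))
```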

The logistic regression classifier is the go-to method for binary classification. It returns a probability for each observation, which can then be mapped to the target class. The aim of logistic regression is to determine the best-fitting model describing the relationship between the target class (the dependent variable) and the other attributes.
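Below is a rough example using scikit-learn's LogisticRegression on its built-in breast-cancer dataset, just to show how the predicted probabilities are mapped to classes:

```python
# Logistic regression for binary classification: predict_proba returns the
# probability of each class, which is then mapped to a yes/no label.
# The breast-cancer dataset ships with scikit-learn, so the sketch is runnable.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000)  # higher max_iter so the solver converges
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_test[:3])  # probability per observation
print(probs)                           # e.g. [[P(class 0), P(class 1)], ...]
print(clf.predict(X_test[:3]))         # probabilities mapped to the target class
```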

Artificial Neural Networks

Just as the human brain is built up from layers upon layers of neurons, this model has various layers in which the input is transformed to churn out an output. From one layer to the next, weights are applied to the input and then tuned to make better predictions. The model learns as different functions are applied to the input at each layer.
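For a taste of this in code, here is a small sketch using scikit-learn's MLPClassifier on synthetic data; the layer sizes and activation are arbitrary choices made for illustration:

```python
# A small feed-forward network: each hidden layer applies a weighted sum plus
# a non-linear activation, and the weights are tuned during training.
# Synthetic data keeps the sketch self-contained.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(
    hidden_layer_sizes=(32, 16),  # two hidden layers
    activation="relu",            # non-linear function applied at each layer
    max_iter=500,
    random_state=0,
)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```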

K-Nearest Neighbor

You are the average of the five people surrounding you

This classifier learns by taking the nearest neighbors (the most similar examples) into consideration. It is considered a lazy learner because it simply compares attributes on the fly, without building an explicit model. Closeness is calculated as a distance, using either the Euclidean or the Manhattan distance.

Euclidean: For example, if x = (a, b) and y = (c, d),

Distance = √((a − c)² + (b − d)²)

Manhattan: For example, if x = (a, b) and y = (c, d),

Distance = |a − c| + |b − d|
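Putting it together, here is a brief K-Nearest Neighbor sketch with scikit-learn, where the p parameter switches between Euclidean (p = 2) and Manhattan (p = 1) distance; the data is synthetic and only there to make the example runnable:

```python
# K-nearest-neighbor sketch: no explicit model is built; prediction just
# measures the distance from a new point to the stored training points.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# p=2 -> Euclidean distance, p=1 -> Manhattan distance
knn = KNeighborsClassifier(n_neighbors=5, p=2)
knn.fit(X_train, y_train)  # effectively just stores the training data
print("test accuracy:", knn.score(X_test, y_test))
```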

Machine learning can be related to the way we use our experiences as humans to make decisions in our daily lives. Classification is a supervised machine learning technique used when there is a known outcome (class) and we want to use various occurrences (features) to predict the class of future observations. There are more technical and detailed explanations of the algorithms discussed here, but this post should get you started on your way to a deeper understanding.


Ugwu Arinze Christopher

~ Entrepreneur ~ Data Scientist ~ Writer ~ A Minimum Viable Product