# Capstone Project Blog

Today I am going to discuss several machine learning classification models. I tried a number of classification methods on my NLP project, and each produced different results. Here are the methods I will cover:

a. Logistic Regression

b. Support Vector Machine

c. Multinomial Naïve Bayes

1. Logistic Regression

Logistic regression is used for classification. It fits data to an S-shaped (sigmoid) curve and models the relationship between a dependent variable and one or more independent variables. Models of this type can sometimes be difficult to interpret. The model uses the logistic function to squeeze the output of a linear equation between 0 and 1. Logistic regression comes in a few varieties: binary logistic regression, multinomial logistic regression, and ordinal logistic regression. Multinomial logistic regression is used to predict categorical placement in, or the probability of membership in, the categories of a dependent variable.

As for pros: it can be used in many different applications, it is easy to implement and interpret, it is very efficient to train, and it makes no assumptions about the distributions of the classes in feature space. As for cons: logistic regression should not be used when the number of observations is smaller than the number of features, its major limitation is the assumption of linearity between the dependent variable and the independent variables, and it can only be used to predict discrete outcomes.
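Since my project was an NLP task, here is a minimal sketch of logistic regression for text classification with scikit-learn. The tiny corpus, labels, and test sentence below are invented for illustration only; they are not from my project data.

```python
# Minimal sketch: logistic regression for text classification.
# The sample sentences and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great movie, loved it", "terrible plot, boring",
    "fantastic acting", "waste of time",
    "really enjoyable", "awful and dull",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF turns raw text into the numeric features the model needs.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# predict_proba exposes the logistic output squeezed between 0 and 1.
print(clf.predict(["loved the acting"]))
print(clf.predict_proba(["loved the acting"]))
```

The probabilities from `predict_proba` always sum to 1 per sample, which is the "output squeezed between 0 and 1" behavior described above.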

2. Support Vector Machine

A support vector machine (SVM) is a supervised machine learning algorithm used for classification. It plots each observation as a point in n-dimensional space, with the value of each feature being the value of a coordinate. The SVM classifies the training data by drawing an optimal hyperplane, separating respondents from nonrespondents. The maximal margin classifier finds the maximum margin, so that the resulting separating hyperplane is farthest from the training observations among all separating hyperplanes. Different kernels can be used; a kernel is a way of computing the dot product of two vectors x and y in some feature space. Three common kernels are the linear, polynomial, and radial basis function (RBF) kernels.

Pros of the SVM are its versatility, that it is among the best algorithms when the classes are separable, and that it is well suited to extreme-case binary classification. Cons are that it can take a long time to train, it does not perform well when classes overlap, and selecting the appropriate kernel function can be tricky.
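Here is a minimal sketch of the three kernels mentioned above, run on a toy two-class data set (the data is synthetic and stands in for the respondents/nonrespondents example; it is not my project data).

```python
# Minimal sketch: trying the linear, polynomial, and RBF kernels of an SVM.
# The toy 2-D data set is synthetic, for illustration only.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated blobs stand in for two classes.
X, y = make_blobs(n_samples=100, centers=2, random_state=42)

scores = {}
for kernel in ("linear", "poly", "rbf"):
    # Each kernel computes the dot product of two vectors in some feature space.
    clf = SVC(kernel=kernel)
    clf.fit(X, y)
    scores[kernel] = clf.score(X, y)

print(scores)
```

When the classes are cleanly separable, as here, all three kernels score near perfectly, which matches the "best algorithm when classes are separable" point above.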

3. Naïve Bayes Model

The Naïve Bayes model is a classification technique based on Bayes' theorem, with an assumption of independence among predictors. It assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. Bayes' theorem, P(A|B) = P(B|A) · P(A) / P(B), provides a way to calculate the probability of a hypothesis given our prior knowledge.

Pros of Naïve Bayes are that it is easy and fast to predict the class of a test data set, and that it performs well when the independence assumption holds. The main con is that completely independent predictors are nearly impossible to find in practice.
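Here is a minimal sketch of Multinomial Naïve Bayes on text, the variant listed at the top of this post. The tiny spam/ham documents are invented for illustration only.

```python
# Minimal sketch: Multinomial Naïve Bayes for text classification.
# The sample documents and labels are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "win cash prize now", "meeting at noon tomorrow",
    "claim your free prize", "project deadline tomorrow",
]
labels = ["spam", "ham", "spam", "ham"]

# Word counts feed the multinomial likelihood; class priors come from
# label frequencies, and each word is treated as independent of the rest.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(docs, labels)

print(clf.predict(["free cash prize"]))  # → ['spam']
```

Note how little data the model needs to start making predictions; that speed and simplicity is the "easy and fast" pro mentioned above.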

References:

https://www.geeksforgeeks.org/advantages-and-disadvantages-of-logistic-regression/

https://www.statisticssolutions.com/what-is-logistic-regression/

https://towardsdatascience.com/support-vector-machines-svm-c9ef22815589
