Disadvantages:
• A neural network must be trained before it can be used.
• A neural network's architecture differs from that of a conventional CPU, so it must be emulated in software.
• Large neural networks demand intense computational resources.
Support Vector Machine:
1. Support Vector Machine (SVM) is one of the most widely used Supervised Learning techniques. Although it is also effective for Regression problems, its main application in Machine Learning is Classification.
2. The SVM algorithm seeks the best line or decision boundary that separates n-dimensional space into classes, so that fresh data points can be placed in the correct class in the future. This optimal decision boundary is called a hyperplane.
3. To construct the hyperplane, SVM selects the extreme points or vectors closest to the boundary. These extreme cases are called support vectors, which is where the name "Support Vector Machine" comes from.
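The idea above can be sketched in a few lines. This is a minimal illustration, assuming scikit-learn is available; the toy data and variable names are invented for the example and are not part of these notes.

```python
# Minimal SVM classification sketch (illustrative; assumes scikit-learn).
from sklearn import svm

# Toy 2-D training data: two linearly separable classes.
X = [[0, 0], [1, 1], [0, 1], [1, 0], [3, 3], [4, 4], [3, 4], [4, 3]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

clf = svm.SVC(kernel="linear")  # linear kernel -> hyperplane boundary
clf.fit(X, y)

print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))  # classify fresh points
print(clf.support_vectors_)                   # the extreme points defining the hyperplane
```

After fitting, `support_vectors_` holds only the points nearest the boundary; the rest of the training data plays no role in predictions.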
K-Nearest Neighbors:
1. K-NN is one of the simplest Machine Learning algorithms based on the Supervised Learning technique.
2. The K-Nearest Neighbors algorithm classifies a new case by assuming its similarity to existing cases and assigning it to the class it most resembles.
3. K-Nearest Neighbors (K-NN) stores all the available data and uses it to label a new data point. This means that as fresh data becomes available, the K-NN algorithm can quickly and simply place it into a well-suited category.
4. K-NN is a versatile approach that can be used for both regression and classification, though it is typically employed for the latter.
5. The K-Nearest Neighbors (K-NN) algorithm is non-parametric, which means it makes no assumptions about the underlying data being analyzed.
6. K-NN is sometimes referred to as a "lazy learner algorithm" because it does not immediately learn from the training set; instead, it stores the data and acts on it only when a new point must be classified.
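The points above can be sketched in plain Python. This is a minimal illustration; the function name, toy data, and choice of Euclidean distance are our own assumptions for the example.

```python
# Minimal K-NN classifier sketch (pure stdlib, illustrative only).
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Label `query` by majority vote among its k nearest training points."""
    # "Lazy learner": no training step -- all work happens at query time.
    dists = [(math.dist(x, query), label) for x, label in zip(train_X, train_y)]
    dists.sort(key=lambda pair: pair[0])           # nearest first
    k_labels = [label for _, label in dists[:k]]   # labels of the k neighbors
    return Counter(k_labels).most_common(1)[0][0]  # majority class

# Toy data: class "a" near the origin, class "b" near (5, 5).
X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, (0.5, 0.5)))  # -> a
print(knn_predict(X, y, (5.5, 5.5)))  # -> b
```

Note that `knn_predict` scans the whole training set on every call, which is exactly the computational-cost drawback discussed below.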
Advantages of KNN:
• It is simple to implement.
• It is robust to noisy training data.
• It can be more effective when a large amount of training data is available.
Disadvantages of KNN:
• The value of K always needs to be determined, which can sometimes be tricky.
• The distance to every data point in the training samples must be calculated, which increases the computational cost.
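One common way to address the first drawback is to try several K values and keep the one that scores best on held-out data. The sketch below is illustrative only; the helper, data, and candidate K values are invented for this example.

```python
# Sketch: choosing K by held-out accuracy (pure stdlib, illustrative).
from collections import Counter
import math

def knn_label(train, query, k):
    # Sort the whole training set by distance to the query (the costly step).
    near = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(label for _, label in near).most_common(1)[0][0]

train = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"), ((1, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b"), ((6, 6), "b")]
held_out = [((0.5, 0.5), "a"), ((5.5, 5.5), "b"), ((1, 0.5), "a")]

# Try several candidate K values and report held-out accuracy for each.
for k in (1, 3, 5):
    correct = sum(knn_label(train, q, k) == y for q, y in held_out)
    print(k, correct / len(held_out))
```

In practice the held-out set would be larger, or cross-validation would be used instead of a single split.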