
# Example of Find-S Algorithm | Machine Learning Find S Algorithm | Example 2

## FIND S Solved Example

Find-S is used to find a maximally specific hypothesis out of all possible hypotheses.

For more detail, visit our post titled Find S Algorithm with Example in Machine Learning | Machine Learning.

### Example of Find-S Algorithm

In this blog we are going to see one more example of the Find-S algorithm. We have already discussed the Find-S algorithm itself; you can read about it here. So let's start.

Here is the data set we are going to use:

| Citations | Size   | In Library | Price      | Editions | Buy |
|-----------|--------|------------|------------|----------|-----|
| Some      | Small  | No         | Affordable | Many     | No  |
| Many      | Big    | No         | Expensive  | One      | Yes |
| Some      | Big    | Always     | Expensive  | Few      | No  |
| Many      | Medium | No         | Expensive  | Many     | Yes |
| Many      | Small  | No         | Affordable | Many     | Yes |
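For readers who want to follow along in code, the table can be encoded as a small Python list. This is just a sketch; the tuple layout (five attribute values followed by the Buy label) is our own choice:

```python
# Each row: (Citations, Size, In Library, Price, Editions, Buy)
training_data = [
    ("Some", "Small",  "No",     "Affordable", "Many", "No"),
    ("Many", "Big",    "No",     "Expensive",  "One",  "Yes"),
    ("Some", "Big",    "Always", "Expensive",  "Few",  "No"),
    ("Many", "Medium", "No",     "Expensive",  "Many", "Yes"),
    ("Many", "Small",  "No",     "Affordable", "Many", "Yes"),
]
```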

In this training data set we have the parameters Citations, Size, In Library, Price, and Editions. On the basis of these parameters, a person decides whether or not to buy the book. So our target attribute is Buy.

In this algorithm, our goal is to find the maximally specific hypothesis that can correctly classify our test data.

We start with the most specific hypothesis.

First, we initialize h to the most specific hypothesis (one φ per attribute):
h0 = (φ, φ, φ, φ, φ)

Now we consider first training example:
x1 = (Some , Small , No , Affordable , Many)

This is a negative training example, so Find-S ignores it and the hypothesis remains unchanged:

h1 = (φ, φ, φ, φ, φ)

Now we consider second training example:
x2 = (Many , Big , No , Expensive , One)
This is a positive training example. Since every attribute in h1 is still φ, none of the attribute values in h is satisfied by the attribute values in x2. In general, we compare each attribute value of the hypothesis with the corresponding attribute value of the example: if they match, we keep it; otherwise the attribute value in the hypothesis is replaced with the next more general value.

So here, each attribute in h is replaced by the next more general constraint, i.e. the value from x2:

h2 = (Many , Big , No , Expensive , One)
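The update rule just described — keep a matching value, replace φ with the example's value, and replace any other mismatch with ? — can be sketched as a small Python helper (the function name `generalize` and the string markers `"φ"` and `"?"` are our own conventions):

```python
def generalize(h, x):
    """Minimally generalize hypothesis h so that it covers positive example x."""
    new_h = []
    for h_val, x_val in zip(h, x):
        if h_val == "φ":        # most specific value: adopt the example's value
            new_h.append(x_val)
        elif h_val == x_val:    # already consistent: keep it
            new_h.append(h_val)
        else:                   # mismatch: generalize to '?'
            new_h.append("?")
    return tuple(new_h)

h1 = ("φ", "φ", "φ", "φ", "φ")
x2 = ("Many", "Big", "No", "Expensive", "One")
h2 = generalize(h1, x2)  # every φ is replaced, so h2 equals x2
```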

Now we consider third training example:
x3 = (Some , Big , Always , Expensive , Few)

This is a negative training example, so Find-S ignores it and the hypothesis remains unchanged:

h3 = (Many , Big , No , Expensive , One)

Now we consider the fourth training example:
x4 = (Many , Medium , No , Expensive , Many)

This is a positive training example, so again we compare each attribute value of the hypothesis with the corresponding attribute value of the example: matches are kept, and mismatches are replaced with the more general value ?.

In this example, Many = Many is kept; Big ≠ Medium, so it becomes ? (which accepts both values); No = No and Expensive = Expensive are kept; One ≠ Many, so it also becomes ?.

After this
h4 = (Many, ?, No, Expensive, ?)
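This step can be checked directly with a one-line sketch in Python (variable names are ours):

```python
h3 = ("Many", "Big", "No", "Expensive", "One")
x4 = ("Many", "Medium", "No", "Expensive", "Many")
# Keep matching values; turn every mismatch into the general value '?'
h4 = tuple(h if h == x else "?" for h, x in zip(h3, x4))
print(h4)  # ('Many', '?', 'No', 'Expensive', '?')
```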

Now we consider fifth training example:
x5 = (Many , Small , No , Affordable , Many)

This is again a positive example, so we compare the attribute values of the example with those of the hypothesis.

The first attribute matches and is kept; in the second, ? is already more general than Small, so it stays ?; the third matches and remains No; the fourth does not match (Expensive ≠ Affordable), so it becomes ?; and the last stays ?.

After this
h5 = (Many, ?, No, ?, ?)

This is the final maximally specific hypothesis produced by Find-S for this data set.
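Putting all five steps together, the walkthrough can be reproduced with a short Find-S sketch in Python (the function name `find_s` and the data encoding are our own; `"φ"` marks the most specific value):

```python
def find_s(examples):
    """Find-S: start from the most specific hypothesis and minimally
    generalize it on each positive example; negative examples are ignored."""
    h = ("φ",) * 5
    for *attrs, label in examples:
        if label != "Yes":      # Find-S skips negative examples
            continue
        # Keep consistent values, adopt values over φ, generalize mismatches to '?'
        h = tuple(x if hv in ("φ", x) else "?" for hv, x in zip(h, attrs))
    return h

data = [
    ("Some", "Small",  "No",     "Affordable", "Many", "No"),
    ("Many", "Big",    "No",     "Expensive",  "One",  "Yes"),
    ("Some", "Big",    "Always", "Expensive",  "Few",  "No"),
    ("Many", "Medium", "No",     "Expensive",  "Many", "Yes"),
    ("Many", "Small",  "No",     "Affordable", "Many", "Yes"),
]
print(find_s(data))  # ('Many', '?', 'No', '?', '?')
```

Tracing the loop reproduces h2, h4, and h5 exactly as derived above.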
The Find-S algorithm, a cornerstone in machine learning, is a straightforward and efficient method used to construct a consistent hypothesis from a set of training examples. Operating within the context of supervised learning, it iteratively refines its hypothesis space by comparing the provided training data, ultimately converging towards the most specific hypothesis that accurately classifies the given examples. The algorithm starts with the most specific hypothesis, usually representing the smallest set of generalizations, and incrementally adjusts it based on the training data until a suitable hypothesis that fits the data perfectly is derived. This iterative process makes Find-S a foundational concept, providing a stepping stone for more complex machine learning algorithms and strategies.