In the previous segment, you learned the design of a perceptron. In this segment, you will learn how perceptrons can be trained to perform certain tasks. But first, let’s formally define the problem statement and fix some notations we’ll be using throughout this session.

The perceptron problem statement is defined as follows:

We need to find w and b such that wᵀx + b > 0 for all points where y = +1, and wᵀx + b < 0 for all points where y = −1.

Note that the step function used is defined as follows (writing z for its input, to avoid confusion with the data point x):

f(z) = +1 if z > 0

f(z) = −1 if z ≤ 0
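This step function, applied to wᵀx + b, can be sketched directly in Python (the function and variable names here are my own, not from the course):

```python
import numpy as np

def step(z):
    """Perceptron step activation: +1 if z > 0, else -1."""
    return 1 if z > 0 else -1

def predict(w, b, x):
    """Classify a point x with weights w and bias b."""
    return step(np.dot(w, x) + b)
```

`predict` returns +1 exactly when the point lies strictly on the positive side of the hyperplane wᵀx + b = 0.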

Let’s now see how we can represent the above statement more concisely.

So we see that a pair (w, b) is a valid separator if y(wᵀx + b) > 0 **for all the data points**, and not a valid separator if y(wᵀx + b) ≤ 0 for **any one** of the data points.

Let’s now solve some questions to concretize these concepts. Say you have the following data points with their corresponding ground truth values.

| Data point, x | Ground truth, y |
| --- | --- |
| (0, 3) | 1 |
| (5, 9) | 1 |
| (−1, −2) | −1 |
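As a quick check, we can test a candidate separator against these three points; the particular choice w = (1, 1), b = 0 below is mine, picked only for illustration:

```python
import numpy as np

# Data points and ground-truth labels from the table above.
X = np.array([[0, 3], [5, 9], [-1, -2]])
y = np.array([1, 1, -1])

# A candidate separator (hypothetical choice for illustration).
w = np.array([1, 1])
b = 0

# (w, b) is a valid separator iff y * (w.x + b) > 0 for every point.
margins = y * (X @ w + b)
print(margins)                     # every entry is positive
print(bool(np.all(margins > 0)))   # so this candidate is valid
```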

Note that the column vector [x1 x2 x3]ᵀ can also be written inline as (x1, x2, x3).

Before we move on, let us first tweak our representation a little, to **homogeneous coordinates**, which will help us formulate the perceptron solution more neatly.

So what homogeneous coordinates mean is as follows:

x, earlier represented as (x1, x2, ..., xd), transforms to (x1, x2, ..., xd, 1).

**w**, earlier represented as (w1, w2, ..., wd), transforms to (w1, w2, ..., wd, b).

This new representation does not explicitly show a bias term, though it intrinsically includes one: the single dot product of the augmented vectors equals wᵀx + b.
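We can verify this equivalence numerically; the particular values of w, b, and x below are arbitrary illustrative choices:

```python
import numpy as np

# Original representation: separate weights and bias (illustrative values).
w = np.array([1.0, 1.0])
b = -2.0
x = np.array([0.0, 3.0])

# Homogeneous representation: append b to w, and 1 to x.
w_h = np.append(w, b)    # (w1, w2, b)
x_h = np.append(x, 1.0)  # (x1, x2, 1)

# Both forms give the same pre-activation value w.x + b.
print(np.dot(w, x) + b)
print(np.dot(w_h, x_h))
```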

So you have understood how we use homogeneous coordinates to represent the perceptron more concisely. This will help us in illustrating some of the wonderful tasks a set of perceptrons can do. Let’s look at them in the next segment.
