Can thresholds be modeled as weights?

I'm trying to model a random threshold as a weight. The threshold should help the error decrease; the weights are not random, they are fixed at 1. Is it possible to change the threshold so that the error will be 0?

import numpy as np

# input dataset: the four possible binary input pairs
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]])

# output dataset: the logical AND of the two inputs
y = np.array([[0, 0, 0, 1]]).T

# both weights fixed at 1
syn0 = np.ones((2, 1))


# random integer threshold in [-5, 5]
# (np.random.random_integers is deprecated; randint's upper bound is exclusive)
threshold = np.random.randint(-5, 6)

# forward propagation: weighted sum, then a step activation
# that fires when the sum reaches the threshold
l0 = X
l1 = (np.dot(l0, syn0) >= threshold).astype(int)

# how much did we miss?
l1_error = y - l1
print(l1_error)

This is what I have so far!

Topic: neural-network, python

Category: Data Science


In this example you have two classes separated by a line with negative slope: the single positive point (1, 1) sits in the top-right corner, and no line with positive (or zero) slope can put it on one side and the three negative points on the other (if you don't see why, please comment). Your weights of (1, 1) define the decision boundary x1 + x2 = threshold, a line of slope -1, so in this particular case the threshold alone is enough: with the step activation above, threshold = 2 (or any value in (1, 2]) classifies all four points correctly and the error is 0. That is a lucky accident of the AND problem, though. The whole point of learning is to fit the separating line, and the line is defined by all of its coefficients; the threshold is just ONE OF THEM. If the fixed weights gave the boundary the wrong slope, no threshold could repair it.
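As a quick check, here is a minimal sketch (reusing the arrays from your snippet) that scans every integer threshold the randint call could produce and prints the error count for each; only threshold = 2 reaches zero:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0, 0, 0, 1]]).T
syn0 = np.ones((2, 1))  # both weights fixed at 1

# try every integer threshold that randint(-5, 6) could produce
for threshold in range(-5, 6):
    l1 = (np.dot(X, syn0) >= threshold).astype(int)  # step activation
    errors = int(np.abs(y - l1).sum())
    print(f"threshold={threshold:+d}  errors={errors}")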

PS: If you later want to upgrade to a multilayer perceptron and train it with backpropagation, you will need to initialize the weights to small random values rather than all to the same constant. If every weight starts at the same value, every hidden unit computes the same output and receives the same backpropagated error, so the units never differentiate; random initialization breaks that symmetry.
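A minimal sketch of such an initialization, assuming one hidden layer whose size (3 here) is an arbitrary choice for illustration:

import numpy as np

rng = np.random.default_rng(0)  # seeded only for reproducibility

n_inputs, n_hidden, n_outputs = 2, 3, 1

# small random weights break the symmetry between hidden units
syn0 = rng.uniform(-1, 1, size=(n_inputs, n_hidden))
syn1 = rng.uniform(-1, 1, size=(n_hidden, n_outputs))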
