Perceptron - Which step function to choose
I'm studying the Perceptron algorithm. Some books use this step function:

1 if x >= 0 else -1

where the condition is evaluated on the dot product of the weight vector w and a sample x. Other books use:

1 if x >= 0 else 0

What are the practical differences between these two step functions?
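To make the two conventions concrete, here is a minimal sketch (the function names and the toy numbers are illustrative, not taken from any particular book):

```python
import numpy as np

def step_sign(z):
    """Step convention used by some books: outputs +1 or -1."""
    return 1 if z >= 0 else -1

def step_heaviside(z):
    """Step convention used by other books: outputs 1 or 0."""
    return 1 if z >= 0 else 0

w = np.array([0.5, -0.2])   # weight vector
x = np.array([1.0, 2.0])    # one sample

z = np.dot(w, x)            # the quantity the condition is tested on
print(step_sign(z), step_heaviside(z))  # both fire "positive" exactly when z >= 0
```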
machine-learning neural-network deep-learning perceptron
asked Dec 31 '17 at 8:54 by Poiera · edited Jan 22 '18 at 1:20 by Vaalizaadeh
2 Answers
They have the same meaning in this context, although during training with the Rosenblatt update rule the {-1, +1} version can produce larger changes at each update. The perceptron is used for binary classification, i.e. there are exactly two classes. If the inner product (here a dot product) is greater than or equal to zero, the input is assigned to the first class; if it is smaller than zero, it is assigned to the other class. The perceptron has just one neuron and is a simple linear classifier, so only the threshold matters: a non-negative product means the input belongs to, say, the positive class, and a negative product means it belongs to the negative class. Hard step functions and the Rosenblatt update rule are rarely used any more because they oscillate a lot; today networks are trained with gradient-descent algorithms, which rely on derivatives.
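As a rough illustration of that first point, here is a hedged sketch of the error-driven form of the update, w <- w + lr * (y - y_hat) * x (the helper name and toy values are made up for this example): with {0, 1} targets the error term is at most 1 in magnitude, while with {-1, +1} targets it can reach 2, so each correction moves the weights twice as far.

```python
import numpy as np

def perceptron_update(w, x, y, step, lr=1.0):
    """One error-driven update: w <- w + lr * (y - y_hat) * x."""
    y_hat = step(np.dot(w, x))
    return w + lr * (y - y_hat) * x

w = np.zeros(2)
x = np.array([1.0, -1.0])

# With {0, 1} targets the error (y - y_hat) is at most 1 in magnitude ...
w_01 = perceptron_update(w, x, y=0,  step=lambda z: 1 if z >= 0 else 0)
# ... while with {-1, +1} targets it can reach 2, doubling the correction.
w_pm = perceptron_update(w, x, y=-1, step=lambda z: 1 if z >= 0 else -1)

print(w_01)   # [-1.  1.]
print(w_pm)   # [-2.  2.]
```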
As you progress, you will see that neural nets using other activation functions, such as the sigmoid or tanh, behave differently: the former has an expected output of about 0.5 and the latter of about 0, which makes the tanh network learn noticeably faster. Nowadays, though, ReLU is the more popular activation function.
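A quick numerical illustration of that last point (only an intuition check, not a proof; numpy and zero-mean standard normal pre-activations are assumed):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.random.randn(100_000)      # zero-mean pre-activations
print(sigmoid(z).mean())          # ~0.5 -> outputs are not zero-centred
print(np.tanh(z).mean())          # ~0.0 -> zero-centred, tends to train faster
print(np.maximum(0.0, z).mean())  # ~0.4 -> ReLU: non-negative, but cheap and widely used
```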
I think that depends on how the next step of the algorithm is defined in the respective textbook(s); there may be slight differences.

Your if-statements can be read as the following half-sentences. The first one says:

"If there is a change in sign, [update the weights; if there isn't, do nothing]."

The second one says:

"If the value is nonzero, [update the weights; otherwise do nothing]."

Maybe your textbooks differ in how the parts between the [ ... ] are written.
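As a hedged sketch of those two bracketed continuations, here are the update steps each convention is commonly paired with in textbooks (variable names are illustrative; your books may write them differently):

```python
import numpy as np

# With {-1, +1} targets, a common form updates on a sign disagreement:
def update_pm1(w, x, y, lr=1.0):
    if y * np.dot(w, x) <= 0:            # prediction and target differ in sign
        w = w + lr * y * x
    return w

# With {0, 1} targets, the same step is usually written with an error term:
def update_01(w, x, y, lr=1.0):
    y_hat = 1 if np.dot(w, x) >= 0 else 0
    return w + lr * (y - y_hat) * x      # nonzero only when the prediction is wrong
```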