normalization/denormalization for linear regression problem
My question is actually simple: I have two features that differ greatly in scale, so I applied a simple normalization, dividing both the data and the labels by scale = np.max(array). Then, after prediction, I multiplied this scale value back.
But since I used a DNN, wouldn't the nonlinearity change the scale and make the multiplication invalid? E.g., given input data X and label y:
y' = y / scale
X' = X / scale
predicted = f(X')
predicted_update = predicted * scale
Could anyone advise whether I can do this, or is it actually incorrect? How is this kind of problem usually handled?
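For concreteness, here is a minimal sketch of the workflow described above, with scikit-learn's MLPRegressor standing in for the DNN; the data and model settings are hypothetical.

import numpy as np
from sklearn.neural_network import MLPRegressor  # stand-in for the DNN

# Hypothetical toy data: two features on very different scales.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.0, 1.0, 200), rng.uniform(0.0, 1e4, 200)])
y = 2.0 * X[:, 0] + 1e-3 * X[:, 1]

# Max-scaling as described above: one scale per feature, one for the labels.
x_scale = X.max(axis=0)
y_scale = y.max()
X_scaled = X / x_scale
y_scaled = y / y_scale

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_scaled, y_scaled)

# Predictions come out in the scaled space; multiply the label scale back.
predicted = model.predict(X_scaled)
predicted_update = predicted * y_scale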
machine-learning linear-regression preprocessing
edited May 15 '18 at 8:26 by Toros91 · asked May 15 '18 at 8:22 by user2189731
1 Answer
I think it is OK, as long as your training and test data have approximately the same maximum values for every feature. The idea is that the scaling has to be done with the training set (remember that using the test set for anything other than testing is not allowed, not even for scaling).
So you actually fit $y'$ as a function of $X'$, and you have a model that properly maps $y' = f(X')$. When you get your test data, you just obtain the predictions by computing $f(X_{\text{test}}')$. As the previous paragraph states, if $\text{scale} \approx \text{scale}_{\text{test}}$, then you can recover $y_{\text{test}}$ by computing $y_{\text{test}} = \text{scale} \cdot f(X_{\text{test}}')$.
Edit: Don't worry about nonlinearities
Even if the function $f$ is highly nonlinear, it is a function capable of mapping $X'$ to $y'$. If you trust this function and trust the fact that $y = y' \cdot \text{scale}$, then there is no need to worry about the way $f$ acts: function composition makes sense for all kinds of functions, both linear and nonlinear.
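A sketch of this recipe under toy assumptions (hypothetical data, MLPRegressor as the model): the scales are fitted on the training set only and reused unchanged at test time.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 100.0, (500, 1))
X_test = rng.uniform(0.0, 100.0, (100, 1))
y_train = np.sin(X_train[:, 0] / 20.0) * 50.0   # some nonlinear target
y_test = np.sin(X_test[:, 0] / 20.0) * 50.0

# Scales come from the TRAINING set only and are reused on the test set.
x_scale = X_train.max(axis=0)
y_scale = y_train.max()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(X_train / x_scale, y_train / y_scale)

# Predict in the scaled space, then invert with the training-set scale.
y_pred = model.predict(X_test / x_scale) * y_scale
print("MAE in original units:", np.abs(y_pred - y_test).mean())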
edited May 15 '18 at 10:58 · answered May 15 '18 at 8:59 by David Masip
Thanks for the comment. For the training/test set, I'm splitting the whole data set, and both parts are scaled before training/testing. np.max() is a simplified notation; I use a separate scale for each feature, though I could use one. The question is really about the bold line you mentioned: why should I not worry about nonlinearities? I'm doubting the validity of y = y' * scale.
– user2189731, May 16 '18 at 7:11
If $\text{scale} \approx \text{scale}_{\text{test}}$, then $y_{\text{test}} = \text{scale} \cdot y_{\text{test}}'$. It is completely valid: scaling and transforming back to the original scale are inverse transformations.
– David Masip, May 16 '18 at 7:29
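A one-line check of this inverse-transformation point, on hypothetical values:

import numpy as np

y = np.array([10.0, 200.0, 3000.0])
scale = y.max()

# Dividing by a positive constant and multiplying it back are exact inverses,
# independently of whatever model sits in between.
assert np.allclose((y / scale) * scale, y)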
Suppose I use one universal scale for all X and y, regardless of the training/test split. Then y = f(X) is my original problem, and what I want to learn is f. After scaling, I am actually training y'_train = g(X'_train), and then I get $y'_{\text{test}} = g(X'_{\text{test}})$. To make sure y'_test == y_test, we have to make sure g ≈ f. Am I right?
– user2189731, May 18 '18 at 3:09
Correction to the last sentence: since $y_{\text{test}} = f(X_{\text{test}})$ and $y'_{\text{test}} = g(X'_{\text{test}})$, to make sure $y_{\text{test}} == \text{scale} \cdot y'_{\text{test}}$, I have to make sure $f(X_{\text{test}}) = \text{scale} \cdot g(X'_{\text{test}})$, which is equivalent to $f(X_{\text{test}}) = \text{scale} \cdot g(X_{\text{test}}/\text{scale})$, and this is not guaranteed?
– user2189731, May 18 '18 at 3:21
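One way to see why the identity holds for a perfectly fitted g, and only approximately otherwise: the scaled training pairs (X/scale, y/scale) themselves define g's target as f(scale · x')/scale, for which the descaling identity is exact. A hypothetical sketch:

import numpy as np

# Hypothetical ground-truth mapping f, nonlinear on purpose.
def f(x):
    return x ** 2 + 3.0

X = np.linspace(1.0, 10.0, 50)
y = f(X)
scale = max(X.max(), y.max())   # one universal scale, as in the comment

# The scaled training pairs (X/scale, y/scale) define g's target:
# a g that fits them exactly is g(x') = f(scale * x') / scale.
def g_ideal(x_prime):
    return f(scale * x_prime) / scale

# For this ideal g the identity f(X) = scale * g(X / scale) holds exactly;
# a trained network only approximates g_ideal, so any remaining gap is the
# model's fit error rather than a flaw in the descaling step.
assert np.allclose(scale * g_ideal(X / scale), f(X))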