Adding a custom constraint to weighted least squares regression model
I am trying to fit a weighted least squares model that looks something like this (but could be different):
$y = \beta_0 + \beta_1 x + \beta_2 \log(x) + \epsilon$
with weights $w_1, w_2, \dots$
However, I know from external knowledge that, whatever the model, the outcome must asymptotically converge to a constant for large values of $x$. How can I get a least squares estimate under this constraint?
As an example, if I knew the asymptote $c$, I could add two fake data points with very high values of $x$, very high weights $w$, and $y = c$, then run the normal WLS model and it would give me what I need. The problem is that I don't know the value of $c$. Is there a way to impose this constraint, maybe through adding a custom error term to the model?
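For concreteness, here is a minimal sketch of that workaround in Python with statsmodels, pretending for a moment that $c$ were known; the arrays are toy placeholders, and x_large and the huge pseudo-weights are arbitrary choices:
import numpy as np
import statsmodels.api as sm

x = np.array([1., 2., 3., 4., 5., 6., 7., 8., 9.])  # toy data (placeholders)
y = np.array([1., 2., 3., 3., 3., 3., 3., 3., 3.])
w = np.ones_like(y)                                  # original weights

c = 3.0        # pretend the asymptote were known
x_large = 1e4  # two fake points far out on the x axis, pinned to y = c
x_aug = np.append(x, [x_large, 2 * x_large])
y_aug = np.append(y, [c, c])
w_aug = np.append(w, [1e6, 1e6])                     # very high weights

X = sm.add_constant(np.column_stack([x_aug, np.log(x_aug)]))
fit = sm.WLS(y_aug, X, weights=w_aug).fit()
print(fit.params)  # intercept, coefficient on x, coefficient on log(x)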
python regression linear-regression
asked 5 hours ago by ste_kwr, edited 9 mins ago
Perhaps solve the equation $y = \max(\beta_0 + \beta_1 x + \beta_2 \log(x),\, c) + \epsilon$ instead of the original? You will have to add artificial points of the form $(x_{\text{large}}, c)$ to the data.
– Juan Esteban de la Calle, 5 hours ago
I don't know the value of $c$, so I imagine the loss function would somehow need to take care of this. If I knew $c$, I have described in the question how I would go about it.
– ste_kwr, 5 hours ago
Maybe you can try to fit something like a modified logit model. I have never tried something like this and I don't know anything about a possible implementation, but a logit regression has a natural limit of $1$, and you may work with an unknown limit. The equation would be like this: $Y = \frac{c}{1 + e^{-(\beta_0 + \beta_1 x + \beta_2 \log(x))}}$
– Juan Esteban de la Calle, 4 hours ago
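As a quick check on the modified-logit suggestion (assuming the fitted $\beta_1$ comes out positive, which a nonzero asymptote requires), the curve does converge to the unknown constant:
$$\lim_{x \to \infty} \frac{c}{1 + e^{-(\beta_0 + \beta_1 x + \beta_2 \log(x))}} = c, \qquad \text{since } e^{-(\beta_0 + \beta_1 x + \beta_2 \log(x))} \to 0 \text{ when } \beta_1 > 0.$$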
1 Answer
The model you are looking for is this:
$Y = \frac{A}{1 + e^{-(\beta_0 + \beta_1 x + \beta_2 \log(x))}}$
I could not fit exactly that model, but a very similar one worked. This code in R might work:
R <- data.frame(X = c(1, 2, 3, 4, 5, 6, 7, 8, 9), Y = c(1, 2, 3, 3, 3, 3, 3, 3, 3))  # X grows linearly; Y approaches a still-unknown limit
model <- nls(formula = Y ~ A / (1 + exp(-(b0 + b1 * X))), data = R)
# if nls fails (e.g. with a singular gradient error), supply explicit starting values, for example start = list(A = max(R$Y), b0 = 0, b1 = 1)
summary(model)
In the output you can see that the estimate of $A$ recovers the (previously unknown) limit of 3.
There is a limitation to take into account, explained in the links below: not every inner model could be fitted this way; the expression inside the exponent had to stay linear. The model $\beta_0 + \beta_1 x + \beta_2 \log(x)$ could not be used, but the model $\beta_0 + \beta_1 x$ could; take this into account.
First steps with Non-Linear Regression in R
Singular Gradient Error in nls with correct starting values
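Since the question is tagged python, here is a rough counterpart sketch using scipy.optimize.curve_fit; unlike the nls call above it keeps the full $\beta_0 + \beta_1 x + \beta_2 \log(x)$ term inside the exponent and the observation weights, but the toy data, the starting values p0, and the weights w are assumed placeholders, not a tested recipe:
import numpy as np
from scipy.optimize import curve_fit

x = np.array([1., 2., 3., 4., 5., 6., 7., 8., 9.])
y = np.array([1., 2., 3., 3., 3., 3., 3., 3., 3.])
w = np.ones_like(y)  # observation weights (placeholder)

def asymptote_model(x, c, b0, b1, b2):
    # logistic-style curve whose upper asymptote c is estimated from the data
    return c / (1.0 + np.exp(-(b0 + b1 * x + b2 * np.log(x))))

p0 = [y.max(), 0.0, 1.0, 0.0]  # rough starting values (assumed)
params, cov = curve_fit(asymptote_model, x, y, p0=p0, sigma=1.0 / np.sqrt(w))
print(params)  # first entry is the estimated asymptote c
With sigma = 1/sqrt(w), curve_fit minimises the same weighted sum of squared residuals as WLS with weights w.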
answered 3 hours ago by Juan Esteban de la Calle (new contributor)