Handling missing values to optimize polynomial features
I was playing around with some data to practice my Python and machine learning skills, and I wanted to create polynomial features from two features that I believe are related and have a strong influence on the predicted output.
Unfortunately my data contains missing values (np.nan), and scikit-learn's PolynomialFeatures() cannot handle them. What is the best way to impute these values?
I have tried replacing them with 0, 1, the mean, and the median; for my dataset the median seems to work best. Can this be generalized, and what is the intuition behind it?
I was also wondering whether fill methods such as ffill, bfill, or even KNN-based imputation can be useful in this context.
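Roughly, this is what I am doing. A minimal sketch where the data, column layout, and final estimator are just placeholders (it assumes scikit-learn >= 0.20 for SimpleImputer):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    # Toy stand-in for the two related features; the real data has many more rows.
    X = np.array([[1.0, 2.0],
                  [np.nan, 3.0],
                  [4.0, np.nan],
                  [5.0, 6.0]])
    y = np.array([3.0, 5.0, 9.0, 11.0])

    # Impute first so PolynomialFeatures never sees NaN, then expand and fit.
    model = make_pipeline(
        SimpleImputer(strategy="median"),  # also tried "mean" and constant 0 / 1
        PolynomialFeatures(degree=2, include_bias=False),
        LinearRegression(),
    )
    model.fit(X, y)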
feature-engineering missing-data data-imputation
asked Oct 21 '18 at 8:47 by Steven Van Dorpe (162)
1 Answer
There is no globally (one could say even locally) ideal way to deal with missing data. Missingness points to incompleteness in the data you feed your algorithms, and imputation is simply a technique for filling those gaps.
The goal of imputation is to make the feature distributions in your dataset as close as possible to the real-world distributions they are meant to represent.
I can't say precisely why the median worked best in your scenario without access to your data, but the result is closely tied to your feature distribution: for the missing points, the feature's median evidently represents the underlying values better than the other statistics you tried. A skewed feature with outliers is a typical case, since the median is robust to outliers while the mean is pulled toward them. I'd recommend reading material such as this article, which both explains and implements different imputation techniques, KNN among them; one of its advantages is showing, side by side, how different methods reshape a given distribution.
As you already point out at the end of your question, methods such as KNN are straightforward means of imputation that may serve your case better than mean/median imputation. The biggest difference between them is that KNN better preserves the variance in your data, whereas mean imputation collapses every missing entry to a single value.
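To make that concrete, here is a small synthetic sketch (not your data) where the two features are correlated; it assumes a recent scikit-learn that ships KNNImputer (>= 0.22):

    import numpy as np
    from sklearn.impute import SimpleImputer, KNNImputer

    rng = np.random.RandomState(0)
    x2 = rng.normal(size=500)
    x1 = 2.0 * x2 + rng.normal(scale=0.5, size=500)   # x1 is predictable from x2
    X = np.column_stack([x1, x2])
    X[rng.rand(500) < 0.2, 0] = np.nan                # knock out 20% of x1

    mean_imputed = SimpleImputer(strategy="mean").fit_transform(X)
    knn_imputed = KNNImputer(n_neighbors=5).fit_transform(X)

    print("variance of observed x1:       ", np.nanvar(X[:, 0]))
    print("variance after mean imputation:", mean_imputed[:, 0].var())  # typically shrinks
    print("variance after KNN imputation: ", knn_imputed[:, 0].var())   # typically stays closer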
Since no method is globally optimal, I'd advise you to try several (KNN, multiple imputation, and the like), compare them, and invest your time in the techniques that make the most sense for your data.
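One practical way to compare them is to treat the imputer as just another pipeline step and cross-validate each variant. A sketch on assumed synthetic data, with the downstream estimator and scoring as placeholders (IterativeImputer is scikit-learn's experimental, MICE-inspired imputer and needs the explicit enable import):

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in with missing values injected into one feature.
    X, y = make_regression(n_samples=300, n_features=2, noise=5.0, random_state=0)
    rng = np.random.RandomState(0)
    X[rng.rand(300) < 0.15, 0] = np.nan

    imputers = {
        "median": SimpleImputer(strategy="median"),
        "knn": KNNImputer(n_neighbors=5),
        "iterative": IterativeImputer(random_state=0),  # MICE-inspired
    }

    for name, imputer in imputers.items():
        pipe = make_pipeline(imputer, PolynomialFeatures(degree=2), Ridge())
        scores = cross_val_score(pipe, X, y, cv=5, scoring="neg_mean_squared_error")
        print(f"{name:>9}: {scores.mean():.2f} +/- {scores.std():.2f}")

Keeping the imputer inside the pipeline also means each cross-validation fold computes its imputation statistics only from its own training split, so the comparison is not contaminated by leakage from the validation data.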
answered 36 mins ago by jcezarms (515, New contributor)