Handling missing values to optimize polynomial features



I was playing around with some data to practice my Python and machine learning skills, and I wanted to create polynomial features from two features that I believe are related and have a strong influence on the predicted output.

Unfortunately, my data has missing values (np.NaN), and sklearn's PolynomialFeatures() cannot handle them. What is the best way to impute these values?

I have tried replacing them with 0, 1, the mean, and the median; for my dataset, the median seems to work best. But can this be generalized, and what is the intuition behind it?

I was also wondering whether filling methods such as ffill, bfill, or even KNN modelling can be useful in this context.
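For concreteness, here is a minimal sketch of the setup described above: impute first, then expand. The DataFrame and the column names ("feat_a", "feat_b") are hypothetical, chosen purely for illustration.

    # A minimal sketch: median imputation followed by degree-2 polynomial
    # features. The column names "feat_a" and "feat_b" are placeholders.
    import numpy as np
    import pandas as pd
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    df = pd.DataFrame({
        "feat_a": [1.0, 2.0, np.nan, 4.0],
        "feat_b": [10.0, np.nan, 30.0, 40.0],
    })

    # SimpleImputer fills each NaN with its column's median, so
    # PolynomialFeatures then sees a complete matrix and no longer
    # raises on missing values.
    pipe = make_pipeline(
        SimpleImputer(strategy="median"),
        PolynomialFeatures(degree=2, include_bias=False),
    )
    X_poly = pipe.fit_transform(df[["feat_a", "feat_b"]])
    print(X_poly.shape)  # (4, 5): a, b, a^2, a*b, b^2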










Tags: feature-engineering, missing-data, data-imputation






asked Oct 21 '18 at 8:47 by Steven Van Dorpe




















1 Answer






There is no globally (one could even say locally) ideal way to deal with missing data. Missing values point to incompleteness in the data you feed your algorithms, and imputation is simply a technique for filling those gaps.

The motivation behind data imputation is to make the feature distribution in your dataset as close as possible to the real-world distribution it attempts to portray.

Why the median worked best in your scenario is not something I can pin down without access to your data, but the positive result is closely tied to your feature distribution: for the missing points, the feature's median apparently represents the data better than the other statistics you calculated. I'd recommend reading material such as this article, which both explains and implements several imputation techniques, KNN among them. One advantage of such a comparison is seeing how different methods behave for a given distribution:

[Image: imputation techniques over time-series data]

As you already pointed out at the end of your question, methods such as KNN are straightforward means of imputation that may suit your case better than mean or median imputation. The biggest difference is that KNN better preserves the variance in your data, whereas mean imputation (as the image above shows) collapses missing entries onto a single value.
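To make the variance point concrete, here is a small sketch on synthetic data using scikit-learn's KNNImputer (available in scikit-learn 0.22 and later); the data and parameters are illustrative assumptions, not prescriptions.

    # Compare how median vs. KNN imputation affect the variance of a
    # column with missing values. The synthetic data is for illustration.
    import numpy as np
    from sklearn.impute import KNNImputer, SimpleImputer

    rng = np.random.RandomState(0)
    X = rng.normal(size=(200, 2))
    X[:, 1] = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)  # correlated columns
    X[rng.rand(200) < 0.2, 1] = np.nan  # knock out ~20% of the second column

    X_median = SimpleImputer(strategy="median").fit_transform(X)
    X_knn = KNNImputer(n_neighbors=5).fit_transform(X)

    # Median imputation collapses every missing entry onto one value and
    # shrinks the column's variance; KNN, which borrows values from
    # similar rows, preserves it far better.
    print("variance after median imputation:", X_median[:, 1].var())
    print("variance after KNN imputation:  ", X_knn[:, 1].var())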



Since no method is 100% globally optimal, I'd advise you to try several of them, such as KNN and multiple imputation, and compare the results; invest your time in the techniques that make the most sense for your data.
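One way to run that comparison is to score each candidate imputer inside the same downstream pipeline with cross-validation. The estimator, synthetic data, and hyperparameters below are assumptions for illustration only; swap in your own X, y, and model.

    # Sketch: compare imputation strategies by their downstream CV score.
    import numpy as np
    from sklearn.impute import KNNImputer, SimpleImputer
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.RandomState(42)
    X = rng.normal(size=(300, 2))
    y = X[:, 0] * X[:, 1] + rng.normal(scale=0.1, size=300)  # interaction target
    X[rng.rand(300) < 0.15, 0] = np.nan  # inject missingness

    candidates = {
        "mean": SimpleImputer(strategy="mean"),
        "median": SimpleImputer(strategy="median"),
        "knn": KNNImputer(n_neighbors=5),
    }
    for name, imputer in candidates.items():
        # Imputer runs inside the fold, so no information leaks across splits.
        pipe = make_pipeline(imputer, PolynomialFeatures(degree=2), LinearRegression())
        scores = cross_val_score(pipe, X, y, cv=5)
        print(f"{name:6s} mean R^2 = {scores.mean():.3f}")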






answered 36 mins ago by jcezarms (a new contributor)