How to deal with missing data for Bernoulli Naive Bayes?





I am dealing with a dataset of categorical data that looks like this:



   content_1  content_2  content_4  content_5  content_6
0        NaN        0.0        0.0        0.0        NaN
1        NaN        0.0        0.0        0.0        NaN
2        NaN        NaN        NaN        NaN        NaN
3        0.0        NaN        0.0        NaN        0.0


These data represent user downloads from an intranet, where a user is shown the opportunity to download a particular piece of content: 1 indicates the user saw the content and downloaded it, 0 indicates the user saw the content but did not download it, and NaN means the user did not see / was not shown that piece of content.



I am trying to use the scikit-learn Bernoulli Naive Bayes model to predict the probability of a user downloading content_1, given whether they have seen and downloaded, or seen and not downloaded, content_2-7.



I have removed all rows where content_1 is NaN, since I'm only interested in data points where the user actively made a decision. This gives:



   content_1  content_2  content_3  content_4  content_5  content_6
0        1.0        NaN        1.0        NaN        NaN        1.0
1        0.0        NaN        NaN        0.0        1.0        0.0
2        1.0        0.0        NaN        NaN        NaN        1.0


In the above framework, NaN is a missing value. For data points where a NaN is present, I want the algorithm to ignore that category and use only the categories that are present in the calculation.



I know from similar questions that there are essentially three options when dealing with missing values:



  1. Ignore the data point if any category contains a NaN (i.e. remove the row);

  2. Impute some other placeholder value (e.g. -1); or

  3. Impute some average value corresponding to the overall dataset
    distribution.

However, none of these is a good option here, for the following reasons:



  1. Every single row contains at least one NaN, so under this
    arrangement I would discard the entire dataset. Obviously a no-go.

  2. I do not want the missing value to contribute to the probability
    calculation, which is what would happen if I replaced NaN with, say, -1. I'm also using Bernoulli Naive Bayes, which as I understand it requires strictly 0 or 1 values.

  3. As this is categorical data, imputing an average does not make sense:
    the content was either seen or not, and if it wasn't, no value is needed.

The answer here indicated that the best approach is, when calculating probabilities, to ignore a category if its value is missing (essentially: compute a probability based only on the categories with non-missing values).
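

To make that concrete (this formula is my reading of that answer, not something it states explicitly): for a row $x$ with observed feature set $O(x)$, each class would be scored using only the observed features,

$$P(y \mid x) \propto P(y) \prod_{i \in O(x)} P(x_i \mid y).$$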



What I do not know is how to encode this behaviour when using the scikit-learn Naive Bayes model, i.e. how to make it treat NaN as "missing" rather than as a value.



Here's what I have so far:



import pandas as pd
from sklearn.naive_bayes import BernoulliNB

df = pd.read_clipboard()

# Create train input / output data
y_train = df['content_1'].values
X_train = df.drop('content_1', axis=1).values

# Build and fit Bernoulli Naive Bayes model
clf = BernoulliNB()
clf.fit(X_train, y_train)  # raises an error: the input contains NaN


Obviously, this raises an error because of the NaNs. So how can I get the scikit-learn Bernoulli model to automatically ignore the columns containing NaNs, and use only those with 0 or 1?



I am aware this may not be possible with the stock model, and reviewing the documentation seems to confirm as much. It may therefore require significant coding, so to be clear: I am not asking anyone to write this for me (nor do I expect it); I'm looking to be pointed in the right direction, for instance by someone who has faced this problem and can share how they approached it, or by relevant blog or tutorial posts (my searches have turned up nothing).



Thanks in advance - appreciate you reading.










Tags: python, classification, scikit-learn, naive-bayes-classifier, missing-data






asked Oct 23 '18 at 10:14 by Chuck
1 Answer







          Your search results are on point: without dropping or imputing data, there's no built-in way to do what you want with BernoulliNB.



          There is, however, a way out: train separate Bayesian models on filtered samples from your data, and then combine their predictions by stacking them.



          Filtering



          Filtering here means:



• Isolating samples from your original df, each having only a subset of df.columns. That way, you'd have a DataFrame only for content_2, one for content_2 and content_3, and so on across the combinations of columns.

          • Making sure each sample is made only of rows that have no NaNs for any of the columns in the subset.

This part is somewhat straightforward in your case, yet a bit lengthy: with $n$ feature columns you'd have $2^n - 1$ non-empty subsets of columns, each of which would result in a separate sample. For example, you could have a sample named df_c2 containing only content_2 rows valued 0 or 1, df_c2_c3 with only the content_2 and content_3 columns filled, and so on.



These samples would make NaN values non-existent to every model you'd train. Implementing this in a smart way can be cumbersome, so I advise starting with the simplest scenario - e.g. two samples, two models, as in the sketch below - and improving gradually until you reach a solid solution in code.
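

A minimal sketch of the filtering step (the helper name train_subset_models and the content_1 target default are my own illustrative choices, not a prescribed API): enumerate the column subsets, drop incomplete rows per subset, and fit one BernoulliNB per subset.

from itertools import combinations

import pandas as pd
from sklearn.naive_bayes import BernoulliNB

def train_subset_models(df, target="content_1"):
    # Fit one BernoulliNB per non-empty feature subset, using only
    # the rows that are fully observed for that subset.
    features = [c for c in df.columns if c != target]
    models = {}
    for k in range(1, len(features) + 1):
        for subset in combinations(features, k):
            # keep only rows with no NaN in the target or the subset
            sample = df[[target, *subset]].dropna()
            if sample.empty or sample[target].nunique() < 2:
                continue  # need both classes present to fit
            clf = BernoulliNB()
            clf.fit(sample[list(subset)].values, sample[target].values)
            models[subset] = clf
    return models

At prediction time you would route each row to the model whose column subset matches the row's observed features; since the $2^n - 1$ subsets grow quickly, that's another reason to start with just two or three.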



          Stacking Bayesian Models



This is called Bayesian Model Averaging (BMA), and as a concept it's thoroughly addressed in this paper. There, the weight attributed to each Bayesian model's predictions is that model's posterior probability.



The content can be overwhelming to absorb in one go, so be at ease if some of it doesn't stick with you. The main point is that you multiply each model's predicted probabilities by a weight $0 < w < 1$ and then sum; as long as the weights sum to 1, the result stays in $[0, 1]$. You can assign weights empirically at first and see where that gets you.
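

A sketch of that weighted combination (the function name bma_predict_proba and the skip-and-renormalise handling of unusable models are my own assumptions layered on top of the idea):

import numpy as np

def bma_predict_proba(models, weights, row):
    # Weighted average of per-model P(y=1 | x), using only the
    # models whose column subset is fully observed in this row.
    # models:  dict mapping a column-subset tuple -> fitted BernoulliNB
    # weights: dict mapping the same subsets -> non-negative weights
    # row:     dict-like mapping column name -> 0.0 / 1.0 / NaN
    total_w = combined = 0.0
    for subset, clf in models.items():
        values = [row[c] for c in subset]
        if any(np.isnan(v) for v in values):
            continue  # this model's features aren't all observed
        x = np.array([values])
        combined += weights[subset] * clf.predict_proba(x)[0, 1]
        total_w += weights[subset]
    # renormalise so the usable weights sum to 1, which keeps the
    # combined probability inside [0, 1]
    return combined / total_w if total_w > 0 else float("nan")

Starting with equal weights, e.g. weights = {subset: 1 / len(models) for subset in models}, is a reasonable empirical baseline before trying posterior-based weights.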




          Edit:



Due to the added complexity of my proposed solution, and as stated in this (also useful) answer, you could opt to implement Naive Bayes in pure Python, since it's not complicated (and there are plenty of tutorials to build on). That would make it a lot easier to bend the algorithm to your needs.
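

For instance, a minimal missing-aware Bernoulli naive Bayes might look like the sketch below (illustrative only: it assumes strictly 0/1 features with NaN for "not shown", estimates each P(x_j = 1 | y) from the rows where feature j is observed, and multiplies likelihoods only over a query row's observed features):

import numpy as np

class MissingAwareBernoulliNB:
    # Bernoulli naive Bayes that simply skips NaN entries, both when
    # estimating the per-feature conditionals and when scoring a row.

    def fit(self, X, y, alpha=1.0):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.log_prior_ = np.log(
            np.array([np.mean(y == c) for c in self.classes_]))
        # theta_[c, j] = P(x_j = 1 | y = c), Laplace-smoothed and
        # computed over the non-NaN rows of feature j only
        self.theta_ = np.empty((len(self.classes_), X.shape[1]))
        for ci, c in enumerate(self.classes_):
            Xc = X[y == c]
            n_seen = (~np.isnan(Xc)).sum(axis=0)  # observed rows per feature
            n_ones = np.nansum(Xc, axis=0)        # 1-valued rows per feature
            self.theta_[ci] = (n_ones + alpha) / (n_seen + 2 * alpha)
        return self

    def predict_proba_one(self, x):
        x = np.asarray(x, dtype=float)
        obs = ~np.isnan(x)  # only observed features contribute
        log_post = self.log_prior_.copy()
        for ci in range(len(self.classes_)):
            p = self.theta_[ci, obs]
            log_post[ci] += np.sum(
                np.where(x[obs] == 1, np.log(p), np.log(1 - p)))
        post = np.exp(log_post - log_post.max())
        return post / post.sum()  # ordered like self.classes_

Because the class does the skipping itself, no per-subset filtering or model averaging is needed; whether that trade-off is worth giving up scikit-learn's tooling is up to you.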






answered 15 mins ago, edited 1 min ago, by jcezarms (new contributor)












