Handling a combined dataset of numerical and categorical features for Regression



I have a dataset with a large number of categorical features and a few numerical features, and I want to predict the probability that any given input belongs to one of the two classes of a binary output feature. I don't want to frame this as a classification problem, for reasons that are well outlined in this link.

I have seen many tutorials on how to handle categorical and numerical features independently, but I am less sure how to handle them together.










Tags: python, regression, pandas






      asked May 21 '18 at 12:49









dward4

3 Answers






"I don't want to solve a classifier problem for reasons that are well outlined in this link."

I doubt the link is telling you to stop performing classification tasks - the problem you propose is a classic example of classification. As I understand your source, it simply does not want you to use scoring rules as a heuristic.

For the problem you described, I would propose a simple Naive Bayes approach. To make your numerical values discrete, you can use the midpoint between two adjacent numeric values as a threshold. E.g. for the list [1, 2] of numeric values, split at a threshold of 1.5 and check whether a value falls above or below it.
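The discretise-then-Naive-Bayes idea above can be sketched as follows. This is a minimal illustration on hypothetical toy data (the column names and values are made up), assuming scikit-learn's `CategoricalNB` is available:

```python
import numpy as np
import pandas as pd
from sklearn.naive_bayes import CategoricalNB

# Hypothetical toy data: one numeric and one categorical feature.
df = pd.DataFrame({
    "age":    [22, 35, 47, 51, 29, 62],
    "colour": ["red", "blue", "red", "green", "blue", "red"],
    "target": [0, 1, 1, 1, 0, 1],
})

# Discretise the numeric column: midpoints between adjacent sorted
# values act as thresholds, as suggested above.
vals = np.sort(df["age"].unique())
thresholds = (vals[:-1] + vals[1:]) / 2
age_binned = np.digitize(df["age"], thresholds)

# Encode the categorical column as integer category codes.
colour_codes = df["colour"].astype("category").cat.codes

X = np.column_stack([age_binned, colour_codes])
model = CategoricalNB().fit(X, df["target"])
proba = model.predict_proba(X)[:, 1]  # P(target == 1) for each row
```

`predict_proba` then yields the class probabilities the question asks for, rather than hard labels.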






                  answered May 21 '18 at 13:51









André

One could approach this in two general ways:

1) bottom up: thinking about unifying the data somehow to begin with

2) top down: deciding how the data needs to look based on the final model you wish to use

Do you already know which model you will use? If that is fixed (for whatever reason), you already know you need to get your data into the corresponding form, be it numerical or categorical.

As you tagged your question with regression, I can tell you that you need to make your data all numerical so that regression can work.

An example of making numerical data categorical would be to put it into bins. Imagine we have values ranging from zero to ten: [0.173, 7.88, 3.91, ...]. You could simply say that values between 0.00 and 0.99 are category A, values between 1.00 and 1.99 are category B, and so on.

[Edit:]

A slightly more sophisticated way of defining the bins would be to base them on some characteristic statistics of your dataset. For example, have a look at the bin-estimation methods implemented in Python's NumPy. Of the available methods there, I have found the Doane method to work best - it will depend on your data though, so read the descriptions.

Making categorical values numerical in a meaningful way depends a little more on your data. It is easy to make them numeric, but you should focus on doing it in such a way as to retain as much of the information each variable contains, as well as the relative relationships between the categories you started with. E.g. converting colours into integers would allow you to perform regression, but if yellow becomes 1 and purple 10, the model needs to learn that purple isn't necessarily 10 times bigger than yellow, and that is difficult in the context of regression!
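The statistics-based binning described above can be sketched with NumPy's `histogram_bin_edges`, which implements the Doane estimator among others; the data here is randomly generated purely for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
values = rng.normal(5, 2, size=200)  # hypothetical numeric feature

# Let NumPy choose bin edges from the data's statistics; 'doane' is the
# estimator mentioned above (others include 'sturges', 'fd', 'auto').
edges = np.histogram_bin_edges(values, bins="doane")

# Map each value to an ordered category A, B, C, ...
labels = [chr(ord("A") + i) for i in range(len(edges) - 1)]
binned = pd.cut(values, bins=edges, labels=labels, include_lowest=True)
```

`pd.cut` keeps the bin order, so the resulting categories are ordinal rather than arbitrary labels.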






                      edited May 21 '18 at 14:59

























                      answered May 21 '18 at 13:45









n1k31t4

• Great, currently I'm using one-hot encoding on my columns to create a larger data frame of binary variables. Thanks for the in-depth way of thinking through it. – dward4, May 21 '18 at 14:15










• @dward4 - you're welcome :) - have a look at the extra information I added regarding the use of histogram methods. – n1k31t4, May 21 '18 at 15:00
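The one-hot encoding mentioned in the comment above can be done directly in pandas; a minimal sketch with a hypothetical colour column:

```python
import pandas as pd

# Hypothetical categorical column, as in the comment above.
df = pd.DataFrame({"colour": ["red", "blue", "green", "red"]})

# One-hot encoding: each category becomes its own binary column, so no
# artificial ordering (e.g. yellow=1, purple=10) is imposed.
dummies = pd.get_dummies(df["colour"], prefix="colour")
df = df.join(dummies)
```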
















Adding to the answers above: one more useful technique is target encoding, which means encoding your categorical features according to the target variable, using some aggregate such as the per-category mean (it works well out of the box).
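A minimal sketch of target encoding with pandas, on hypothetical data; the aggregate is computed on training data only, since using the full dataset would leak the target:

```python
import pandas as pd

# Hypothetical training data with a binary target.
train = pd.DataFrame({
    "city":   ["NY", "LA", "NY", "SF", "LA", "NY"],
    "target": [1, 0, 1, 0, 1, 0],
})

# Target encoding: replace each category by an aggregate of the target,
# here the per-city mean of the binary label.
means = train.groupby("city")["target"].mean()
train["city_encoded"] = train["city"].map(means)
```

For high-cardinality features it is common to smooth the per-category mean toward the global mean, and to compute the encoding out-of-fold, to further limit target leakage.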






                          answered May 21 '18 at 16:06









Aditya



                              Partai Komunis Tiongkok Daftar isi Kepemimpinan | Pranala luar | Referensi | Menu navigasidiperiksa1 perubahan tertundacpc.people.com.cnSitus resmiSurat kabar resmi"Why the Communist Party is alive, well and flourishing in China"0307-1235"Full text of Constitution of Communist Party of China"smengembangkannyas

                              ValueError: Expected n_neighbors <= n_samples, but n_samples = 1, n_neighbors = 6 (SMOTE) The 2019 Stack Overflow Developer Survey Results Are InCan SMOTE be applied over sequence of words (sentences)?ValueError when doing validation with random forestsSMOTE and multi class oversamplingLogic behind SMOTE-NC?ValueError: Error when checking target: expected dense_1 to have shape (7,) but got array with shape (1,)SmoteBoost: Should SMOTE be ran individually for each iteration/tree in the boosting?solving multi-class imbalance classification using smote and OSSUsing SMOTE for Synthetic Data generation to improve performance on unbalanced dataproblem of entry format for a simple model in KerasSVM SMOTE fit_resample() function runs forever with no result