Ordered elements of feature vectors for autoencoders?



Here is a newbie question: when one trains an autoencoder or a variational autoencoder, does the order of the elements in the training vector $x$ matter?



Suppose I take an MNIST image $(28 \times 28)$ and turn it into a feature vector $x \in \mathbb{R}^{1 \times 784}$. Does it matter whether I flatten the image vertically, horizontally, or in some other fancy way? And if I were to scramble the order of the elements in the feature vector, would that make the VAE or AE fail?
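For concreteness, here is a small NumPy sketch (my own illustration, not part of the original question) of the orderings described above; note that the random permutation is drawn once and would have to be reused for every image:

    import numpy as np

    img = np.random.rand(28, 28)      # stand-in for a single MNIST image

    row_major = img.reshape(784)      # flatten horizontally, row by row
    col_major = img.T.reshape(784)    # flatten vertically, column by column

    rng = np.random.default_rng(0)
    perm = rng.permutation(784)       # one fixed permutation of the 784 positions
    scrambled = row_major[perm]       # the "scrambled" ordering from the question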










      machine-learning deep-learning autoencoder






asked Mar 28 '17 at 0:07 by Astrid, edited Mar 28 '17 at 11:14




















1 Answer
For a fully-connected network, the precise order of the features does not matter at the outset (i.e. before you start to train), as long as it is consistent across examples. This is true whether you are training an autoencoder or any other fully-connected network, and processing images with pixels as features does not change it.



          Some caveats:




• To succeed in training, the pixel order needs to be the same for every example. So it can be randomly shuffled, but only if you keep the same shuffle for each and every example (see the sketch after this list).

  • As an aside, you would still get some training effect even from a fully random, per-example shuffle, because, for example, a written "8" has more filled pixels on average than a written "1". But performance would be very bad, with accuracy only a little better than guessing, for most interesting problem domains.

• To visualise what the autoencoder has learned, the output needs to be unscrambled. You can in fact input a scrambled image (same shuffle for each example) and train the autoencoder to unscramble it; in theory this achieves the same accuracy as training to match the scrambled input, showing again that pixel order is not important. You could also train the autoencoder to map scrambled input to scrambled output and visualise the result by reversing the scramble (again, this must be a consistent scramble, the same for every example).
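To make the consistent-shuffle caveat concrete, here is a minimal Keras sketch (my own illustration, not code from the answer) that trains a small dense autoencoder on MNIST after applying the same fixed permutation to every example; reconstruction quality should be essentially the same as without scrambling:

    import numpy as np
    import tensorflow as tf

    # Load MNIST and flatten each 28x28 image to a 784-dim vector in [0, 1].
    (x_train, _), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

    # One fixed permutation, applied identically to every example.
    perm = np.random.default_rng(0).permutation(784)
    x_scrambled = x_train[:, perm]

    # A small dense autoencoder; it only ever sees the scrambled ordering.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        tf.keras.layers.Dense(64, activation="relu"),      # encoder / bottleneck
        tf.keras.layers.Dense(784, activation="sigmoid"),  # decoder
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(x_scrambled, x_scrambled, epochs=5, batch_size=256)

    # To visualise a reconstruction, invert the permutation first.
    inv = np.argsort(perm)
    recon = model.predict(x_scrambled[:1])[0][inv].reshape(28, 28)

Inverting the permutation with np.argsort(perm) is what the second caveat means by reversing the scrambling effect for visualisation.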


In a fully-connected neural network, nothing in the model represents the spatial relationship between pixels, or even that neighbouring pixels are related at all. The network can still learn relations (such as edges) regardless of how the image is presented, but it generalises poorly: just because an edge between pixels 3 and 4 is important, it will not infer that the same edge between pixels 31 and 32 is similar, unless many examples of both occur in the training data.



Addressing this poor generalisation, caused by the model losing all knowledge of locality, is one of the motivations for convolutional neural networks (CNNs). CNN autoencoders exist, and for those you intentionally preserve the 2D structure and the local relationships between pixels; if you did not, the network would function very poorly or not at all.
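By contrast with the dense sketch above, a convolutional autoencoder operates on the image as a 2D array end to end, so locality is built into the model. A minimal sketch (again my own illustration, not from the answer):

    import tensorflow as tf

    # A minimal convolutional autoencoder; inputs keep their 2D (28, 28, 1)
    # shape, so the convolutions can exploit locality between nearby pixels.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        # Encoder: learn local features, then downsample.
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),                       # 28 -> 14
        tf.keras.layers.Conv2D(8, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),                       # 14 -> 7
        # Decoder: upsample back to the original resolution.
        tf.keras.layers.Conv2D(8, 3, activation="relu", padding="same"),
        tf.keras.layers.UpSampling2D(2),                       # 7 -> 14
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.UpSampling2D(2),                       # 14 -> 28
        tf.keras.layers.Conv2D(1, 3, activation="sigmoid", padding="same"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    # Train with images reshaped to (-1, 28, 28, 1), not flattened vectors.

Feeding this model consistently scrambled pixels would destroy the local structure its convolutions rely on, which is the sense in which pixel order does matter for CNNs.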






answered Mar 28 '17 at 12:07 by Neil Slater