Ordered elements of feature vectors for autoencoders?



Here is a newbie question: when one trains an autoencoder (AE) or a variational autoencoder (VAE), does the order of the elements in the training vector $x$ matter?



Suppose I take an MNIST image $(28 \times 28)$ and flatten it into a feature vector $x \in \mathbb{R}^{1 \times 784}$. Does it then matter whether I flatten the image vertically, horizontally, or in some other fancy way? And if I were to scramble the order of the elements in the feature vector, would that make the VAE or AE mess up?
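To make the question concrete, here is a small NumPy sketch (my own addition, not part of the original post) of the orderings I mean; row-major, column-major, and a fixed permutation are all just different consistent orderings of the same 784 values:

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((28, 28))              # stand-in for an MNIST image

    row_major = image.reshape(-1)             # flatten row by row (C order)
    col_major = image.reshape(-1, order="F")  # flatten column by column
    perm = rng.permutation(784)
    scrambled = row_major[perm]               # one fixed scramble of the elements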










machine-learning deep-learning autoencoder

asked Mar 28 '17 at 0:07 by Astrid, edited Mar 28 '17 at 11:14
1 Answer
For a fully-connected network, the precise order of the features does not matter at the outset (i.e. before you start to train), as long as it is consistent across examples. This holds whether you are training an autoencoder or some other fully-connected network, and processing images with pixels as features does not change it.



Some caveats:

• To train successfully, the pixel order must be the same for every example. So the pixels can be randomly shuffled, but only if you apply the same shuffle to each and every example (see the sketch after this list).

  • As an aside, you would still get some training effect from a fully random, per-example shuffle, because, for example, a written "8" has more filled pixels on average than a written "1". But performance would be very poor, with accuracy only a little better than guessing, for most interesting problem domains.

• To visualise what the autoencoder has learned, its output needs to be unscrambled. You can in fact feed in a scrambled image (same shuffle for each example) and train the autoencoder to unscramble it; in theory this achieves the same accuracy as training to match the scrambled input, which again shows that pixel order is not important. You could also train the autoencoder to map scrambled input to scrambled output and visualise the result by reversing the scramble (again, the scramble must be consistent, the same for every example).
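For concreteness, here is a minimal NumPy sketch (my own illustration, not from the original answer) of such a consistent shuffle: one fixed permutation applied to every example, with its inverse kept so reconstructions can be unscrambled for viewing:

    import numpy as np

    def make_scrambler(n_features=784, seed=42):
        """Return one fixed permutation and its inverse.

        The same `perm` must be applied to every example; `inv`
        undoes it so outputs can be viewed as images again.
        """
        rng = np.random.default_rng(seed)
        perm = rng.permutation(n_features)
        inv = np.argsort(perm)
        return perm, inv

    perm, inv = make_scrambler()
    X = np.random.random((100, 784))    # placeholder for flattened MNIST images
    X_scrambled = X[:, perm]            # same shuffle applied to all examples
    assert np.allclose(X_scrambled[:, inv], X)  # inverse recovers the originals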


In a fully-connected neural network, nothing in the model represents the local relationships between pixels, or even the fact that they are related at all. The network will therefore learn relations (such as edges) regardless of how the image is presented, but it will suffer from being unable to generalise: just because an edge between pixels 3 and 4 is important, the network will not learn that the same edge between pixels 31 and 32 is similar, unless many examples of both occur in the training data.



Addressing this poor generalisation, caused by the model's loss of knowledge about locality, is one of the motivations for convolutional neural networks (CNNs). CNN autoencoders exist, and for those you intentionally preserve the 2D structure and the local relationships between pixels; if you did not, the network would perform very poorly or not at all.
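To illustrate that last point, here is a minimal convolutional autoencoder sketch (my own addition; the answer names no framework, so Keras is an assumption). Note that the input keeps its 28x28 spatial shape instead of being flattened:

    from tensorflow.keras import layers, models

    # Encoder: convolutions and pooling preserve the 2D pixel layout.
    inputs = layers.Input(shape=(28, 28, 1))
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D(2)(x)                        # 14x14
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D(2)(x)                  # 7x7 bottleneck

    # Decoder: upsample back to the original resolution.
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(encoded)
    x = layers.UpSampling2D(2)(x)                        # 14x14
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)                        # 28x28
    outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

    autoencoder = models.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
    # autoencoder.fit(x_train, x_train, ...)  # trained to reconstruct its input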






answered Mar 28 '17 at 12:07 by Neil Slater



























