What does the embedding mean in FaceNet?


I am reading the FaceNet paper, but I can't work out what "embedding" means in this paper. Is it a hidden layer of the deep CNN?

P.S. English isn't my native language.























Tags: cnn, embeddings
















asked Nov 13 '18 at 22:50 by Шах




















2 Answers




































Assume your features lie in an $R^n$ space; e.g., if your input is a picture with $28 \times 28$ pixels, then $n$ would be $28 \times 28 = 784$.

You can now "embed" your features into another $R^d$ space, where often $d < n$. This way you learn a rich representation of your input. When you compress your $784$ input pixels down to, let's say, $64$, you have compressed your input by more than a factor of $10$ and can eliminate redundant/useless features.

This embedding is of course learned in such a way that you could fully restore the original $784$ pixels from the $64$ compressed values.
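The $784 \to 64$ compression described above can be sketched as a toy linear projection. This is a hypothetical illustration, not FaceNet's learned network: the weights here are random, so only the shapes are meaningful (in FaceNet, and in the autoencoder setting the answer alludes to, the projection is learned end-to-end).

```python
import random

# Toy sketch: "embed" a 784-dim input (a flattened 28x28 image) into a
# 64-dim space via a fixed linear projection. The weights are random,
# which only illustrates the shapes involved, not a learned embedding.
random.seed(0)
n, d = 28 * 28, 64                       # input dim and embedding dim
W = [[random.gauss(0, n ** -0.5) for _ in range(n)] for _ in range(d)]

def embed(x):
    """Project an n-dim feature vector x down to a d-dim embedding."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

x = [random.random() for _ in range(n)]  # stand-in for a flattened image
z = embed(x)
print(len(x), "->", len(z))              # 784 -> 64
```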












• Thank you for the simple and clear explanation; it seems I get it now!
  – Шах, Nov 14 '18 at 20:52






























"An embedding is a mapping from discrete objects, such as words, to vectors of real numbers."

- TensorFlow/Embeddings

With reference to the FaceNet paper, "embedding" here simply means the tensor obtained by performing a forward pass over an image. The embedding of the input image and that of the target image are then compared to measure how close the images are. The loss function given by the equation $L = \sum_i^N \left[ \|f(x_i^a) - f(x_i^p)\|_2^2 - \|f(x_i^a) - f(x_i^n)\|_2^2 + \alpha \right]_+$ is used here to compare the Euclidean distances between the embeddings of the anchor, positive, and negative images, each obtained by forward propagation. This technique is also called a Siamese network.

I suggest you read the full original FaceNet paper for a clearer understanding of the FaceNet architecture and how it works.
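The triplet loss above can be sketched for a single (anchor, positive, negative) triple. The 2-D embeddings below are made-up toy vectors for illustration, not real network outputs; FaceNet's actual embeddings are 128-dimensional and L2-normalized.

```python
# Sketch of the FaceNet triplet loss for one triplet:
#   L = [ ||f(a)-f(p)||_2^2 - ||f(a)-f(n)||_2^2 + alpha ]_+
def sq_dist(u, v):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Hinge on (anchor-positive distance) - (anchor-negative distance) + margin."""
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + alpha)

a = [0.0, 1.0]   # anchor embedding (toy values)
p = [0.1, 0.9]   # same identity: close to the anchor
n = [1.0, 0.0]   # different identity: far from the anchor
print(triplet_loss(a, p, n))  # -> 0.0, since the positive is much closer
```

The loss is zero whenever the positive pair is already closer than the negative pair by at least the margin $\alpha$, so training only pushes on triplets that violate the margin.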
















answered Nov 14 '18 at 8:13 by Andreas Look (edited Nov 15 '18 at 9:32)























answered Nov 14 '18 at 20:23 by thanatoz (edited 44 mins ago)


























