
How does the bounding box regressor work in Fast R-CNN?



In the Fast R-CNN paper (https://arxiv.org/abs/1504.08083) by Ross Girshick, the bounding-box parameters are continuous variables. They are predicted by regression. Unlike typical neural-network outputs, these values do not represent class probabilities; rather, they are physical quantities giving the position and size of a bounding box.

How this regression learning happens is not clear to me. Linear regression and image classification with deep learning are each well explained on their own, but how a regression algorithm works in a CNN setting is not explained as clearly.

Can you explain the basic concept in an easy-to-understand way?










      image-recognition object-recognition yolo faster-rcnn






      asked Apr 20 '18 at 7:25









      Saptarshi Roy




















          2 Answers
          The cited paper does not mention linear regression at all. It uses a neural network to predict continuous variables and refers to that as regression.

          The regressor it defines (which is not linear at all) is just a CNN with convolutional and fully connected layers; the difference is that the last fully connected layer applies no sigmoid or softmax, which is what classification typically uses because those outputs correspond to probabilities. Instead, the CNN outputs four values $(r, c, h, w)$, where $(r, c)$ give the position of the top-left corner and $(h, w)$ the height and width of the window. To train this network, the loss function penalizes outputs that differ greatly from the labelled $(r, c, h, w)$ in the training set.
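As a toy sketch of this idea (not the paper's actual architecture; the feature dimension, learning rate, and target box here are made up for illustration), the regression head is just a final layer producing four unbounded numbers, trained by penalizing deviation from the labelled box:

```python
import numpy as np

# Hypothetical setup: an 8-dim feature vector standing in for the CNN's
# pooled features, and a single fully connected layer mapping it to four
# continuous outputs (r, c, h, w) -- no sigmoid/softmax on top.
rng = np.random.default_rng(0)
feat_dim = 8
W = rng.normal(scale=0.1, size=(4, feat_dim))  # final FC layer weights
b = np.zeros(4)

def predict_box(features):
    """Raw linear outputs: continuous box parameters, not probabilities."""
    return W @ features + b

features = rng.normal(size=feat_dim)
target = np.array([10.0, 20.0, 50.0, 80.0])  # labelled (r, c, h, w)

# Gradient descent on an L2 loss: each step nudges the head's outputs
# toward the labelled box values.
for _ in range(500):
    pred = predict_box(features)
    grad = pred - target                # dL/dpred for 0.5*||pred - target||^2
    W -= 0.01 * np.outer(grad, features)
    b -= 0.01 * grad

# predict_box(features) now closely matches the labelled (r, c, h, w)
```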






          answered Apr 20 '18 at 7:58 by David Masip
          • Yes, it was my mistake to call the regressor linear. – Saptarshi Roy, Apr 20 '18 at 8:12










          • Did I answer your question, though? – David Masip, Apr 20 '18 at 8:42










          • After your comment (and a few subsequent Google searches), I understand that a NN can solve regression problems simply by replacing the last layer. But I still lack an intuitive understanding of where the exact box dimensions come from. For example, the layers of a CNN capture different image features (edges, colour, etc.), and training finds the filters (weights) that extract just the features needed to discriminate positive examples from negative ones. I was looking for a similar explanation for the regression part. – Saptarshi Roy, Apr 20 '18 at 8:49







          • In the regression setting, training finds the filters (weights) that extract the features needed to locate the top-left corner and to estimate the height and width. In the end you have a cost function that measures how well you predict these values, and that is what deep learning is all about: give me a differentiable cost function and some labelled images, and I'll find you a way to predict the labels. Is this clearer? – David Masip, Apr 20 '18 at 8:58










          • It is somewhat clearer than before. – Saptarshi Roy, Apr 20 '18 at 9:27



















          A very clear and in-depth explanation is given in the original ("slow") R-CNN paper by the same author (Girshick et al.), on page 12, Appendix C: Bounding-box regression. I paste it here for quick reading:




          [images: excerpt from the R-CNN paper, Appendix C: Bounding-box regression]




          Moreover, the author took inspiration from an earlier paper; the difference between the two techniques is discussed below:




          [image: excerpt contrasting the two bounding-box regression formulations]




          Then, in the Fast R-CNN paper you referenced, the author changed the loss function for the bounding-box regression task from regularized least squares (ridge regression) to smooth L1, which is less sensitive to outliers. This smooth L1 loss is also embedded in a multi-task loss function, so classification and bounding-box regression are trained jointly, something that was not done in R-CNN or SPP-net:




          [image: the smooth L1 / multi-task loss from the Fast R-CNN paper]
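The smooth L1 loss itself is easy to write down; a minimal sketch of the piecewise definition used in the paper (quadratic for small errors, linear for large ones, so outlier boxes contribute a bounded gradient, unlike plain squared error):

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1 loss: 0.5*x^2 if |x| < 1, else |x| - 0.5."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

# Small residuals are penalized quadratically, large ones only linearly:
print(smooth_l1(0.5))  # 0.125  (= 0.5 * 0.5**2)
print(smooth_l1(3.0))  # 2.5    (= 3.0 - 0.5)
```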




          However, the same author changed the loss function again in the follow-up Faster R-CNN paper, and later again in FCN. Often, to learn about a topic, you have to backtrack through the research papers. :) Hope it helps!







          share|improve this answer









          $endgroup$













            Your Answer





            StackExchange.ifUsing("editor", function ()
            return StackExchange.using("mathjaxEditing", function ()
            StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
            StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
            );
            );
            , "mathjax-editing");

            StackExchange.ready(function()
            var channelOptions =
            tags: "".split(" "),
            id: "557"
            ;
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function()
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled)
            StackExchange.using("snippets", function()
            createEditor();
            );

            else
            createEditor();

            );

            function createEditor()
            StackExchange.prepareEditor(
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: false,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: null,
            bindNavPrevention: true,
            postfix: "",
            imageUploader:
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            ,
            onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            );



            );













            draft saved

            draft discarded


















            StackExchange.ready(
            function ()
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f30557%2fhow-does-the-bounding-box-regressor-work-in-fast-r-cnn%23new-answer', 'question_page');

            );

            Post as a guest















            Required, but never shown

























            2 Answers
            2






            active

            oldest

            votes








            2 Answers
            2






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            3












            $begingroup$

            The paper cited does not mention linear regression at all. What it does is using a neural network to predict continuous variables, and refers to that as regression.



            The regression that is defined (which is not linear at all), is just a CNN with convolutional layers, and fully connected layers, but in the last fully connected layer, it does not apply sigmoid or softmax, which is what is typically used in classification, as the values correspond to probabilities. Instead, what this CNN outputs are four values $(r, c, h, w)$, where $(r, c)$ specify the values of the position of the left corner and $(h, w)$ the height and width of the window. In order to train this NN, the loss function will penalize when the outputs of the NN are very different from the labelled $(r, c, h, w)$ in the training set.






            share|improve this answer









            $endgroup$












            • $begingroup$
              Yes. It was my mistake to mention the regressor as linear.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 8:12










            • $begingroup$
              Did I answer your question though?
              $endgroup$
              – David Masip
              Apr 20 '18 at 8:42










            • $begingroup$
              After your comment (and few subsequent google search), I have understood that NN can very well solve regression problems by replacing the last layer. But the intuitive understanding of how the exact value of lengths coming is still not there. For example, the layers of CNN indicate different features of an image (features like edges, color etc.). The training method finds the correct filters (weights) to extract only the relevant features to discriminate the positive examples from the negative ones. I was looking for a similar explanation for the regression part.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 8:49







            • 2




              $begingroup$
              In the regression setting, the training method finds the correct filters (weights) to extract the relevant features to find the position of the top left edge, as well as the height and the width. In the end, what you have is a cost function that measures how good you are doing on predicting these features. And that is what deep learning is all about: give me a differentiable cost function, some labelled images and I'll find you a way to predict the labels. Is this more clear?
              $endgroup$
              – David Masip
              Apr 20 '18 at 8:58










            • $begingroup$
              It is somewhat clearer than before.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 9:27
















            3












            $begingroup$

            The paper cited does not mention linear regression at all. What it does is using a neural network to predict continuous variables, and refers to that as regression.



            The regression that is defined (which is not linear at all), is just a CNN with convolutional layers, and fully connected layers, but in the last fully connected layer, it does not apply sigmoid or softmax, which is what is typically used in classification, as the values correspond to probabilities. Instead, what this CNN outputs are four values $(r, c, h, w)$, where $(r, c)$ specify the values of the position of the left corner and $(h, w)$ the height and width of the window. In order to train this NN, the loss function will penalize when the outputs of the NN are very different from the labelled $(r, c, h, w)$ in the training set.






            share|improve this answer









            $endgroup$












            • $begingroup$
              Yes. It was my mistake to mention the regressor as linear.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 8:12










            • $begingroup$
              Did I answer your question though?
              $endgroup$
              – David Masip
              Apr 20 '18 at 8:42










            • $begingroup$
              After your comment (and few subsequent google search), I have understood that NN can very well solve regression problems by replacing the last layer. But the intuitive understanding of how the exact value of lengths coming is still not there. For example, the layers of CNN indicate different features of an image (features like edges, color etc.). The training method finds the correct filters (weights) to extract only the relevant features to discriminate the positive examples from the negative ones. I was looking for a similar explanation for the regression part.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 8:49







            • 2




              $begingroup$
              In the regression setting, the training method finds the correct filters (weights) to extract the relevant features to find the position of the top left edge, as well as the height and the width. In the end, what you have is a cost function that measures how good you are doing on predicting these features. And that is what deep learning is all about: give me a differentiable cost function, some labelled images and I'll find you a way to predict the labels. Is this more clear?
              $endgroup$
              – David Masip
              Apr 20 '18 at 8:58










            • $begingroup$
              It is somewhat clearer than before.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 9:27














            3












            3








            3





            $begingroup$

            The paper cited does not mention linear regression at all. What it does is using a neural network to predict continuous variables, and refers to that as regression.



            The regression that is defined (which is not linear at all), is just a CNN with convolutional layers, and fully connected layers, but in the last fully connected layer, it does not apply sigmoid or softmax, which is what is typically used in classification, as the values correspond to probabilities. Instead, what this CNN outputs are four values $(r, c, h, w)$, where $(r, c)$ specify the values of the position of the left corner and $(h, w)$ the height and width of the window. In order to train this NN, the loss function will penalize when the outputs of the NN are very different from the labelled $(r, c, h, w)$ in the training set.






            share|improve this answer









            $endgroup$



            The paper cited does not mention linear regression at all. What it does is using a neural network to predict continuous variables, and refers to that as regression.



            The regression that is defined (which is not linear at all), is just a CNN with convolutional layers, and fully connected layers, but in the last fully connected layer, it does not apply sigmoid or softmax, which is what is typically used in classification, as the values correspond to probabilities. Instead, what this CNN outputs are four values $(r, c, h, w)$, where $(r, c)$ specify the values of the position of the left corner and $(h, w)$ the height and width of the window. In order to train this NN, the loss function will penalize when the outputs of the NN are very different from the labelled $(r, c, h, w)$ in the training set.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered Apr 20 '18 at 7:58









            David MasipDavid Masip

            2,5361428




            2,5361428











            • $begingroup$
              Yes. It was my mistake to mention the regressor as linear.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 8:12










            • $begingroup$
              Did I answer your question though?
              $endgroup$
              – David Masip
              Apr 20 '18 at 8:42










            • $begingroup$
              After your comment (and few subsequent google search), I have understood that NN can very well solve regression problems by replacing the last layer. But the intuitive understanding of how the exact value of lengths coming is still not there. For example, the layers of CNN indicate different features of an image (features like edges, color etc.). The training method finds the correct filters (weights) to extract only the relevant features to discriminate the positive examples from the negative ones. I was looking for a similar explanation for the regression part.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 8:49







            • 2




              $begingroup$
              In the regression setting, the training method finds the correct filters (weights) to extract the relevant features to find the position of the top left edge, as well as the height and the width. In the end, what you have is a cost function that measures how good you are doing on predicting these features. And that is what deep learning is all about: give me a differentiable cost function, some labelled images and I'll find you a way to predict the labels. Is this more clear?
              $endgroup$
              – David Masip
              Apr 20 '18 at 8:58










            • $begingroup$
              It is somewhat clearer than before.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 9:27

















            • $begingroup$
              Yes. It was my mistake to mention the regressor as linear.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 8:12










            • $begingroup$
              Did I answer your question though?
              $endgroup$
              – David Masip
              Apr 20 '18 at 8:42










            • $begingroup$
              After your comment (and few subsequent google search), I have understood that NN can very well solve regression problems by replacing the last layer. But the intuitive understanding of how the exact value of lengths coming is still not there. For example, the layers of CNN indicate different features of an image (features like edges, color etc.). The training method finds the correct filters (weights) to extract only the relevant features to discriminate the positive examples from the negative ones. I was looking for a similar explanation for the regression part.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 8:49







            • 2




              $begingroup$
              In the regression setting, the training method finds the correct filters (weights) to extract the relevant features to find the position of the top left edge, as well as the height and the width. In the end, what you have is a cost function that measures how good you are doing on predicting these features. And that is what deep learning is all about: give me a differentiable cost function, some labelled images and I'll find you a way to predict the labels. Is this more clear?
              $endgroup$
              – David Masip
              Apr 20 '18 at 8:58










            • $begingroup$
              It is somewhat clearer than before.
              $endgroup$
              – Saptarshi Roy
              Apr 20 '18 at 9:27
















            $begingroup$
            Yes. It was my mistake to mention the regressor as linear.
            $endgroup$
            – Saptarshi Roy
            Apr 20 '18 at 8:12




            $begingroup$
            Yes. It was my mistake to mention the regressor as linear.
            $endgroup$
            – Saptarshi Roy
            Apr 20 '18 at 8:12












            $begingroup$
            Did I answer your question though?
            $endgroup$
            – David Masip
            Apr 20 '18 at 8:42




            $begingroup$
            Did I answer your question though?
            $endgroup$
            – David Masip
            Apr 20 '18 at 8:42












            $begingroup$
            After your comment (and few subsequent google search), I have understood that NN can very well solve regression problems by replacing the last layer. But the intuitive understanding of how the exact value of lengths coming is still not there. For example, the layers of CNN indicate different features of an image (features like edges, color etc.). The training method finds the correct filters (weights) to extract only the relevant features to discriminate the positive examples from the negative ones. I was looking for a similar explanation for the regression part.
            $endgroup$
            – Saptarshi Roy
            Apr 20 '18 at 8:49





            $begingroup$
            After your comment (and few subsequent google search), I have understood that NN can very well solve regression problems by replacing the last layer. But the intuitive understanding of how the exact value of lengths coming is still not there. For example, the layers of CNN indicate different features of an image (features like edges, color etc.). The training method finds the correct filters (weights) to extract only the relevant features to discriminate the positive examples from the negative ones. I was looking for a similar explanation for the regression part.
            $endgroup$
            – Saptarshi Roy
            Apr 20 '18 at 8:49





            2




            2




            $begingroup$
            In the regression setting, the training method finds the correct filters (weights) to extract the relevant features to find the position of the top left edge, as well as the height and the width. In the end, what you have is a cost function that measures how good you are doing on predicting these features. And that is what deep learning is all about: give me a differentiable cost function, some labelled images and I'll find you a way to predict the labels. Is this more clear?
            $endgroup$
            – David Masip
            Apr 20 '18 at 8:58




            $begingroup$
            In the regression setting, the training method finds the correct filters (weights) to extract the relevant features to find the position of the top left edge, as well as the height and the width. In the end, what you have is a cost function that measures how good you are doing on predicting these features. And that is what deep learning is all about: give me a differentiable cost function, some labelled images and I'll find you a way to predict the labels. Is this more clear?
            $endgroup$
            – David Masip
            Apr 20 '18 at 8:58












            $begingroup$
            It is somewhat clearer than before.
            $endgroup$
            – Saptarshi Roy
            Apr 20 '18 at 9:27





            $begingroup$
            It is somewhat clearer than before.
            $endgroup$
            – Saptarshi Roy
            Apr 20 '18 at 9:27












            0












            $begingroup$


            A very clear and in-depth explanation is provided by the slow R-CNN paper by Author(Girshick et. al) on page 12: C. Bounding-box regression and I simply paste here for quick reading:




            enter image description hereenter image description here




            Moreover, the author took inspiration from an earlier paper and talked about the difference in the two techniques is below:




            enter image description here




            After which in Fast-RCNN paper which you referenced to, the author changed the loss function for BB regression task from regularized least squares(ridge regression) to smooth L1 which is less sensitive to outliers!. Also, you embed this smooth L1 loss in the multi-task loss function so that we can jointly train for classification and bounding-box regression that wasn't done before in R-CNN or SPP-net!




            enter image description here




            However, the same author has changed the loss function again in the upcoming paper faster-RCNN
            Later, in FCN
            Many a time, in order to learn about a topic, you need to do backtracking through research papers! :) Hope it helps!







            share|improve this answer









            $endgroup$

















              0












              $begingroup$


              A very clear and in-depth explanation is provided by the slow R-CNN paper by Author(Girshick et. al) on page 12: C. Bounding-box regression and I simply paste here for quick reading:




              enter image description hereenter image description here




              Moreover, the author took inspiration from an earlier paper and talked about the difference in the two techniques is below:




              enter image description here




              After which in Fast-RCNN paper which you referenced to, the author changed the loss function for BB regression task from regularized least squares(ridge regression) to smooth L1 which is less sensitive to outliers!. Also, you embed this smooth L1 loss in the multi-task loss function so that we can jointly train for classification and bounding-box regression that wasn't done before in R-CNN or SPP-net!




              enter image description here




              However, the same author has changed the loss function again in the upcoming paper faster-RCNN
              Later, in FCN
              Many a time, in order to learn about a topic, you need to do backtracking through research papers! :) Hope it helps!
answered 5 hours ago by anuanu