When one model is superior in real world use?



I have an NLP neural network that I developed with Keras for multi-label classification.

I have fit the model several times, saving the best weights (by validation accuracy) after each training run completes. All of my saved models score 96%+ validation accuracy (according to Keras).

However, when I run these models against real-world data where I also know the correct result (effectively a second round of validation), one model in particular outperforms the rest. I can take the champion model (96.29% validation accuracy) and put it up against another model (with something like 96.18% validation accuracy), and the champion can achieve 90%+ accuracy in the second round of validation, while the other model, or any other model, comes nowhere near that. This one model scores at least 8 percentage points higher than all the others.

I have double-checked my methodology, and I'm nearly positive that all models are being created with the same code and process.

Should I be concerned that this one particular model outperforms the rest? Does it indicate anything in particular about my overall methodology?
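For context, the "second round of validation" described above is just scoring each saved model's predictions against the known real-world labels. A minimal sketch (the labels and predictions below are hypothetical, not from the question):

```python
def accuracy(y_true, y_pred):
    """Fraction of examples where the prediction matches the known label."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical real-world labels and two saved models' predictions.
y_real = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
champion = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]   # 9/10 correct
runner_up = [1, 0, 0, 1, 1, 1, 0, 0, 1, 0]  # 7/10 correct

print(accuracy(y_real, champion))   # 0.9
print(accuracy(y_real, runner_up))  # 0.7
```

A gap this large between models with near-identical validation scores is the situation the question is asking about.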




































neural-network keras nlp
















      asked Nov 7 '18 at 14:11









I_Play_With_Data



























          1 Answer



















Maybe I did not get the question, but this all looks fine. This is how model selection works: you have several models (either the same algorithm with different parameters or different algorithms, it does not matter), you perform cross-validation, and you pick the best model according to the empirical error on the validation set. The best model wins and is chosen. Everything seems to be right.
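A minimal sketch of the k-fold splitting behind that cross-validation (index generation only; fitting and scoring the candidate models is left out, and the fold sizes assume n divisible by k):

```python
def k_fold_indices(n, k):
    """Yield (train, val) index lists for k-fold cross-validation."""
    fold = n // k
    for i in range(k):
        val = list(range(i * fold, (i + 1) * fold))
        train = [j for j in range(n) if j not in val]
        yield train, val

folds = list(k_fold_indices(10, 5))
print(len(folds))     # 5 folds
print(folds[0][1])    # first validation fold: [0, 1]
```

Each candidate is trained on the train indices and scored on the val indices of every fold; the candidate with the lowest mean validation error is the one selected.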


















• To be clear, these are all the same NN, just run at different times throughout the day. All parameters are equal, so really the only difference (I think) would be the random shuffle that Keras creates for each training epoch. No other parameters or processes were changed between runs. – I_Play_With_Data, Nov 7 '18 at 14:31










• If Keras does the splitting for you, make sure it uses proper shuffling to keep the results statistically significant. If you do the split yourself, try shuffling the data and evaluating each model n times, then look at the mean and standard deviation of the errors; that tells you which model is best. If all the models were literally the same, then you have only one model, and the empirical error is the mean over all runs. See this answer and its comment: datascience.stackexchange.com/a/40862/8878 – Kasra Manshaei, Nov 7 '18 at 14:44










• It might happen that on one run the data split is accidentally "too beautiful"! That's why we try several times and look at the mean error, to be sure our results are not just chance. – Kasra Manshaei, Nov 7 '18 at 14:45






• I do shuffle the data on load, before running all my epochs. – I_Play_With_Data, Nov 7 '18 at 14:47










• Yes, and one shuffle may just happen to be well separated (if it's a classification task). In any case, you are not choosing among models, since all of them are the same; put the mean of all the obtained errors in one basket, then try other models (e.g. a NN with a different layer architecture) and look at their errors as well. Then you can say which model is best. So far there are no "models", just one model, and that tells you nothing by itself. – Kasra Manshaei, Nov 7 '18 at 14:49
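The procedure the comments converge on (shuffle, re-evaluate each candidate n times, and compare the mean and standard deviation of the scores) can be sketched in plain Python. Here `evaluate_once` is a hypothetical stand-in for a full Keras train/validate cycle:

```python
import random
import statistics

def evaluate_once(seed):
    """Hypothetical stand-in for one train/validate cycle: a real
    version would shuffle the data, split it, fit the Keras model,
    and return its validation accuracy."""
    rng = random.Random(seed)
    return 0.96 + rng.uniform(-0.01, 0.01)  # dummy score near 96%

# Evaluate the "same" model n times on differently shuffled splits.
scores = [evaluate_once(seed) for seed in range(10)]
mean = statistics.mean(scores)
std = statistics.stdev(scores)
print(f"validation accuracy: {mean:.4f} +/- {std:.4f}")

# If two candidates' mean scores differ by much less than these
# standard deviations (e.g. 96.29% vs 96.18%), the gap is noise;
# an 8-point gap on real-world data, as in the question, is not.
```

Comparing single checkpoint scores, as in the question, ignores this run-to-run spread, which is exactly the point the comments are making.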











          answered Nov 7 '18 at 14:21









Kasra Manshaei











          • $begingroup$
            To be clear, these are all the same NN just run at different times throughout the day. All parameters are equal. So, really the only difference (I think) would be the random mix that Keras creates for each training epoch. No other parameters/processes were changed in between runs
            $endgroup$
            – I_Play_With_Data
            Nov 7 '18 at 14:31










          • $begingroup$
            If Keras does the splitting for you then be sure it uses proper shuffling techniques to keep the results statistically significant. If you do it yourself then try to shuffle data and evaluate each model n times and see the mean and std of errors. That tells you what is the best model. If all models were literally the same then you have only one model and the empirical error is the mean of all. see this answer and the comment datascience.stackexchange.com/a/40862/8878
            $endgroup$
            – Kasra Manshaei
            Nov 7 '18 at 14:44










          • $begingroup$
            Might happen that at one run the data is accidentally "too beautiful"! That's why we try several times and see the mean error to be sure our results are not just by chance.
            $endgroup$
            – Kasra Manshaei
            Nov 7 '18 at 14:45






          • 1




            $begingroup$
            I do shuffle the data upon load before running all my epochs
            $endgroup$
            – I_Play_With_Data
            Nov 7 '18 at 14:47










          • $begingroup$
            Yes. and one shuffle is just by chance well-separated (if it's a classification task). And anyways, you are not choosing any model as all of them are the same. Put the mean of all obtained errors in one basket and try "Other Models" (e.g. a NN with another architecture of layers) and see their errors as well. Then you can say which model is the best. So far there is no model"s" but just model. And it does not tell you anything
            $endgroup$
            – Kasra Manshaei
            Nov 7 '18 at 14:49
















          • $begingroup$
            To be clear, these are all the same NN just run at different times throughout the day. All parameters are equal. So, really the only difference (I think) would be the random mix that Keras creates for each training epoch. No other parameters/processes were changed in between runs
            $endgroup$
            – I_Play_With_Data
            Nov 7 '18 at 14:31










          • $begingroup$
            If Keras does the splitting for you then be sure it uses proper shuffling techniques to keep the results statistically significant. If you do it yourself then try to shuffle data and evaluate each model n times and see the mean and std of errors. That tells you what is the best model. If all models were literally the same then you have only one model and the empirical error is the mean of all. see this answer and the comment datascience.stackexchange.com/a/40862/8878
            $endgroup$
            – Kasra Manshaei
            Nov 7 '18 at 14:44










          • $begingroup$
            Might happen that at one run the data is accidentally "too beautiful"! That's why we try several times and see the mean error to be sure our results are not just by chance.
            $endgroup$
            – Kasra Manshaei
            Nov 7 '18 at 14:45






          • 1




            $begingroup$
            I do shuffle the data upon load before running all my epochs
            $endgroup$
            – I_Play_With_Data
            Nov 7 '18 at 14:47










          • $begingroup$
            Yes. and one shuffle is just by chance well-separated (if it's a classification task). And anyways, you are not choosing any model as all of them are the same. Put the mean of all obtained errors in one basket and try "Other Models" (e.g. a NN with another architecture of layers) and see their errors as well. Then you can say which model is the best. So far there is no model"s" but just model. And it does not tell you anything
            $endgroup$
            – Kasra Manshaei
            Nov 7 '18 at 14:49















          $begingroup$
          To be clear, these are all the same NN just run at different times throughout the day. All parameters are equal. So, really the only difference (I think) would be the random mix that Keras creates for each training epoch. No other parameters/processes were changed in between runs
          $endgroup$
          – I_Play_With_Data
          Nov 7 '18 at 14:31




          $begingroup$
          To be clear, these are all the same NN just run at different times throughout the day. All parameters are equal. So, really the only difference (I think) would be the random mix that Keras creates for each training epoch. No other parameters/processes were changed in between runs
          $endgroup$
          – I_Play_With_Data
          Nov 7 '18 at 14:31












          $begingroup$
          If Keras does the splitting for you then be sure it uses proper shuffling techniques to keep the results statistically significant. If you do it yourself then try to shuffle data and evaluate each model n times and see the mean and std of errors. That tells you what is the best model. If all models were literally the same then you have only one model and the empirical error is the mean of all. see this answer and the comment datascience.stackexchange.com/a/40862/8878
          $endgroup$
          – Kasra Manshaei
          Nov 7 '18 at 14:44




          $begingroup$
          If Keras does the splitting for you then be sure it uses proper shuffling techniques to keep the results statistically significant. If you do it yourself then try to shuffle data and evaluate each model n times and see the mean and std of errors. That tells you what is the best model. If all models were literally the same then you have only one model and the empirical error is the mean of all. see this answer and the comment datascience.stackexchange.com/a/40862/8878
          $endgroup$
          – Kasra Manshaei
          Nov 7 '18 at 14:44












          $begingroup$
          Might happen that at one run the data is accidentally "too beautiful"! That's why we try several times and see the mean error to be sure our results are not just by chance.
          $endgroup$
          – Kasra Manshaei
          Nov 7 '18 at 14:45




          $begingroup$
          Might happen that at one run the data is accidentally "too beautiful"! That's why we try several times and see the mean error to be sure our results are not just by chance.
          $endgroup$
          – Kasra Manshaei
          Nov 7 '18 at 14:45




          1




          1




          $begingroup$
          I do shuffle the data upon load before running all my epochs
          $endgroup$
          – I_Play_With_Data
          Nov 7 '18 at 14:47




          $begingroup$
          I do shuffle the data upon load before running all my epochs
          $endgroup$
          – I_Play_With_Data
          Nov 7 '18 at 14:47












          $begingroup$
          Yes. and one shuffle is just by chance well-separated (if it's a classification task). And anyways, you are not choosing any model as all of them are the same. Put the mean of all obtained errors in one basket and try "Other Models" (e.g. a NN with another architecture of layers) and see their errors as well. Then you can say which model is the best. So far there is no model"s" but just model. And it does not tell you anything
          $endgroup$
          – Kasra Manshaei
          Nov 7 '18 at 14:49




          $begingroup$
          Yes. and one shuffle is just by chance well-separated (if it's a classification task). And anyways, you are not choosing any model as all of them are the same. Put the mean of all obtained errors in one basket and try "Other Models" (e.g. a NN with another architecture of layers) and see their errors as well. Then you can say which model is the best. So far there is no model"s" but just model. And it does not tell you anything
          $endgroup$
          – Kasra Manshaei
          Nov 7 '18 at 14:49

















          draft saved

          draft discarded
















































          Thanks for contributing an answer to Data Science Stack Exchange!


          • Please be sure to answer the question. Provide details and share your research!

          But avoid


          • Asking for help, clarification, or responding to other answers.

          • Making statements based on opinion; back them up with references or personal experience.

          Use MathJax to format equations. MathJax reference.


          To learn more, see our tips on writing great answers.




          draft saved


          draft discarded














          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f40870%2fwhen-one-model-is-superior-in-real-world-use%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown





















































          Required, but never shown














          Required, but never shown












          Required, but never shown







          Required, but never shown

































          Required, but never shown














          Required, but never shown












          Required, but never shown







          Required, but never shown







          Popular posts from this blog

          Францішак Багушэвіч Змест Сям'я | Біяграфія | Творчасць | Мова Багушэвіча | Ацэнкі дзейнасці | Цікавыя факты | Спадчына | Выбраная бібліяграфія | Ушанаванне памяці | У філатэліі | Зноскі | Літаратура | Спасылкі | НавігацыяЛяхоўскі У. Рупіўся дзеля Бога і людзей: Жыццёвы шлях Лявона Вітан-Дубейкаўскага // Вольскі і Памідораў з песняй пра немца Адвакат, паэт, народны заступнік Ашмянскі веснікВ Минске появится площадь Богушевича и улица Сырокомли, Белорусская деловая газета, 19 июля 2001 г.Айцец беларускай нацыянальнай ідэі паўстаў у бронзе Сяргей Аляксандравіч Адашкевіч (1918, Мінск). 80-я гады. Бюст «Францішак Багушэвіч».Яўген Мікалаевіч Ціхановіч. «Партрэт Францішка Багушэвіча»Мікола Мікалаевіч Купава. «Партрэт зачынальніка новай беларускай літаратуры Францішка Багушэвіча»Уладзімір Іванавіч Мелехаў. На помніку «Змагарам за родную мову» Барэльеф «Францішак Багушэвіч»Памяць пра Багушэвіча на Віленшчыне Страчаная сталіца. Беларускія шыльды на вуліцах Вільні«Krynica». Ideologia i przywódcy białoruskiego katolicyzmuФранцішак БагушэвічТворы на knihi.comТворы Францішка Багушэвіча на bellib.byСодаль Уладзімір. Францішак Багушэвіч на Лідчыне;Луцкевіч Антон. Жыцьцё і творчасьць Фр. Багушэвіча ў успамінах ягоных сучасьнікаў // Запісы Беларускага Навуковага таварыства. Вільня, 1938. Сшытак 1. С. 16-34.Большая российская1188761710000 0000 5537 633Xn9209310021619551927869394п
