Neural network only converges when data cloud is close to 0





I am new to TensorFlow and am learning the basics at the moment, so please bear with me.



My problem concerns the strange non-convergent behaviour of neural networks on the supposedly simple task of finding a regression function for a small training set of only m = 100 data points (x_1, y_1), (x_2, y_2), ..., (x_100, y_100), where the x_i and y_i are real numbers.



I first constructed a function that automatically generates a computational graph corresponding to a classical fully connected feedforward neural network:



import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import math

def neural_network_constructor(arch_list = [1, 3, 3, 1],
                               act_func = tf.nn.sigmoid,
                               w_initializer = tf.contrib.layers.xavier_initializer(),
                               b_initializer = tf.zeros_initializer(),
                               loss_function = tf.losses.mean_squared_error,
                               training_method = tf.train.GradientDescentOptimizer(0.5)):

    n_input = arch_list[0]
    n_output = arch_list[-1]

    # placeholder for the input features
    X = tf.placeholder(dtype = tf.float32, shape = [None, n_input])

    # first hidden layer
    layer = tf.contrib.layers.fully_connected(
        inputs = X,
        num_outputs = arch_list[1],
        activation_fn = act_func,
        weights_initializer = w_initializer,
        biases_initializer = b_initializer)

    # remaining hidden layers
    for N in arch_list[2:-1]:
        layer = tf.contrib.layers.fully_connected(
            inputs = layer,
            num_outputs = N,
            activation_fn = act_func,
            weights_initializer = w_initializer,
            biases_initializer = b_initializer)

    # linear output layer
    Phi = tf.contrib.layers.fully_connected(
        inputs = layer,
        num_outputs = n_output,
        activation_fn = tf.identity,
        weights_initializer = w_initializer,
        biases_initializer = b_initializer)

    # placeholder for the labels
    Y = tf.placeholder(tf.float32, [None, n_output])

    # loss and one step of the chosen optimizer
    loss = loss_function(Y, Phi)
    train_step = training_method.minimize(loss)

    return [X, Phi, Y, train_step]


With the above default values for the arguments, this function constructs a computational graph corresponding to a neural network with 1 input neuron, 2 hidden layers with 3 neurons each, and 1 output neuron. The activation function is, by default, the sigmoid function. X corresponds to the input tensor, Y to the labels of the training data, and Phi to the feedforward output of the neural network. The operation train_step performs one gradient-descent step when executed in the session environment.
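To make the last point concrete, the training loop I run looks roughly like this (a sketch rather than my exact script; x_data and y_data stand for NumPy arrays of shape (100, 1) holding the training points):

X, Phi, Y, train_step = neural_network_constructor()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100000):
        # one gradient-descent step on the full training set
        sess.run(train_step, feed_dict={X: x_data, Y: y_data})
    # feedforward output of the trained network
    prediction = sess.run(Phi, feed_dict={X: x_data})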



So far, so good. If I now test a particular neural network (constructed with this function and the exact default values for the arguments given above) by making it learn a simple regression function for artificial data sampled from a sine wave, strange things happen:
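The artificial data is generated along these lines (again only a sketch; the exact x-range here is an arbitrary choice for illustration):

x_data = np.linspace(0, 10, 100).reshape(-1, 1)   # 100 x-values, illustrative range
y_data = np.sin(x_data)                            # noiseless sine targets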



[images: network output before and after training]



Before training, the network output is essentially a flat line. After 100,000 training iterations it manages to partially learn the function, but only the part whose x-values are close to 0; farther away from 0 the output flattens out again. Further training does not decrease the loss function anymore.



This gets even stranger when I take the exact same data set but shift all x-values by adding 500:



[images: network output on the shifted data, before and after training]



Here, the network completely refuses to learn. I cannot understand why this is happening. I have tried changing the architecture of the network and its learning rate, but have observed similar effects: the closer the x-values of the data cloud are to the origin, the easier the network can learn. Beyond a certain distance from the origin, learning stops completely. Changing the activation function from sigmoid to ReLU has only made things worse; there, the network tends to just converge to the average, no matter where the data cloud is positioned.
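These variants only change the arguments passed to the constructor above; for example (illustrative values, not the exact ones I tried):

net_relu = neural_network_constructor(act_func = tf.nn.relu)
net_slow = neural_network_constructor(training_method = tf.train.GradientDescentOptimizer(0.05))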



Is there something wrong with my implementation of the neural-network constructor? Or does this have something to do with the initialization values? I have tried to get a deeper understanding of this problem for quite a while now and would greatly appreciate some advice. What could be the cause of this? All thoughts on why this behaviour occurs are very welcome!



Thanks,
Joker

Tags: machine-learning, neural-network, regression
asked May 10 '18 at 21:41 by Joker123

• The network isn't learning. Lower your learning rate (0.5 is too high) and correspondingly increase the number of epochs, then report back what you find. Happy learning! – Aditya, May 11 '18 at 1:41

• Hey Aditya, thank you for your comment. I have tried different learning rates (5, 0.5, 0.05, 0.005, ...) and have gone up to several hundred thousand iterations, but the results remain unchanged. – Joker123, May 11 '18 at 10:51

1 Answer

The input is simply not enough to correctly predict the output; the model cannot learn the conditional output distribution P(y|x).

Either you have to add more features to the naive NN model, e.g. concatenate the previous x values to the current x to predict the current y, or use RNN-like models that model the problem as

p(y_t | x_t, x_{t-1}, x_{t-2}, ..., x_{t_0})

See "Time Series Prediction with LSTM".

– Fadi Bakoura, answered May 11 '18 at 12:09
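As a rough illustration of the "concatenate the previous x values" idea (not part of the original answer; the window length k and the NumPy helper are assumptions for the sketch):

import numpy as np

def add_lagged_features(x, k=3):
    # Stack each x_t with its k previous values, so every row becomes
    # [x_t, x_{t-1}, ..., x_{t-k}]; the first k points lack a full history and are dropped.
    cols = [x[k - j : len(x) - j] for j in range(k + 1)]
    return np.stack(cols, axis=1)

# hypothetical usage with the 1-D x-values from the question:
# x = np.linspace(0, 10, 100)
# X_lagged = add_lagged_features(x, k=3)   # shape (97, 4): four input features per sample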








