
Vanishing gradient problem for recent stochastic recurrent neural networks





Recently, I've come across several papers on generative recurrent models. They all attach sub-networks (a prior network, an encoder, a decoder, etc.) to a standard LSTM cell and compose them into a new kind of RNN cell.



I'm curious whether vanishing or exploding gradients occur in these new RNN cells. Doesn't that kind of combination cause any problems?



References:



They all seem to follow the same pattern described above.




A Recurrent Latent Variable Model for Sequential Data



Learning Stochastic Recurrent Networks



Z-Forcing: Training Stochastic Recurrent Networks




Pseudocode



Pseudocode for the recurrent architecture is given below:



def new_rnncell_call(x, htm1):
    # prior_net, posterior_net, and decoder_net are each a single layer or an MLP
    q_prior = prior_net(htm1)             # prior step
    q = posterior_net([htm1, x])          # inference step
    z = sample_from(q)                    # reparameterization trick
    target_dist = decoder_net(z)          # generation step
    ht = innerLSTM([z, x], htm1)          # recurrent step
    return [q_prior, q, target_dist], ht
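
To make the pseudocode concrete, here is a minimal runnable sketch of such a cell. The choice of PyTorch, the single-Linear sub-networks, and the layer sizes are my own assumptions for illustration; the papers use deeper sub-networks and train with a variational (KL) objective, which this sketch omits.

# Minimal sketch of the cell in the pseudocode above (assumptions: PyTorch,
# single Linear sub-networks, Gaussian latent with diagonal covariance).
import torch
import torch.nn as nn

class StochasticRNNCell(nn.Module):
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        # the "naked" weights outside the LSTM: plain linear layers, no gating
        self.prior_net = nn.Linear(h_dim, 2 * z_dim)               # prior step
        self.posterior_net = nn.Linear(h_dim + x_dim, 2 * z_dim)   # inference step
        self.decoder_net = nn.Linear(z_dim, x_dim)                 # generation step
        self.inner_lstm = nn.LSTMCell(z_dim + x_dim, h_dim)        # recurrent step

    def forward(self, x, state):
        h_tm1, c_tm1 = state
        q_prior = self.prior_net(h_tm1)
        q = self.posterior_net(torch.cat([h_tm1, x], dim=-1))
        mu, logvar = q.chunk(2, dim=-1)
        # reparameterization trick: the sample stays differentiable w.r.t. mu, logvar
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        target_dist = self.decoder_net(z)
        h_t, c_t = self.inner_lstm(torch.cat([z, x], dim=-1), (h_tm1, c_tm1))
        return (q_prior, q, target_dist), (h_t, c_t)

# single time step on a hypothetical batch of 32 examples
cell = StochasticRNNCell(x_dim=8, z_dim=4, h_dim=16)
x = torch.randn(32, 8)
state = (torch.zeros(32, 16), torch.zeros(32, 16))
(q_prior, q, target_dist), state = cell(x, state)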


What concerns me are the weights that sit outside the standard LSTM (or GRU, etc.) cell: during BPTT they have no gating logic on their activations, unlike the weights inside the LSTM. To me this doesn't look like stacked RNN layers or additional dense layers applied only to the outputs.



Doesn't this cause a vanishing/exploding gradient problem?










Tags: python, deep-learning, gradient-descent, recurrent-neural-net






      asked 26 mins ago









Sehee Park
