BibSonomy bookmarks for /tag/rnn
https://www.bibsonomy.org/tag/rnn

Written Memories: Understanding, Deriving and Extending the LSTM - R2RT
https://r2rt.com/written-memories-understanding-deriving-and-extending-the-lstm.html
Posted by annakrause, 2022-10-12. Tags: lstm, rnn, todo:read

A gentle introduction to the tiresome part of understanding RNN - Data Science Blog
https://data-science-blog.com/blog/2020/05/01/recurrent-neural-network/
Posted by analyst, 2021-06-25. Tags: 2020, article, blog, deep-learning, rnn

Teaching recurrent neural networks to infer global temporal structure from local examples | Nature Machine Intelligence
https://www.nature.com/articles/s42256-021-00321-2
Posted by analyst, 2021-06-02. Tags: 2021, article, deep-learning, nature, research, rnn
Abstract: The ability to store and manipulate information is a hallmark of computational systems. Whereas computers are carefully engineered to represent and perform mathematical operations on structured data, neurobiological systems adapt to perform analogous functions without needing to be explicitly engineered. Recent efforts have made progress in modelling the representation and recall of information in neural systems. However, precisely how neural systems learn to modify these representations remains far from understood. Here, we demonstrate that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and we explain the associated learning mechanism with new theory. Specifically, we drive an RNN with examples of translated, linearly transformed or pre-bifurcated time series from a chaotic Lorenz system, alongside an additional control signal that changes value for each example. By training the network to replicate the Lorenz inputs, it learns to autonomously evolve about a Lorenz-shaped manifold. Additionally, it learns to continuously interpolate and extrapolate the translation, transformation and bifurcation of this representation far beyond the training data by changing the control signal. Furthermore, we demonstrate that RNNs can infer the bifurcation structure of normal forms and period-doubling routes to chaos, and extrapolate non-dynamical, kinematic trajectories. Finally, we provide a mechanism for how these computations are learned, and replicate our main results using a Wilson–Cowan reservoir. Together, our results provide a simple but powerful mechanism by which an RNN can learn to manipulate internal representations of complex information, enabling the principled study and precise design of RNNs.
Editor's summary: Recurrent neural networks (RNNs) can learn to process temporal information, such as speech or movement. This work makes such approaches more powerful and flexible by describing theory and experiments demonstrating that RNNs can learn from a few examples to generalize and predict complex dynamics, including chaotic behaviour.
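The abstract above amounts to a concrete training recipe: drive a recurrent network with Lorenz time series plus a per-example control signal, train it only to replicate its input, then run it autonomously and vary the control value to interpolate beyond the training examples. As a rough illustration only, the sketch below implements that recipe with a NumPy echo-state reservoir, a stand-in for the paper's RNN and Wilson–Cowan reservoir; the reservoir size, spectral radius, translation amounts, and ridge penalty are all assumed values, not the paper's.

```python
# Minimal sketch of "train to replicate, then run autonomously with a
# control signal". Assumptions: NumPy-only echo-state network; all
# hyperparameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def lorenz(n_steps, dt=0.01, x0=(1.0, 1.0, 1.0)):
    """Integrate the chaotic Lorenz system with forward Euler."""
    xyz = np.array(x0, dtype=float)
    out = np.empty((n_steps, 3))
    for t in range(n_steps):
        x, y, z = xyz
        xyz = xyz + dt * np.array([10.0 * (y - x),
                                   x * (28.0 - z) - y,
                                   x * y - (8.0 / 3.0) * z])
        out[t] = xyz
    return out / 20.0  # crude rescaling into tanh's useful range

# Reservoir setup: fixed random recurrent weights, only the readout is trained.
N = 400                                        # reservoir size (assumed)
A = rng.normal(0, 1, (N, N))
A *= 0.9 / np.abs(np.linalg.eigvals(A)).max()  # spectral radius < 1
W_in = rng.uniform(-0.5, 0.5, (N, 3))          # input weights for the Lorenz signal
w_c = rng.uniform(-0.5, 0.5, N)                # weights for the scalar control signal

def run_reservoir(u, c):
    """Drive the reservoir with signal u under constant control value c."""
    r = np.zeros(N)
    states = np.empty((len(u), N))
    for t in range(len(u)):
        r = np.tanh(A @ r + W_in @ u[t] + w_c * c)
        states[t] = r
    return states

# Training data: translated copies of the attractor, each paired with a
# control value (a stand-in for the paper's per-example control signal).
controls = [-1.0, 0.0, 1.0]
R, Y = [], []
for c in controls:
    u = lorenz(3000) + 0.25 * c       # translated example (amount assumed)
    states = run_reservoir(u[:-1], c)
    R.append(states[200:])            # drop the initial transient
    Y.append(u[1:][200:])             # one-step-ahead replication targets
R, Y = np.vstack(R), np.vstack(Y)

# Ridge-regression readout: train the network to replicate its input.
W_out = np.linalg.solve(R.T @ R + 1e-6 * np.eye(N), R.T @ Y).T

def autonomous(c, n_steps=2000):
    """Synchronize to a short drive, then feed predictions back as input."""
    r = np.zeros(N)
    for u_t in lorenz(500) + 0.25 * c:
        r = np.tanh(A @ r + W_in @ u_t + w_c * c)
    traj = np.empty((n_steps, 3))
    u_t = W_out @ r
    for t in range(n_steps):
        r = np.tanh(A @ r + W_in @ u_t + w_c * c)
        u_t = W_out @ r
        traj[t] = u_t
    return traj

# c = 0.5 was never trained on: the trajectory should evolve about a
# correspondingly translated Lorenz-shaped manifold if interpolation works.
print(autonomous(0.5)[:5])
```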
2018_Book_NeuralNetworksAndDeepLearning-1.pdf (Springer)
https://link.springer.com/content/pdf/10.1007/978-3-319-94463-0.pdf
Posted by ninawue, 2021-01-02. Tags: cnn, deep_learning, final, overview, rnn, thema:machine_monitoring

A hybrid information model based on long short-term memory network for tool condition monitoring | SpringerLink
https://link.springer.com/article/10.1007/s10845-019-01526-4
Posted by ninawue, 2020-12-14. Tags: deep_learning, lstm, rnn, thema:machine_monitoring

Deep heterogeneous GRU model for predictive analytics in smart manufacturing: Application to tool wear prediction | BibSonomy
https://www.bibsonomy.org/bibtex/2210e45c3d28fb017372e1ac33f50e8b1/ninawue
Posted by ninawue, 2020-12-14. Tags: final, hybrid_prediction_scheme, local_feature_extraction, rnn, thema:machine_monitoring

Machine Health Monitoring Using Local Feature-Based Gated Recurrent Unit Networks | BibSonomy
https://www.bibsonomy.org/bibtex/229a9222100855ce76dc250e4016ddd11/ninawue
Posted by ninawue, 2020-12-14. Tags: final, gru, local_feature_extraction, rnn, thema:machine_monitoring

Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks | BibSonomy
https://www.bibsonomy.org/bibtex/2861949292ca6ad67a83261751c6e6bb5/ninawue
Posted by ninawue, 2020-12-14. Tags: automated_feature_extraction, cnn, deep_learning, final, lstm, rnn, thema:machine_monitoring

Efficient Processing of Deep Neural Networks: A Tutorial and Survey - IEEE Journals & Magazine
https://ieeexplore.ieee.org/document/8114708
Posted by ninawue, 2020-12-14. Tags: cnn, general, rnn, thema:machine_monitoring

The fall of RNN / LSTM - Towards Data Science
https://towardsdatascience.com/the-fall-of-rnn-lstm-2d1594c74ce0
Posted by annakrause, 2020-05-05. Tags: rnn, tcn