How do you visualize neural network architectures?
When writing a paper or preparing a presentation about neural networks, one usually visualizes the network's architecture.

What are good / simple ways to visualize common architectures automatically?

Tags: machine-learning, neural-network, deep-learning, visualization
asked Jul 18 '16 at 17:08 by Martin Thoma · edited Jan 22 '18 at 12:01 by Vaalizaadeh

Comments:

– Martin Thoma (Nov 17 '16 at 9:52): See also: How can a neural network architecture be visualized with Keras?

– Martin Thoma (Mar 9 '17 at 10:10): Just found reddit.com/r/MachineLearning/comments/4sgsn9/…

– Piotr Migdal (Sep 17 '18 at 20:00): I wrote Simple diagrams of convoluted neural networks with a survey of deep learning visualization approaches (both manual and automatic). I got a lot of inspiration, and links, from this thread - thanks!
15 Answers
Answer by Martin Thoma (answered Jul 18 '16 at 19:59, edited Jul 3 '18 at 17:46):

TensorFlow, Keras, MXNet, PyTorch

If the neural network is given as a TensorFlow graph, then you can visualize this graph with TensorBoard: https://www.tensorflow.org/versions/r0.9/how_tos/graph_viz/index.html

Here is how the MNIST CNN looks:

You can add names / scopes (like "dropout", "softmax", "fc1", "conv1", "conv2") yourself.
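A minimal sketch of the name/scope idea, assuming the TensorFlow 1.x graph API that this answer uses; the layer names and sizes are illustrative:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784], name="input")

with tf.name_scope("fc1"):
    # Everything created inside this scope appears as one collapsible "fc1" box in TensorBoard.
    w = tf.Variable(tf.random_normal([784, 10]), name="weights")
    b = tf.Variable(tf.zeros([10]), name="bias")
    logits = tf.matmul(x, w) + b

with tf.name_scope("softmax"):
    probs = tf.nn.softmax(logits)

# Write the graph definition so TensorBoard can render it, then run:
#   tensorboard --logdir=./logs
writer = tf.summary.FileWriter("./logs", tf.get_default_graph())
writer.close()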
Interpretation

The following is only about the left graph; I ignore the four small graphs on the right half.

Each box is a layer with parameters that can be learned. For inference, information flows from the bottom to the top. Ellipses are layers that do not contain learned parameters.

The color of the boxes does not have a meaning.

I'm not sure of the value of the small dashed boxes ("gradients", "Adam", "save").
Comments:

– Sudip Das (Mar 26 '18 at 13:03): This is good. I am trying to avoid names like conv1, conv2, etc.; I want to label every convolutional layer simply "CONV". How can I do that?

– Jakub Bartczuk (Jul 3 '18 at 16:08): +1. It's not only for TF though: MXNet and PyTorch have some support too.

– Ben (Nov 27 '18 at 16:16): @SudipDas You can add names to the layers in the code, and they will show up when you plot it.

– Sudip Das (Nov 27 '18 at 16:22): How can I show the name of each layer as "CONV"? If I name every layer "CONV" I get an error, because each layer must have a unique name under TensorFlow's rules. Is there any other way around this? @Ben

– Ben (Nov 27 '18 at 16:32): @SudipDas Oh well, that does not work. I think you can only overcome this by storing the graph and editing it yourself. Maybe you could cheat by adding invisible characters to the names, but I would recommend giving them unique names anyway.
Answer by Alex Lenail (answered May 10 '18 at 14:41):

I recently created a tool for drawing NN architectures and exporting SVG, called NN-SVG.

Comments:

– image (Jan 23 at 16:13): Download SVG doesn't work.

– Alex Lenail (Jan 23 at 23:28): Works for me (1/23/19). If you're still having a problem, please feel free to open an issue.

– ArtificiallyIntelligence (Feb 23 at 15:52): This is the only right answer.
Answer by Franck Dernoncourt (answered Jul 19 '16 at 0:43, edited Oct 15 '16 at 0:45):

In Caffe you can use caffe/draw.py to draw the NetParameter protobuffer.

In MATLAB, you can use view(net).

Keras.js:
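A sketch of the caffe/draw.py route, assuming a pycaffe install with pydot/graphviz available; the prototxt filename is illustrative (Caffe also ships a command-line wrapper, python/draw_net.py):

from google.protobuf import text_format
from caffe.proto import caffe_pb2
import caffe.draw

# Parse the network definition (NetParameter) from a .prototxt file.
net = caffe_pb2.NetParameter()
with open('lenet_train_test.prototxt') as f:
    text_format.Merge(f.read(), net)

# Render the layer graph left-to-right into a PNG.
caffe.draw.draw_net_to_file(net, 'lenet.png', rankdir='LR')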
Answer by han4wluc (answered Apr 22 '18 at 13:48):

There is an open source project called Netron.

Netron is a viewer for neural network, deep learning and machine learning models.

Netron supports ONNX (.onnx, .pb), Keras (.h5, .keras), CoreML (.mlmodel) and TensorFlow Lite (.tflite). Netron has experimental support for Caffe (.caffemodel), Caffe2 (predict_net.pb), MXNet (-symbol.json), TensorFlow.js (model.json, .pb) and TensorFlow (.pb, .meta).
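A usage sketch, assuming the pip-installable netron package (Netron is also available as a desktop app); the model filename is illustrative:

import netron

# Starts a local web server and opens the model graph in the browser.
netron.start('model.onnx')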
Answer by bytesinflight (answered Dec 11 '17 at 12:33):

Here is yet another way - dotnets, using Graphviz, heavily inspired by this post by Thiago G. Martins.
Answer by Piotr Migdal (answered Mar 27 '18 at 16:04):

I would add ASCII visualizations using keras-sequential-ascii (disclaimer: I am the author).

A small network for CIFAR-10 (from this tutorial) would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 32 32 3
Conv2D |/ ------------------- 896 2.1%
relu ##### 30 30 32
MaxPooling2D Y max ------------------- 0 0.0%
##### 15 15 32
Conv2D |/ ------------------- 18496 43.6%
relu ##### 13 13 64
MaxPooling2D Y max ------------------- 0 0.0%
##### 6 6 64
Flatten ||||| ------------------- 0 0.0%
##### 2304
Dense XXXXX ------------------- 23050 54.3%
softmax ##### 10
For VGG16 it would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 3 224 224
InputLayer | ------------------- 0 0.0%
##### 3 224 224
Convolution2D |/ ------------------- 1792 0.0%
relu ##### 64 224 224
Convolution2D |/ ------------------- 36928 0.0%
relu ##### 64 224 224
MaxPooling2D Y max ------------------- 0 0.0%
##### 64 112 112
Convolution2D |/ ------------------- 73856 0.1%
relu ##### 128 112 112
Convolution2D |/ ------------------- 147584 0.1%
relu ##### 128 112 112
MaxPooling2D Y max ------------------- 0 0.0%
##### 128 56 56
Convolution2D |/ ------------------- 295168 0.2%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
MaxPooling2D Y max ------------------- 0 0.0%
##### 256 28 28
Convolution2D |/ ------------------- 1180160 0.9%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 7 7
Flatten ||||| ------------------- 0 0.0%
##### 25088
Dense XXXXX ------------------- 102764544 74.3%
relu ##### 4096
Dense XXXXX ------------------- 16781312 12.1%
relu ##### 4096
Dense XXXXX ------------------- 4097000 3.0%
softmax ##### 1000
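A usage sketch, assuming the pip-installable keras_sequential_ascii package and a compiled Keras Sequential model named model; the helper name below is the one I recall and may differ between package versions:

from keras_sequential_ascii import sequential_model_to_ascii_printout

# Prints an ASCII architecture summary like the diagrams shown above.
sequential_model_to_ascii_printout(model)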
Answer by mingxue (answered Jan 22 '18 at 10:48):

Keras

The keras.utils.vis_utils module provides utility functions to plot a Keras model (using graphviz).

The following shows a network model whose first hidden layer has 50 neurons and expects 104 input variables.

plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
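A fuller sketch of the same call, building a model with the sizes described above (104 inputs, a 50-neuron first hidden layer); the output layer and activations are illustrative, and pydot/graphviz must be installed for plot_model to work:

from keras.models import Sequential
from keras.layers import Dense
from keras.utils.vis_utils import plot_model

model = Sequential()
model.add(Dense(50, input_dim=104, activation='relu'))  # first hidden layer: 50 neurons, 104 inputs
model.add(Dense(1, activation='sigmoid'))               # illustrative output layer

# Renders the layer graph (with shapes and layer names) to model.png.
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)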
Answer by Doug Blank (answered Mar 5 '18 at 12:34):

The Python package conx can visualize networks with activations via the function net.picture(), producing SVG, PNG, or PIL images like this:

Conx is built on Keras and can read in Keras models. The colormap at each bank can be changed, and it can show all bank types.

More information can be found at: http://conx.readthedocs.io/en/latest/
Answer by Ricardo Cruz (answered Jul 21 '16 at 17:32):

In R, nnet does not come with a plot function, but code for that is provided here.

Alternatively, you can use the more recent and IMHO better package called neuralnet, which features a plot.neuralnet function, so you can just do:

library(neuralnet)
data(infert, package="datasets")
plot(neuralnet(case~parity+induced+spontaneous, infert))

neuralnet is not used as much as nnet because nnet is much older and ships with the R distribution. But neuralnet has more training algorithms, including resilient backpropagation, which is lacking even in packages like TensorFlow; it is much more robust to hyperparameter choices and has more features overall.

Comments:

– wacax (May 10 '18 at 16:47): You should add the updated link for the code of nnet in R: beckmw.wordpress.com/2013/11/14/…
Answer by VividD (answered May 17 '17 at 19:02, edited Jul 3 '18 at 15:10 by EliaCereda):

There are some novel alternative efforts on neural network visualization. Please see these articles:

Stunning 'AI brain scans' reveal what machines see as they learn new skills

Inside an AI 'brain' - What does machine learning look like?

These approaches are oriented more towards visualizing neural network operation; however, the NN architecture is also somewhat visible in the resulting diagrams.

Examples:

Comments:

– Martin Thoma (Mar 27 '18 at 17:15): Please explain what we see here. It looks beautiful, but I don't understand how the fancy images support understanding the operation of the network.

– VividD (Mar 27 '18 at 17:25): I don't like your derogatory usage of the term "fancy images". @Martin

– Martin Thoma (Mar 28 '18 at 7:29): I didn't mean to attack you, but your overly defensive answer without actually answering my question speaks for itself. - I added an "interpretation" part to the "lego boxes" diagram.

– Martin Thoma (Mar 28 '18 at 7:37): By the way: the second link is dead.

– Piotr Migdal (Apr 2 '18 at 14:04): @MartinThoma It's clearly data art, not data viz (vide lisacharlotterost.github.io/2015/12/19/…).
Answer by Thomas W (answered Apr 10 '17 at 8:22, edited May 15 '17 at 18:41):

Not per se nifty for papers, but very useful for showing people who don't know a lot about neural networks what their topology may look like. This JavaScript library (Neataptic) lets you visualise your network:
Answer by Vaalizaadeh (answered Jan 22 '18 at 12:00):

You can read the popular paper Understanding Neural Networks Through Deep Visualization, which discusses visualization of convolutional nets. Its implementation not only displays each layer but also depicts the activations, weights, deconvolutions, and many other things that are discussed in depth in the paper. Its code is in Caffe. The interesting part is that you can replace the pre-trained model with your own.
Answer by Ali Mirzaei:

TensorSpace-JS is a fantastic tool for 3D visualization of network architectures:

https://tensorspace.org/

Here is a nice post about how to write a program with it:

https://medium.freecodecamp.org/tensorspace-js-a-way-to-3d-visualize-neural-networks-in-browsers-2c0afd7648a8

Comments:

– Piotr Migdal (Jan 25 at 14:34): Could you provide a link to this tool?

– Ali Mirzaei (Jan 26 at 14:43): @PiotrMigdal I updated the answer.
Answer (new contributor):

I've been working on a drag-and-drop neural network visualizer (and more). Here's an example of a visualization for a LeNet-like architecture.

Models with fan-out and fan-in are also quite easily modeled. You can visit the website at https://math.mit.edu/ennui/
Answer:

Netscope is my everyday tool for Caffe models.
Your Answer
StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "557"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f12851%2fhow-do-you-visualize-neural-network-architectures%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
15 Answers
15
active
oldest
votes
15 Answers
15
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Tensorflow, Keras, MXNet, PyTorch
If the neural network is given as a Tensorflow graph, then you can visualize this graph with TensorBoard: https://www.tensorflow.org/versions/r0.9/how_tos/graph_viz/index.html
Here is how the MNIST CNN looks like:
You can add names / scopes (like "dropout", "softmax", "fc1", "conv1", "conv2") yourself.
Interpretation
The following is only about the left graph. I ignore the 4 small graphs on the right half.
Each box is a layer with parameters that can be learned. For inference, information flows from bottom to the top. Ellipses are layers which do not contain learned parameters.
The color of the boxes does not have a meaning.
I'm not sure of the value of the dashed small boxes ("gradients", "Adam", "save").
$endgroup$
$begingroup$
it is good, I am trying to avoid the name like conv1, conv2 etc, I want to make all the name of conv later as CONV, How I will do??
$endgroup$
– Sudip Das
Mar 26 '18 at 13:03
$begingroup$
+1. It's not only for TF though: MXNet and Pytorch have some support too
$endgroup$
– Jakub Bartczuk
Jul 3 '18 at 16:08
$begingroup$
@SudipDas You can add names in the code to the layers, which will show up as you plot it.
$endgroup$
– Ben
Nov 27 '18 at 16:16
$begingroup$
How I will show the name of each layer as "CONV", if I write it as "CONV" of each layer then I will get error, cause each layer should have a unique name as tf rules, BUT I want to know, is there any other way to overcome this problem?? @Ben
$endgroup$
– Sudip Das
Nov 27 '18 at 16:22
$begingroup$
@SudipDas Oh well, that does not work. I think you can only overcome this problem by storing it and editing it yourself. Maybe you can cheat by adding invisible characters to the name, but I would recommend giving them unique names anyway.
$endgroup$
– Ben
Nov 27 '18 at 16:32
|
show 1 more comment
$begingroup$
Tensorflow, Keras, MXNet, PyTorch
If the neural network is given as a Tensorflow graph, then you can visualize this graph with TensorBoard: https://www.tensorflow.org/versions/r0.9/how_tos/graph_viz/index.html
Here is how the MNIST CNN looks like:
You can add names / scopes (like "dropout", "softmax", "fc1", "conv1", "conv2") yourself.
Interpretation
The following is only about the left graph. I ignore the 4 small graphs on the right half.
Each box is a layer with parameters that can be learned. For inference, information flows from bottom to the top. Ellipses are layers which do not contain learned parameters.
The color of the boxes does not have a meaning.
I'm not sure of the value of the dashed small boxes ("gradients", "Adam", "save").
$endgroup$
$begingroup$
it is good, I am trying to avoid the name like conv1, conv2 etc, I want to make all the name of conv later as CONV, How I will do??
$endgroup$
– Sudip Das
Mar 26 '18 at 13:03
$begingroup$
+1. It's not only for TF though: MXNet and Pytorch have some support too
$endgroup$
– Jakub Bartczuk
Jul 3 '18 at 16:08
$begingroup$
@SudipDas You can add names in the code to the layers, which will show up as you plot it.
$endgroup$
– Ben
Nov 27 '18 at 16:16
$begingroup$
How I will show the name of each layer as "CONV", if I write it as "CONV" of each layer then I will get error, cause each layer should have a unique name as tf rules, BUT I want to know, is there any other way to overcome this problem?? @Ben
$endgroup$
– Sudip Das
Nov 27 '18 at 16:22
$begingroup$
@SudipDas Oh well, that does not work. I think you can only overcome this problem by storing it and editing it yourself. Maybe you can cheat by adding invisible characters to the name, but I would recommend giving them unique names anyway.
$endgroup$
– Ben
Nov 27 '18 at 16:32
|
show 1 more comment
$begingroup$
Tensorflow, Keras, MXNet, PyTorch
If the neural network is given as a Tensorflow graph, then you can visualize this graph with TensorBoard: https://www.tensorflow.org/versions/r0.9/how_tos/graph_viz/index.html
Here is how the MNIST CNN looks like:
You can add names / scopes (like "dropout", "softmax", "fc1", "conv1", "conv2") yourself.
Interpretation
The following is only about the left graph. I ignore the 4 small graphs on the right half.
Each box is a layer with parameters that can be learned. For inference, information flows from bottom to the top. Ellipses are layers which do not contain learned parameters.
The color of the boxes does not have a meaning.
I'm not sure of the value of the dashed small boxes ("gradients", "Adam", "save").
$endgroup$
Tensorflow, Keras, MXNet, PyTorch
If the neural network is given as a Tensorflow graph, then you can visualize this graph with TensorBoard: https://www.tensorflow.org/versions/r0.9/how_tos/graph_viz/index.html
Here is how the MNIST CNN looks like:
You can add names / scopes (like "dropout", "softmax", "fc1", "conv1", "conv2") yourself.
Interpretation
The following is only about the left graph. I ignore the 4 small graphs on the right half.
Each box is a layer with parameters that can be learned. For inference, information flows from bottom to the top. Ellipses are layers which do not contain learned parameters.
The color of the boxes does not have a meaning.
I'm not sure of the value of the dashed small boxes ("gradients", "Adam", "save").
edited Jul 3 '18 at 17:46
answered Jul 18 '16 at 19:59
Martin ThomaMartin Thoma
6,6351656134
6,6351656134
$begingroup$
it is good, I am trying to avoid the name like conv1, conv2 etc, I want to make all the name of conv later as CONV, How I will do??
$endgroup$
– Sudip Das
Mar 26 '18 at 13:03
$begingroup$
+1. It's not only for TF though: MXNet and Pytorch have some support too
$endgroup$
– Jakub Bartczuk
Jul 3 '18 at 16:08
$begingroup$
@SudipDas You can add names in the code to the layers, which will show up as you plot it.
$endgroup$
– Ben
Nov 27 '18 at 16:16
$begingroup$
How I will show the name of each layer as "CONV", if I write it as "CONV" of each layer then I will get error, cause each layer should have a unique name as tf rules, BUT I want to know, is there any other way to overcome this problem?? @Ben
$endgroup$
– Sudip Das
Nov 27 '18 at 16:22
$begingroup$
@SudipDas Oh well, that does not work. I think you can only overcome this problem by storing it and editing it yourself. Maybe you can cheat by adding invisible characters to the name, but I would recommend giving them unique names anyway.
$endgroup$
– Ben
Nov 27 '18 at 16:32
|
show 1 more comment
$begingroup$
it is good, I am trying to avoid the name like conv1, conv2 etc, I want to make all the name of conv later as CONV, How I will do??
$endgroup$
– Sudip Das
Mar 26 '18 at 13:03
$begingroup$
+1. It's not only for TF though: MXNet and Pytorch have some support too
$endgroup$
– Jakub Bartczuk
Jul 3 '18 at 16:08
$begingroup$
@SudipDas You can add names in the code to the layers, which will show up as you plot it.
$endgroup$
– Ben
Nov 27 '18 at 16:16
$begingroup$
How I will show the name of each layer as "CONV", if I write it as "CONV" of each layer then I will get error, cause each layer should have a unique name as tf rules, BUT I want to know, is there any other way to overcome this problem?? @Ben
$endgroup$
– Sudip Das
Nov 27 '18 at 16:22
$begingroup$
@SudipDas Oh well, that does not work. I think you can only overcome this problem by storing it and editing it yourself. Maybe you can cheat by adding invisible characters to the name, but I would recommend giving them unique names anyway.
$endgroup$
– Ben
Nov 27 '18 at 16:32
$begingroup$
it is good, I am trying to avoid the name like conv1, conv2 etc, I want to make all the name of conv later as CONV, How I will do??
$endgroup$
– Sudip Das
Mar 26 '18 at 13:03
$begingroup$
it is good, I am trying to avoid the name like conv1, conv2 etc, I want to make all the name of conv later as CONV, How I will do??
$endgroup$
– Sudip Das
Mar 26 '18 at 13:03
$begingroup$
+1. It's not only for TF though: MXNet and Pytorch have some support too
$endgroup$
– Jakub Bartczuk
Jul 3 '18 at 16:08
$begingroup$
+1. It's not only for TF though: MXNet and Pytorch have some support too
$endgroup$
– Jakub Bartczuk
Jul 3 '18 at 16:08
$begingroup$
@SudipDas You can add names in the code to the layers, which will show up as you plot it.
$endgroup$
– Ben
Nov 27 '18 at 16:16
$begingroup$
@SudipDas You can add names in the code to the layers, which will show up as you plot it.
$endgroup$
– Ben
Nov 27 '18 at 16:16
$begingroup$
How I will show the name of each layer as "CONV", if I write it as "CONV" of each layer then I will get error, cause each layer should have a unique name as tf rules, BUT I want to know, is there any other way to overcome this problem?? @Ben
$endgroup$
– Sudip Das
Nov 27 '18 at 16:22
$begingroup$
How I will show the name of each layer as "CONV", if I write it as "CONV" of each layer then I will get error, cause each layer should have a unique name as tf rules, BUT I want to know, is there any other way to overcome this problem?? @Ben
$endgroup$
– Sudip Das
Nov 27 '18 at 16:22
$begingroup$
@SudipDas Oh well, that does not work. I think you can only overcome this problem by storing it and editing it yourself. Maybe you can cheat by adding invisible characters to the name, but I would recommend giving them unique names anyway.
$endgroup$
– Ben
Nov 27 '18 at 16:32
$begingroup$
@SudipDas Oh well, that does not work. I think you can only overcome this problem by storing it and editing it yourself. Maybe you can cheat by adding invisible characters to the name, but I would recommend giving them unique names anyway.
$endgroup$
– Ben
Nov 27 '18 at 16:32
|
show 1 more comment
$begingroup$
I recently created a tool for drawing NN architectures and exporting SVG, called NN-SVG
$endgroup$
1
$begingroup$
Download SVG doesn't work
$endgroup$
– image
Jan 23 at 16:13
$begingroup$
works for me 1/23/19. If you're still having an issue, please feel free to open an issue.
$endgroup$
– Alex Lenail
Jan 23 at 23:28
1
$begingroup$
this is the only right answer
$endgroup$
– ArtificiallyIntelligence
Feb 23 at 15:52
add a comment |
$begingroup$
I recently created a tool for drawing NN architectures and exporting SVG, called NN-SVG
$endgroup$
1
$begingroup$
Download SVG doesn't work
$endgroup$
– image
Jan 23 at 16:13
$begingroup$
works for me 1/23/19. If you're still having an issue, please feel free to open an issue.
$endgroup$
– Alex Lenail
Jan 23 at 23:28
1
$begingroup$
this is the only right answer
$endgroup$
– ArtificiallyIntelligence
Feb 23 at 15:52
add a comment |
$begingroup$
I recently created a tool for drawing NN architectures and exporting SVG, called NN-SVG
$endgroup$
I recently created a tool for drawing NN architectures and exporting SVG, called NN-SVG
answered May 10 '18 at 14:41
Alex LenailAlex Lenail
27122
27122
1
$begingroup$
Download SVG doesn't work
$endgroup$
– image
Jan 23 at 16:13
$begingroup$
works for me 1/23/19. If you're still having an issue, please feel free to open an issue.
$endgroup$
– Alex Lenail
Jan 23 at 23:28
1
$begingroup$
this is the only right answer
$endgroup$
– ArtificiallyIntelligence
Feb 23 at 15:52
add a comment |
1
$begingroup$
Download SVG doesn't work
$endgroup$
– image
Jan 23 at 16:13
$begingroup$
works for me 1/23/19. If you're still having an issue, please feel free to open an issue.
$endgroup$
– Alex Lenail
Jan 23 at 23:28
1
$begingroup$
this is the only right answer
$endgroup$
– ArtificiallyIntelligence
Feb 23 at 15:52
1
1
$begingroup$
Download SVG doesn't work
$endgroup$
– image
Jan 23 at 16:13
$begingroup$
Download SVG doesn't work
$endgroup$
– image
Jan 23 at 16:13
$begingroup$
works for me 1/23/19. If you're still having an issue, please feel free to open an issue.
$endgroup$
– Alex Lenail
Jan 23 at 23:28
$begingroup$
works for me 1/23/19. If you're still having an issue, please feel free to open an issue.
$endgroup$
– Alex Lenail
Jan 23 at 23:28
1
1
$begingroup$
this is the only right answer
$endgroup$
– ArtificiallyIntelligence
Feb 23 at 15:52
$begingroup$
this is the only right answer
$endgroup$
– ArtificiallyIntelligence
Feb 23 at 15:52
add a comment |
$begingroup$
In Caffe you can use caffe/draw.py to draw the NetParameter protobuffer:
In Matlab, you can use view(net)
Keras.js:
$endgroup$
add a comment |
$begingroup$
In Caffe you can use caffe/draw.py to draw the NetParameter protobuffer:
In Matlab, you can use view(net)
Keras.js:
$endgroup$
add a comment |
$begingroup$
In Caffe you can use caffe/draw.py to draw the NetParameter protobuffer:
In Matlab, you can use view(net)
Keras.js:
$endgroup$
In Caffe you can use caffe/draw.py to draw the NetParameter protobuffer:
In Matlab, you can use view(net)
Keras.js:
edited Oct 15 '16 at 0:45
answered Jul 19 '16 at 0:43
Franck DernoncourtFranck Dernoncourt
3,52622365
3,52622365
add a comment |
add a comment |
$begingroup$
There is an open source project called Netron
Netron is a viewer for neural network, deep learning and machine learning models.
Netron supports ONNX (.onnx, .pb), Keras (.h5, .keras), CoreML (.mlmodel) and TensorFlow Lite (.tflite). Netron has experimental support for Caffe (.caffemodel), Caffe2 (predict_net.pb), MXNet (-symbol.json), TensorFlow.js (model.json, .pb) and TensorFlow (.pb, .meta).
$endgroup$
add a comment |
$begingroup$
There is an open source project called Netron
Netron is a viewer for neural network, deep learning and machine learning models.
Netron supports ONNX (.onnx, .pb), Keras (.h5, .keras), CoreML (.mlmodel) and TensorFlow Lite (.tflite). Netron has experimental support for Caffe (.caffemodel), Caffe2 (predict_net.pb), MXNet (-symbol.json), TensorFlow.js (model.json, .pb) and TensorFlow (.pb, .meta).
$endgroup$
add a comment |
$begingroup$
There is an open source project called Netron
Netron is a viewer for neural network, deep learning and machine learning models.
Netron supports ONNX (.onnx, .pb), Keras (.h5, .keras), CoreML (.mlmodel) and TensorFlow Lite (.tflite). Netron has experimental support for Caffe (.caffemodel), Caffe2 (predict_net.pb), MXNet (-symbol.json), TensorFlow.js (model.json, .pb) and TensorFlow (.pb, .meta).
$endgroup$
There is an open source project called Netron
Netron is a viewer for neural network, deep learning and machine learning models.
Netron supports ONNX (.onnx, .pb), Keras (.h5, .keras), CoreML (.mlmodel) and TensorFlow Lite (.tflite). Netron has experimental support for Caffe (.caffemodel), Caffe2 (predict_net.pb), MXNet (-symbol.json), TensorFlow.js (model.json, .pb) and TensorFlow (.pb, .meta).
answered Apr 22 '18 at 13:48
han4wluchan4wluc
20112
20112
add a comment |
add a comment |
$begingroup$
Here is yet another way - dotnets, using Graphviz, heavily inspired by this post by Thiago G. Martins.
$endgroup$
add a comment |
$begingroup$
Here is yet another way - dotnets, using Graphviz, heavily inspired by this post by Thiago G. Martins.
$endgroup$
add a comment |
$begingroup$
Here is yet another way - dotnets, using Graphviz, heavily inspired by this post by Thiago G. Martins.
$endgroup$
Here is yet another way - dotnets, using Graphviz, heavily inspired by this post by Thiago G. Martins.
answered Dec 11 '17 at 12:33
bytesinflightbytesinflight
19113
19113
add a comment |
add a comment |
$begingroup$
I would add ASCII visualizations using keras-sequential-ascii (disclaimer: I am the author).
A small network for CIFAR-10 (from this tutorial) would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 32 32 3
Conv2D |/ ------------------- 896 2.1%
relu ##### 30 30 32
MaxPooling2D Y max ------------------- 0 0.0%
##### 15 15 32
Conv2D |/ ------------------- 18496 43.6%
relu ##### 13 13 64
MaxPooling2D Y max ------------------- 0 0.0%
##### 6 6 64
Flatten ||||| ------------------- 0 0.0%
##### 2304
Dense XXXXX ------------------- 23050 54.3%
softmax ##### 10
For VGG16 it would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 3 224 224
InputLayer | ------------------- 0 0.0%
##### 3 224 224
Convolution2D |/ ------------------- 1792 0.0%
relu ##### 64 224 224
Convolution2D |/ ------------------- 36928 0.0%
relu ##### 64 224 224
MaxPooling2D Y max ------------------- 0 0.0%
##### 64 112 112
Convolution2D |/ ------------------- 73856 0.1%
relu ##### 128 112 112
Convolution2D |/ ------------------- 147584 0.1%
relu ##### 128 112 112
MaxPooling2D Y max ------------------- 0 0.0%
##### 128 56 56
Convolution2D |/ ------------------- 295168 0.2%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
MaxPooling2D Y max ------------------- 0 0.0%
##### 256 28 28
Convolution2D |/ ------------------- 1180160 0.9%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 7 7
Flatten ||||| ------------------- 0 0.0%
##### 25088
Dense XXXXX ------------------- 102764544 74.3%
relu ##### 4096
Dense XXXXX ------------------- 16781312 12.1%
relu ##### 4096
Dense XXXXX ------------------- 4097000 3.0%
softmax ##### 1000
$endgroup$
add a comment |
$begingroup$
I would add ASCII visualizations using keras-sequential-ascii (disclaimer: I am the author).
A small network for CIFAR-10 (from this tutorial) would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 32 32 3
Conv2D |/ ------------------- 896 2.1%
relu ##### 30 30 32
MaxPooling2D Y max ------------------- 0 0.0%
##### 15 15 32
Conv2D |/ ------------------- 18496 43.6%
relu ##### 13 13 64
MaxPooling2D Y max ------------------- 0 0.0%
##### 6 6 64
Flatten ||||| ------------------- 0 0.0%
##### 2304
Dense XXXXX ------------------- 23050 54.3%
softmax ##### 10
For VGG16 it would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 3 224 224
InputLayer | ------------------- 0 0.0%
##### 3 224 224
Convolution2D |/ ------------------- 1792 0.0%
relu ##### 64 224 224
Convolution2D |/ ------------------- 36928 0.0%
relu ##### 64 224 224
MaxPooling2D Y max ------------------- 0 0.0%
##### 64 112 112
Convolution2D |/ ------------------- 73856 0.1%
relu ##### 128 112 112
Convolution2D |/ ------------------- 147584 0.1%
relu ##### 128 112 112
MaxPooling2D Y max ------------------- 0 0.0%
##### 128 56 56
Convolution2D |/ ------------------- 295168 0.2%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
MaxPooling2D Y max ------------------- 0 0.0%
##### 256 28 28
Convolution2D |/ ------------------- 1180160 0.9%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 7 7
Flatten ||||| ------------------- 0 0.0%
##### 25088
Dense XXXXX ------------------- 102764544 74.3%
relu ##### 4096
Dense XXXXX ------------------- 16781312 12.1%
relu ##### 4096
Dense XXXXX ------------------- 4097000 3.0%
softmax ##### 1000
$endgroup$
add a comment |
$begingroup$
I would add ASCII visualizations using keras-sequential-ascii (disclaimer: I am the author).
A small network for CIFAR-10 (from this tutorial) would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 32 32 3
Conv2D |/ ------------------- 896 2.1%
relu ##### 30 30 32
MaxPooling2D Y max ------------------- 0 0.0%
##### 15 15 32
Conv2D |/ ------------------- 18496 43.6%
relu ##### 13 13 64
MaxPooling2D Y max ------------------- 0 0.0%
##### 6 6 64
Flatten ||||| ------------------- 0 0.0%
##### 2304
Dense XXXXX ------------------- 23050 54.3%
softmax ##### 10
For VGG16 it would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 3 224 224
InputLayer | ------------------- 0 0.0%
##### 3 224 224
Convolution2D |/ ------------------- 1792 0.0%
relu ##### 64 224 224
Convolution2D |/ ------------------- 36928 0.0%
relu ##### 64 224 224
MaxPooling2D Y max ------------------- 0 0.0%
##### 64 112 112
Convolution2D |/ ------------------- 73856 0.1%
relu ##### 128 112 112
Convolution2D |/ ------------------- 147584 0.1%
relu ##### 128 112 112
MaxPooling2D Y max ------------------- 0 0.0%
##### 128 56 56
Convolution2D |/ ------------------- 295168 0.2%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
MaxPooling2D Y max ------------------- 0 0.0%
##### 256 28 28
Convolution2D |/ ------------------- 1180160 0.9%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 7 7
Flatten ||||| ------------------- 0 0.0%
##### 25088
Dense XXXXX ------------------- 102764544 74.3%
relu ##### 4096
Dense XXXXX ------------------- 16781312 12.1%
relu ##### 4096
Dense XXXXX ------------------- 4097000 3.0%
softmax ##### 1000
$endgroup$
I would add ASCII visualizations using keras-sequential-ascii (disclaimer: I am the author).
A small network for CIFAR-10 (from this tutorial) would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 32 32 3
Conv2D |/ ------------------- 896 2.1%
relu ##### 30 30 32
MaxPooling2D Y max ------------------- 0 0.0%
##### 15 15 32
Conv2D |/ ------------------- 18496 43.6%
relu ##### 13 13 64
MaxPooling2D Y max ------------------- 0 0.0%
##### 6 6 64
Flatten ||||| ------------------- 0 0.0%
##### 2304
Dense XXXXX ------------------- 23050 54.3%
softmax ##### 10
For VGG16 it would be:
OPERATION DATA DIMENSIONS WEIGHTS(N) WEIGHTS(%)
Input ##### 3 224 224
InputLayer | ------------------- 0 0.0%
##### 3 224 224
Convolution2D |/ ------------------- 1792 0.0%
relu ##### 64 224 224
Convolution2D |/ ------------------- 36928 0.0%
relu ##### 64 224 224
MaxPooling2D Y max ------------------- 0 0.0%
##### 64 112 112
Convolution2D |/ ------------------- 73856 0.1%
relu ##### 128 112 112
Convolution2D |/ ------------------- 147584 0.1%
relu ##### 128 112 112
MaxPooling2D Y max ------------------- 0 0.0%
##### 128 56 56
Convolution2D |/ ------------------- 295168 0.2%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
Convolution2D |/ ------------------- 590080 0.4%
relu ##### 256 56 56
MaxPooling2D Y max ------------------- 0 0.0%
##### 256 28 28
Convolution2D |/ ------------------- 1180160 0.9%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 28 28
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
Convolution2D |/ ------------------- 2359808 1.7%
relu ##### 512 14 14
MaxPooling2D Y max ------------------- 0 0.0%
##### 512 7 7
Flatten ||||| ------------------- 0 0.0%
##### 25088
Dense XXXXX ------------------- 102764544 74.3%
relu ##### 4096
Dense XXXXX ------------------- 16781312 12.1%
relu ##### 4096
Dense XXXXX ------------------- 4097000 3.0%
softmax ##### 1000
answered Mar 27 '18 at 16:04
Piotr MigdalPiotr Migdal
507411
507411
add a comment |
add a comment |
$begingroup$
Keras
The keras.utils.vis_utils module provides utility functions to plot a Keras model (using graphviz)
The following shows a network model that the first hidden layer has 50 neurons and expects 104 input variables.
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
$endgroup$
add a comment |
$begingroup$
Keras
The keras.utils.vis_utils module provides utility functions to plot a Keras model (using graphviz)
The following shows a network model that the first hidden layer has 50 neurons and expects 104 input variables.
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
$endgroup$
add a comment |
$begingroup$
Keras
The keras.utils.vis_utils module provides utility functions to plot a Keras model (using graphviz)
The following shows a network model that the first hidden layer has 50 neurons and expects 104 input variables.
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
$endgroup$
Keras
The keras.utils.vis_utils module provides utility functions to plot a Keras model (using graphviz)
The following shows a network model that the first hidden layer has 50 neurons and expects 104 input variables.
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
answered Jan 22 '18 at 10:48
mingxuemingxue
16112
16112
add a comment |
add a comment |
$begingroup$
The Python package conx
can visualize networks with activations with the function net.picture()
to produce SVG, PNG, or PIL Images like this:
Conx is built on Keras, and can read in Keras' models. The colormap at each bank can be changed, and it can show all bank types.
More information can be found at: http://conx.readthedocs.io/en/latest/
$endgroup$
add a comment |
$begingroup$
The Python package conx
can visualize networks with activations with the function net.picture()
to produce SVG, PNG, or PIL Images like this:
Conx is built on Keras, and can read in Keras' models. The colormap at each bank can be changed, and it can show all bank types.
More information can be found at: http://conx.readthedocs.io/en/latest/
$endgroup$
add a comment |
$begingroup$
The Python package conx
can visualize networks with activations with the function net.picture()
to produce SVG, PNG, or PIL Images like this:
Conx is built on Keras, and can read in Keras' models. The colormap at each bank can be changed, and it can show all bank types.
More information can be found at: http://conx.readthedocs.io/en/latest/
$endgroup$
The Python package conx
can visualize networks with activations with the function net.picture()
to produce SVG, PNG, or PIL Images like this:
Conx is built on Keras, and can read in Keras' models. The colormap at each bank can be changed, and it can show all bank types.
More information can be found at: http://conx.readthedocs.io/en/latest/
answered Mar 5 '18 at 12:34
Doug BlankDoug Blank
23122
23122
add a comment |
add a comment |
$begingroup$
In R, nnet
does not come with a plot function, but code for that is provided here.
Alternatively, you can use the more recent and IMHO better package called neuralnet
which features a plot.neuralnet
function, so you can just do:
data(infert, package="datasets")
plot(neuralnet(case~parity+induced+spontaneous, infert))
neuralnet
is not used as much as nnet
because nnet
is much older and is shipped with r-cran. But neuralnet
has more training algorithms, including resilient backpropagation which is lacking even in packages like Tensorflow, and is much more robust to hyperparameter choices, and has more features overall.
$endgroup$
$begingroup$
You should add the updated link for the code of NNet in R beckmw.wordpress.com/2013/11/14/…
$endgroup$
– wacax
May 10 '18 at 16:47
add a comment |
$begingroup$
In R, nnet
does not come with a plot function, but code for that is provided here.
Alternatively, you can use the more recent and IMHO better package called neuralnet
which features a plot.neuralnet
function, so you can just do:
data(infert, package="datasets")
plot(neuralnet(case~parity+induced+spontaneous, infert))
neuralnet
is not used as much as nnet
because nnet
is much older and is shipped with r-cran. But neuralnet
has more training algorithms, including resilient backpropagation which is lacking even in packages like Tensorflow, and is much more robust to hyperparameter choices, and has more features overall.
$endgroup$
$begingroup$
You should add the updated link for the code of NNet in R beckmw.wordpress.com/2013/11/14/…
$endgroup$
– wacax
May 10 '18 at 16:47
add a comment |
$begingroup$
In R, nnet
does not come with a plot function, but code for that is provided here.
Alternatively, you can use the more recent and IMHO better package called neuralnet
which features a plot.neuralnet
function, so you can just do:
data(infert, package="datasets")
plot(neuralnet(case~parity+induced+spontaneous, infert))
neuralnet
is not used as much as nnet
because nnet
is much older and is shipped with r-cran. But neuralnet
has more training algorithms, including resilient backpropagation which is lacking even in packages like Tensorflow, and is much more robust to hyperparameter choices, and has more features overall.
$endgroup$
In R, nnet
does not come with a plot function, but code for that is provided here.
Alternatively, you can use the more recent and IMHO better package called neuralnet
which features a plot.neuralnet
function, so you can just do:
data(infert, package="datasets")
plot(neuralnet(case~parity+induced+spontaneous, infert))
neuralnet
is not used as much as nnet
because nnet
is much older and is shipped with r-cran. But neuralnet
has more training algorithms, including resilient backpropagation which is lacking even in packages like Tensorflow, and is much more robust to hyperparameter choices, and has more features overall.
answered Jul 21 '16 at 17:32
Ricardo CruzRicardo Cruz
2,212727
2,212727
$begingroup$
You should add the updated link for the code of NNet in R beckmw.wordpress.com/2013/11/14/…
$endgroup$
– wacax
May 10 '18 at 16:47
add a comment |
$begingroup$
You should add the updated link for the code of NNet in R beckmw.wordpress.com/2013/11/14/…
$endgroup$
– wacax
May 10 '18 at 16:47
$begingroup$
You should add the updated link for the code of NNet in R beckmw.wordpress.com/2013/11/14/…
$endgroup$
– wacax
May 10 '18 at 16:47
$begingroup$
You should add the updated link for the code of NNet in R beckmw.wordpress.com/2013/11/14/…
$endgroup$
– wacax
May 10 '18 at 16:47
add a comment |
$begingroup$
There are some novel alternative efforts on neural network visualization.
Please see these articles:
Stunning 'AI brain scans' reveal what machines see as they learn new skills
Inside an AI 'brain' - What does machine learning look like?
These approaches are more oriented towards visualizing neural network operation, however, NN architecture is also somewhat visible on the resulting diagrams.
Examples:
$endgroup$
16
$begingroup$
Please explain what we see here. It looks beautiful, but I don't understand how the fancy images support understanding the operation of the network.
$endgroup$
– Martin Thoma
Mar 27 '18 at 17:15
$begingroup$
I don't like your derogatory usage of "fancy images" term. @Martin
$endgroup$
– VividD
Mar 27 '18 at 17:25
8
$begingroup$
I didn't mean to attack you, but your overly defensive answer without actually answering my question speaks for itself. - I added an "interpretation" part to the "lego boxes" diagram.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:29
1
$begingroup$
By the way: The second link is dead.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:37
2
$begingroup$
@MartinThoma It's clearly data art, not data viz (vide lisacharlotterost.github.io/2015/12/19/…).
$endgroup$
– Piotr Migdal
Apr 2 '18 at 14:04
|
show 2 more comments
$begingroup$
There are some novel alternative efforts on neural network visualization.
Please see these articles:
Stunning 'AI brain scans' reveal what machines see as they learn new skills
Inside an AI 'brain' - What does machine learning look like?
These approaches are more oriented towards visualizing neural network operation, however, NN architecture is also somewhat visible on the resulting diagrams.
Examples:
$endgroup$
16
$begingroup$
Please explain what we see here. It looks beautiful, but I don't understand how the fancy images support understanding the operation of the network.
$endgroup$
– Martin Thoma
Mar 27 '18 at 17:15
$begingroup$
I don't like your derogatory usage of "fancy images" term. @Martin
$endgroup$
– VividD
Mar 27 '18 at 17:25
8
$begingroup$
I didn't mean to attack you, but your overly defensive answer without actually answering my question speaks for itself. - I added an "interpretation" part to the "lego boxes" diagram.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:29
1
$begingroup$
By the way: The second link is dead.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:37
2
$begingroup$
@MartinThoma It's clearly data art, not data viz (vide lisacharlotterost.github.io/2015/12/19/…).
$endgroup$
– Piotr Migdal
Apr 2 '18 at 14:04
|
show 2 more comments
$begingroup$
There are some novel alternative efforts on neural network visualization.
Please see these articles:
Stunning 'AI brain scans' reveal what machines see as they learn new skills
Inside an AI 'brain' - What does machine learning look like?
These approaches are more oriented towards visualizing neural network operation, however, NN architecture is also somewhat visible on the resulting diagrams.
Examples:
$endgroup$
There are some novel alternative efforts on neural network visualization.
Please see these articles:
Stunning 'AI brain scans' reveal what machines see as they learn new skills
Inside an AI 'brain' - What does machine learning look like?
These approaches are more oriented towards visualizing neural network operation, however, NN architecture is also somewhat visible on the resulting diagrams.
Examples:
edited Jul 3 '18 at 15:10
EliaCereda
1032
1032
answered May 17 '17 at 19:02
VividDVividD
564518
564518
16
$begingroup$
Please explain what we see here. It looks beautiful, but I don't understand how the fancy images support understanding the operation of the network.
$endgroup$
– Martin Thoma
Mar 27 '18 at 17:15
$begingroup$
I don't like your derogatory usage of "fancy images" term. @Martin
$endgroup$
– VividD
Mar 27 '18 at 17:25
8
$begingroup$
I didn't mean to attack you, but your overly defensive answer without actually answering my question speaks for itself. - I added an "interpretation" part to the "lego boxes" diagram.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:29
1
$begingroup$
By the way: The second link is dead.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:37
2
$begingroup$
@MartinThoma It's clearly data art, not data viz (vide lisacharlotterost.github.io/2015/12/19/…).
$endgroup$
– Piotr Migdal
Apr 2 '18 at 14:04
|
show 2 more comments
16
$begingroup$
Please explain what we see here. It looks beautiful, but I don't understand how the fancy images support understanding the operation of the network.
$endgroup$
– Martin Thoma
Mar 27 '18 at 17:15
$begingroup$
I don't like your derogatory usage of "fancy images" term. @Martin
$endgroup$
– VividD
Mar 27 '18 at 17:25
8
$begingroup$
I didn't mean to attack you, but your overly defensive answer without actually answering my question speaks for itself. - I added an "interpretation" part to the "lego boxes" diagram.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:29
1
$begingroup$
By the way: The second link is dead.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:37
2
$begingroup$
@MartinThoma It's clearly data art, not data viz (vide lisacharlotterost.github.io/2015/12/19/…).
$endgroup$
– Piotr Migdal
Apr 2 '18 at 14:04
16
16
$begingroup$
Please explain what we see here. It looks beautiful, but I don't understand how the fancy images support understanding the operation of the network.
$endgroup$
– Martin Thoma
Mar 27 '18 at 17:15
$begingroup$
Please explain what we see here. It looks beautiful, but I don't understand how the fancy images support understanding the operation of the network.
$endgroup$
– Martin Thoma
Mar 27 '18 at 17:15
$begingroup$
I don't like your derogatory usage of "fancy images" term. @Martin
$endgroup$
– VividD
Mar 27 '18 at 17:25
$begingroup$
I don't like your derogatory usage of "fancy images" term. @Martin
$endgroup$
– VividD
Mar 27 '18 at 17:25
8
8
$begingroup$
I didn't mean to attack you, but your overly defensive answer without actually answering my question speaks for itself. - I added an "interpretation" part to the "lego boxes" diagram.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:29
$begingroup$
I didn't mean to attack you, but your overly defensive answer without actually answering my question speaks for itself. - I added an "interpretation" part to the "lego boxes" diagram.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:29
1
1
$begingroup$
By the way: The second link is dead.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:37
$begingroup$
By the way: The second link is dead.
$endgroup$
– Martin Thoma
Mar 28 '18 at 7:37
2
2
$begingroup$
@MartinThoma It's clearly data art, not data viz (vide lisacharlotterost.github.io/2015/12/19/…).
$endgroup$
– Piotr Migdal
Apr 2 '18 at 14:04
$begingroup$
@MartinThoma It's clearly data art, not data viz (vide lisacharlotterost.github.io/2015/12/19/…).
$endgroup$
– Piotr Migdal
Apr 2 '18 at 14:04
|
show 2 more comments
Not per se nifty for papers, but very useful for showing people who don't know a lot about neural networks what their topology may look like. This JavaScript library (Neataptic) lets you visualise your network.
edited May 15 '17 at 18:41
answered Apr 10 '17 at 8:22
Thomas W
You can read the popular paper Understanding Neural Networks Through Deep Visualization, which discusses visualization of convolutional nets. Its implementation not only displays each layer but also depicts the activations, weights, deconvolutions and many other things that are discussed in depth in the paper. Its code is in Caffe. The interesting part is that you can replace the pre-trained model with your own.
answered Jan 22 '18 at 12:00
Vaalizaadeh
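To give a concrete idea of what such a toolbox builds on, here is a minimal sketch (not the toolbox's own code) that uses the standard pycaffe API to load a pre-trained model and list its per-layer activations and weights; the file names and the 'data' blob name are placeholders you would replace with your own prototxt/caffemodel pair.

    # Hypothetical sketch: inspect activations and weights of a pre-trained Caffe model.
    # 'deploy.prototxt' and 'weights.caffemodel' are placeholder file names.
    import numpy as np
    import caffe

    caffe.set_mode_cpu()
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # Feed one (random) input just to populate the blobs.
    net.blobs['data'].data[0] = np.random.rand(*net.blobs['data'].data.shape[1:])
    net.forward()

    # Per-layer activations (what the visualization toolbox renders interactively).
    for name, blob in net.blobs.items():
        print(name, blob.data.shape)

    # Per-layer learned weights.
    for name, params in net.params.items():
        print(name, params[0].data.shape)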
Tensorspace-JS is a fantastic tool for 3D visualization of network architectures:
https://tensorspace.org/
Here is a nice post about how to write such a program:
https://medium.freecodecamp.org/tensorspace-js-a-way-to-3d-visualize-neural-networks-in-browsers-2c0afd7648a8
edited Jan 26 at 14:43
answered Jan 25 at 7:56
Ali Mirzaei
Could you provide a link to this tool?
– Piotr Migdal
Jan 25 at 14:34
@PiotrMigdal I updated the answer.
– Ali Mirzaei
Jan 26 at 14:43
I've been working on a drag-and-drop neural network visualizer (and more). Here's an example of a visualization for a LeNet-like architecture.
Models with fan-out and fan-in are also quite easily modeled. You can visit the website at https://math.mit.edu/ennui/
answered 10 hours ago
Jesse (new contributor)
Netscope is my everyday tool for Caffe models.
answered Jan 25 at 13:31
Dmytro Prylipko
See also: How can a neural network architecture be visualized with Keras?
– Martin Thoma
Nov 17 '16 at 9:52
Just found reddit.com/r/MachineLearning/comments/4sgsn9/…
– Martin Thoma
Mar 9 '17 at 10:10
I wrote Simple diagrams of convoluted neural networks, with a survey of deep learning visualization approaches (both manual and automatic). I got a lot of inspiration, and links, from this thread - thanks!
– Piotr Migdal
Sep 17 '18 at 20:00
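Picking up the Keras suggestion in the comments above, here is a minimal sketch of the built-in plot_model approach; it assumes Keras 2.x with pydot and graphviz installed, and the small model is a throwaway example only there to have something to draw.

    # Minimal sketch (assumes Keras 2.x with pydot and graphviz installed).
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
    from keras.utils import plot_model

    # A throwaway example model, only here to have something to draw.
    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(10, activation='softmax'),
    ])

    # Writes a layer-by-layer architecture diagram (with tensor shapes) to model.png.
    plot_model(model, to_file='model.png', show_shapes=True)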