Why Gaussian latent variable (noise) for GAN?
When reading about GANs, one thing I don't understand is why people often choose the input to a GAN, $z$, to be samples from a Gaussian. Are there also potential problems associated with this choice?
deep-learning gan gaussian
1 Answer
Why do people often choose the input to a GAN, $z$, to be samples from a Gaussian?
Generally, for two reasons: (1) mathematical simplicity, and (2) it works well enough in practice. However, as explained below, under additional assumptions the choice of a Gaussian can be further justified.
Comparison to the uniform distribution. The Gaussian distribution is not as simple as the uniform distribution, but it is not that far off either. It adds a "concentration around the mean" assumption to uniformity, which gives us the benefits of parameter regularization in practical problems.
The least known. Use of a Gaussian is best justified for continuous quantities that are the least known to us, e.g. the noise $\epsilon$ or the latent factor $z$. "The least known" can be formalized as "the distribution that maximizes entropy for a given variance". The answer to this optimization is $N(\mu, \sigma^2)$ for an arbitrary mean $\mu$. Therefore, in this sense, if we assume that a quantity is the least known to us, the best choice is a Gaussian. Of course, if we acquire more knowledge about that quantity, we can do better than the "least known" assumption, as illustrated in the examples below.
This would also be the answer to "why do we assume Gaussian noise in probabilistic regression or the Kalman filter?"
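To make the maximum-entropy claim concrete, here is a small numerical check, my own sketch using SciPy (none of it comes from the original answer): among a Gaussian, a uniform, and a Laplace distribution matched to the same variance, the Gaussian has the highest differential entropy.

```python
# Numerical check: for a fixed variance, the Gaussian maximizes
# differential entropy among these candidate distributions.
import numpy as np
from scipy.stats import norm, uniform, laplace

sigma2 = 1.0  # common variance for all three distributions

gaussian = norm(loc=0.0, scale=np.sqrt(sigma2))
# Uniform on [-w/2, w/2] has variance w^2 / 12.
w = np.sqrt(12.0 * sigma2)
flat = uniform(loc=-w / 2, scale=w)
# Laplace with scale b has variance 2 * b^2.
b = np.sqrt(sigma2 / 2.0)
peaked = laplace(loc=0.0, scale=b)

for name, dist in [("gaussian", gaussian), ("uniform", flat), ("laplace", peaked)]:
    print(f"{name:8s} variance={dist.var():.3f} entropy={dist.entropy():.4f}")
# Gaussian entropy (~1.419 nats) exceeds uniform (~1.242) and Laplace (~1.347).
```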
Are there also potential problems associated with this?
Yes. When we assume a Gaussian, we are simplifying. If our simplification is unjustified, our model will under-perform. At that point, we should search for an alternative assumption, which involves acquiring more knowledge about the quantity of interest, e.g. the noise or the latent factor.
When we make an assumption about the least-known quantity (based on acquired knowledge or speculation), we can extract that assumption and introduce a new Gaussian one, instead of changing the Gaussian assumption itself. Here are two examples:
Example in regression (noise). Suppose we have no knowledge about observation $A$ (the least known), thus we assume $A \sim N(\mu, \sigma^2)$. After fitting the model, we may observe that the estimated variance $\hat{\sigma}^2$ is high. After some investigation, we may assume that $A$ is a linear function of measurement $B$, thus we extract this assumption as $A = \color{blue}{b_1 B} + c + \epsilon_1$, where $\epsilon_1 \sim N(0, \sigma_1^2)$ is the new "least known". Later, we may find out that our linearity assumption is also weak since, after fitting the model, the observed $\hat{\epsilon}_1 = A - \hat{b}_1 B - \hat{c}$ also has a high $\hat{\sigma}_1^2$. Then, we may extract a new assumption as $A = b_1 B + \color{blue}{b_2 B^2} + c + \epsilon_2$, where $\epsilon_2 \sim N(0, \sigma_2^2)$ is the new "least known", and so on.
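Below is a minimal NumPy sketch of this refinement loop on synthetic data; the data-generating process and coefficients are my own assumptions for illustration. Each step extracts more structure into the model and leaves a smaller Gaussian "least known" residual.

```python
# Refinement loop: start with A ~ N(mu, sigma^2), notice the residual
# variance is high, then extract a linear term, then a quadratic one,
# keeping a Gaussian only for what remains unexplained.
import numpy as np

rng = np.random.default_rng(0)
B = rng.uniform(-2, 2, size=500)
A = 1.5 * B + 0.8 * B**2 + 2.0 + rng.normal(0, 0.3, size=500)  # true process

# Step 0: A ~ N(mu, sigma^2) -- no structure extracted yet.
print("var(A)                :", A.var())

# Step 1: A = b1*B + c + eps1 -- extract a linear term.
b1, c = np.polyfit(B, A, deg=1)
eps1 = A - (b1 * B + c)
print("var(eps1), linear fit :", eps1.var())

# Step 2: A = b1*B + b2*B^2 + c + eps2 -- extract a quadratic term.
b2, b1, c = np.polyfit(B, A, deg=2)
eps2 = A - (b2 * B**2 + b1 * B + c)
print("var(eps2), quad fit   :", eps2.var())  # close to 0.3**2 = 0.09
```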
Example in GAN (latent factor). Upon seeing unrealistic outputs from the GAN (knowledge), we may add $\color{blue}{\text{more layers}}$ between $z$ and the output (extract assumption), in the hope that the new network (or function) with the new $z_2 \sim N(0, \sigma_2^2)$ would lead to more realistic outputs, and so on.
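As a hedged illustration of "adding layers between $z$ and the output" (the framework, layer sizes, and dimensions are my assumptions, not from the answer), a PyTorch sketch:

```python
# Deepen the mapping between the Gaussian latent z and the output by
# adding layers, rather than changing the Gaussian assumption on z.
import torch
import torch.nn as nn

latent_dim, out_dim = 64, 784  # assumed sizes, e.g. flattened 28x28 images

shallow_G = nn.Sequential(
    nn.Linear(latent_dim, out_dim), nn.Tanh(),
)

# If samples look unrealistic, extract the "mapping is simple" assumption:
# insert more layers between z and the output, keeping z ~ N(0, I).
deeper_G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, out_dim), nn.Tanh(),
)

z = torch.randn(16, latent_dim)  # the Gaussian latent stays the same
fake = deeper_G(z)               # shape: (16, 784)
print(fake.shape)
```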
— answered by Esmailian
Your Answer
StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "557"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
asahi kibou is a new contributor. Be nice, and check out our Code of Conduct.
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f47437%2fwhy-gaussian-latent-variable-noise-for-gan%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
1 Answer
1
active
oldest
votes
1 Answer
1
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Why people often choose the input to a GAN (z)
to be samples from a Gaussian?
Generally, for two reasons: (1) mathematical simplicity, (2) working well enough in practice. However, as we explain, under additional assumptions the choice of Gaussian could be more justified.
Compare to uniform distribution. Gaussian distribution is not as simple as uniform distribution but it is not that far off either. It adds "concentration around the mean" assumption to uniformity, which gives us the benefits of parameter regularization in practical problems.
The least known. Use of Gaussian is best justified for continuous quantities that are the least known to us, e.g. noise $epsilon$ or latent factor $z$. "The least known" could be formalized as "distribution that maximizes entropy for a given variance". The answer to this optimization is $N(mu, sigma^2)$ for arbitrary mean $mu$. Therefore, in this sense, if we assume that a quantity is the least known to us, the best choice is Gaussian. Of course, if we acquire more knowledge about that quantity, we can do better than "the least known" assumption, as will be illustrated in the following examples.
This would be the answer to "why we assume a Gaussian noise in probabilistic regression or Kalman filter?" too.
Are there also potential problems associated with this?
Yes. When we assume Gaussian, we are simplifying. If our simplification is unjustified, our model will under-perform. At this point, we should search for an alternative assumption which involves acquiring more knowledge about the quantity of interest, e.g. noise or latent factor.
When we make an assumption about the least known quantity (based on acquired knowledge or speculation), we could extract that assumption and introduce a new Gaussian one, instead of changing the Gaussian assumption. Here are two examples:
Example in regression (noise). Suppose we have no knowledge about observation $A$ (the least known), thus we assume $A sim N(mu, sigma^2)$. After fitting the model, we may observe that the estimated variance $hatsigma^2$ is high. After some investigation, we may assume that $A$ is a linear function of measurement $B$, thus we extract this assumption as $A = colorblueb_1B +c + epsilon_1$, where $epsilon_1 sim N(0, sigma_1^2)$ is the new "the least known". Later, we may find out that our linearity assumption is also weak since, after fitting the model, the observed $hatepsilon_1 = A - hatb_1B -hatc$ also has a high $hatsigma_1^2$. Then, we may extract a new assumption as $A = b_1B + colorblueb_2B^2 + c + epsilon_2$, where $epsilon_2 sim N(0, sigma_2^2)$ is the new "the least known", and so on.
Example in GAN (latent factor). Upon seeing unrealistic outputs from GAN (knowledge) we may add $colorbluetextmore layers$ between $z$ and the output (extract assumption), in the hope that the new network (or function) with the new $z_2 sim N(0, sigma_2^2)$ would lead to more realistic outputs, and so on.
$endgroup$
add a comment |
$begingroup$
Why people often choose the input to a GAN (z)
to be samples from a Gaussian?
Generally, for two reasons: (1) mathematical simplicity, (2) working well enough in practice. However, as we explain, under additional assumptions the choice of Gaussian could be more justified.
Compare to uniform distribution. Gaussian distribution is not as simple as uniform distribution but it is not that far off either. It adds "concentration around the mean" assumption to uniformity, which gives us the benefits of parameter regularization in practical problems.
The least known. Use of Gaussian is best justified for continuous quantities that are the least known to us, e.g. noise $epsilon$ or latent factor $z$. "The least known" could be formalized as "distribution that maximizes entropy for a given variance". The answer to this optimization is $N(mu, sigma^2)$ for arbitrary mean $mu$. Therefore, in this sense, if we assume that a quantity is the least known to us, the best choice is Gaussian. Of course, if we acquire more knowledge about that quantity, we can do better than "the least known" assumption, as will be illustrated in the following examples.
This would be the answer to "why we assume a Gaussian noise in probabilistic regression or Kalman filter?" too.
Are there also potential problems associated with this?
Yes. When we assume Gaussian, we are simplifying. If our simplification is unjustified, our model will under-perform. At this point, we should search for an alternative assumption which involves acquiring more knowledge about the quantity of interest, e.g. noise or latent factor.
When we make an assumption about the least known quantity (based on acquired knowledge or speculation), we could extract that assumption and introduce a new Gaussian one, instead of changing the Gaussian assumption. Here are two examples:
Example in regression (noise). Suppose we have no knowledge about observation $A$ (the least known), thus we assume $A sim N(mu, sigma^2)$. After fitting the model, we may observe that the estimated variance $hatsigma^2$ is high. After some investigation, we may assume that $A$ is a linear function of measurement $B$, thus we extract this assumption as $A = colorblueb_1B +c + epsilon_1$, where $epsilon_1 sim N(0, sigma_1^2)$ is the new "the least known". Later, we may find out that our linearity assumption is also weak since, after fitting the model, the observed $hatepsilon_1 = A - hatb_1B -hatc$ also has a high $hatsigma_1^2$. Then, we may extract a new assumption as $A = b_1B + colorblueb_2B^2 + c + epsilon_2$, where $epsilon_2 sim N(0, sigma_2^2)$ is the new "the least known", and so on.
Example in GAN (latent factor). Upon seeing unrealistic outputs from GAN (knowledge) we may add $colorbluetextmore layers$ between $z$ and the output (extract assumption), in the hope that the new network (or function) with the new $z_2 sim N(0, sigma_2^2)$ would lead to more realistic outputs, and so on.
$endgroup$
add a comment |
$begingroup$
Why people often choose the input to a GAN (z)
to be samples from a Gaussian?
Generally, for two reasons: (1) mathematical simplicity, (2) working well enough in practice. However, as we explain, under additional assumptions the choice of Gaussian could be more justified.
Compare to uniform distribution. Gaussian distribution is not as simple as uniform distribution but it is not that far off either. It adds "concentration around the mean" assumption to uniformity, which gives us the benefits of parameter regularization in practical problems.
The least known. Use of Gaussian is best justified for continuous quantities that are the least known to us, e.g. noise $epsilon$ or latent factor $z$. "The least known" could be formalized as "distribution that maximizes entropy for a given variance". The answer to this optimization is $N(mu, sigma^2)$ for arbitrary mean $mu$. Therefore, in this sense, if we assume that a quantity is the least known to us, the best choice is Gaussian. Of course, if we acquire more knowledge about that quantity, we can do better than "the least known" assumption, as will be illustrated in the following examples.
This would be the answer to "why we assume a Gaussian noise in probabilistic regression or Kalman filter?" too.
Are there also potential problems associated with this?
Yes. When we assume Gaussian, we are simplifying. If our simplification is unjustified, our model will under-perform. At this point, we should search for an alternative assumption which involves acquiring more knowledge about the quantity of interest, e.g. noise or latent factor.
When we make an assumption about the least known quantity (based on acquired knowledge or speculation), we could extract that assumption and introduce a new Gaussian one, instead of changing the Gaussian assumption. Here are two examples:
Example in regression (noise). Suppose we have no knowledge about observation $A$ (the least known), thus we assume $A sim N(mu, sigma^2)$. After fitting the model, we may observe that the estimated variance $hatsigma^2$ is high. After some investigation, we may assume that $A$ is a linear function of measurement $B$, thus we extract this assumption as $A = colorblueb_1B +c + epsilon_1$, where $epsilon_1 sim N(0, sigma_1^2)$ is the new "the least known". Later, we may find out that our linearity assumption is also weak since, after fitting the model, the observed $hatepsilon_1 = A - hatb_1B -hatc$ also has a high $hatsigma_1^2$. Then, we may extract a new assumption as $A = b_1B + colorblueb_2B^2 + c + epsilon_2$, where $epsilon_2 sim N(0, sigma_2^2)$ is the new "the least known", and so on.
Example in GAN (latent factor). Upon seeing unrealistic outputs from GAN (knowledge) we may add $colorbluetextmore layers$ between $z$ and the output (extract assumption), in the hope that the new network (or function) with the new $z_2 sim N(0, sigma_2^2)$ would lead to more realistic outputs, and so on.
$endgroup$
Why people often choose the input to a GAN (z)
to be samples from a Gaussian?
Generally, for two reasons: (1) mathematical simplicity, (2) working well enough in practice. However, as we explain, under additional assumptions the choice of Gaussian could be more justified.
Compare to uniform distribution. Gaussian distribution is not as simple as uniform distribution but it is not that far off either. It adds "concentration around the mean" assumption to uniformity, which gives us the benefits of parameter regularization in practical problems.
The least known. Use of Gaussian is best justified for continuous quantities that are the least known to us, e.g. noise $epsilon$ or latent factor $z$. "The least known" could be formalized as "distribution that maximizes entropy for a given variance". The answer to this optimization is $N(mu, sigma^2)$ for arbitrary mean $mu$. Therefore, in this sense, if we assume that a quantity is the least known to us, the best choice is Gaussian. Of course, if we acquire more knowledge about that quantity, we can do better than "the least known" assumption, as will be illustrated in the following examples.
This would be the answer to "why we assume a Gaussian noise in probabilistic regression or Kalman filter?" too.
Are there also potential problems associated with this?
Yes. When we assume Gaussian, we are simplifying. If our simplification is unjustified, our model will under-perform. At this point, we should search for an alternative assumption which involves acquiring more knowledge about the quantity of interest, e.g. noise or latent factor.
When we make an assumption about the least known quantity (based on acquired knowledge or speculation), we could extract that assumption and introduce a new Gaussian one, instead of changing the Gaussian assumption. Here are two examples:
Example in regression (noise). Suppose we have no knowledge about observation $A$ (the least known), thus we assume $A sim N(mu, sigma^2)$. After fitting the model, we may observe that the estimated variance $hatsigma^2$ is high. After some investigation, we may assume that $A$ is a linear function of measurement $B$, thus we extract this assumption as $A = colorblueb_1B +c + epsilon_1$, where $epsilon_1 sim N(0, sigma_1^2)$ is the new "the least known". Later, we may find out that our linearity assumption is also weak since, after fitting the model, the observed $hatepsilon_1 = A - hatb_1B -hatc$ also has a high $hatsigma_1^2$. Then, we may extract a new assumption as $A = b_1B + colorblueb_2B^2 + c + epsilon_2$, where $epsilon_2 sim N(0, sigma_2^2)$ is the new "the least known", and so on.
Example in GAN (latent factor). Upon seeing unrealistic outputs from GAN (knowledge) we may add $colorbluetextmore layers$ between $z$ and the output (extract assumption), in the hope that the new network (or function) with the new $z_2 sim N(0, sigma_2^2)$ would lead to more realistic outputs, and so on.
edited 6 hours ago
answered 20 hours ago
EsmailianEsmailian
1,346113
1,346113
add a comment |
add a comment |
asahi kibou is a new contributor. Be nice, and check out our Code of Conduct.
asahi kibou is a new contributor. Be nice, and check out our Code of Conduct.
asahi kibou is a new contributor. Be nice, and check out our Code of Conduct.
asahi kibou is a new contributor. Be nice, and check out our Code of Conduct.
Thanks for contributing an answer to Data Science Stack Exchange!
- Please be sure to answer the question. Provide details and share your research!
But avoid …
- Asking for help, clarification, or responding to other answers.
- Making statements based on opinion; back them up with references or personal experience.
Use MathJax to format equations. MathJax reference.
To learn more, see our tips on writing great answers.
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f47437%2fwhy-gaussian-latent-variable-noise-for-gan%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown