How to collect all variables as a list in tensorflow grouped as a function



I am trying to reproduce the cGAN network architecture introduced in the recent paper Deep Video Portraits (2018, Stanford).

I have defined the generator as T(x), following the notation of the paper.

T(x) is built from the operation blocks listed in the code below, such as conv_down(), conv_upsample(), biLinearDown() and finalTanH().

I have annotated their scopes with the 'with tf.variable_scope()' syntax.

While composing the loss and optimizers, I found that I need to collect all the generator-related variables together, since we are going to train with two different optimizers, one for the discriminator and one for the generator.

The discriminator is up to my colleague, so it is not my concern here; I have simply left it as a pseudo implementation.

However, I'd like to make a list of the variables defined in T(x) in my code.

How can I do this? Any help?



import tensorflow as tf
import numpy as np

# hyper-params
learning_rate = 0.0002
epochs = 250
batch_size = 16
N_w = 11  # number of frames concatenated together
channels = 9 * N_w
drop_out = [0.5, 0.5, 0.5, 0, 0, 0, 0, 0]
lambda_ = 100  # weighting of T_loss

tf.reset_default_graph()

with tf.Graph().as_default():

    def conv_down(x, N, count):  # Conv [4x4, str_2] > Batch_Normalization > Leaky_ReLU
        with tf.variable_scope("conv_down_{}_{}".format(N, count)):  # N == depth of tensor
            x = tf.layers.conv2d(x, N, kernel_size=4, strides=2, padding='same',
                                 kernel_initializer=tf.truncated_normal_initializer(stddev=np.sqrt(0.2)))
            x = tf.contrib.layers.batch_norm(x)
            x = tf.nn.leaky_relu(x)  # for conv_down, use leaky ReLU
        return x

    def conv_upsample(x, N, drop_rate, count):
        with tf.variable_scope("conv_upsamp_{}_{}".format(N, count)):
            # up
            with tf.variable_scope("conv_up_{}".format(count)):
                x = tf.layers.conv2d_transpose(x, N, kernel_size=4, strides=2, padding='same',
                                               kernel_initializer=tf.truncated_normal_initializer(stddev=np.sqrt(0.2)))
                x = tf.contrib.layers.batch_norm(x)
            with tf.variable_scope("convdrop_{}".format(count)):
                if drop_rate != 0:
                    x = tf.nn.dropout(x, keep_prob=drop_rate)
                x = tf.nn.relu(x)

            # refine1
            with tf.variable_scope("refine1"):
                x = tf.layers.conv2d(x, N, kernel_size=3, strides=1, padding='same',
                                     kernel_initializer=tf.truncated_normal_initializer(stddev=np.sqrt(0.2)))
                x = tf.contrib.layers.batch_norm(x)
            with tf.variable_scope("rf1drop_out_{}".format(count)):
                if drop_rate != 0:
                    x = tf.nn.dropout(x, keep_prob=drop_rate)
                x = tf.nn.relu(x)

            # refine2
            with tf.variable_scope("refine2"):
                x = tf.layers.conv2d(x, N, kernel_size=3, strides=1, padding='same',
                                     kernel_initializer=tf.truncated_normal_initializer(stddev=np.sqrt(0.2)))
                x = tf.contrib.layers.batch_norm(x)
            with tf.variable_scope("rf2drop_out_{}".format(count)):
                if drop_rate != 0:
                    x = tf.nn.dropout(x, keep_prob=drop_rate)
                x = tf.nn.relu(x)

        return x

    def biLinearDown(x, N):
        return tf.image.resize_images(x, [N, N])

    def finalTanH(x):
        with tf.variable_scope("tanh"):
            x = tf.nn.tanh(x)
        return x

    def T(x):
        # channel_output_structure
        down_channel_output = [64, 128, 256, 512, 512, 512, 512, 512]
        up_channel_output = [512, 512, 512, 512, 256, 128, 64, 3]
        biLinearDown_output = [32, 64, 128]  # for skip-connection

        # down_sampling
        conv1 = conv_down(x, down_channel_output[0], 1)
        conv2 = conv_down(conv1, down_channel_output[1], 2)
        conv3 = conv_down(conv2, down_channel_output[2], 3)
        conv4 = conv_down(conv3, down_channel_output[3], 4)
        conv5 = conv_down(conv4, down_channel_output[4], 5)
        conv6 = conv_down(conv5, down_channel_output[5], 6)
        conv7 = conv_down(conv6, down_channel_output[6], 7)
        conv8 = conv_down(conv7, down_channel_output[7], 8)

        # upsampling
        dconv1 = conv_upsample(conv8, up_channel_output[0], drop_out[0], 1)
        dconv2 = conv_upsample(dconv1, up_channel_output[1], drop_out[1], 2)
        dconv3 = conv_upsample(dconv2, up_channel_output[2], drop_out[2], 3)
        dconv4 = conv_upsample(dconv3, up_channel_output[3], drop_out[3], 4)
        dconv5 = conv_upsample(dconv4, up_channel_output[4], drop_out[4], 5)
        dconv6 = conv_upsample(tf.concat([dconv5, biLinearDown(x, biLinearDown_output[0])], axis=3), up_channel_output[5], drop_out[5], 6)
        dconv7 = conv_upsample(tf.concat([dconv6, biLinearDown(x, biLinearDown_output[1])], axis=3), up_channel_output[6], drop_out[6], 7)
        dconv8 = conv_upsample(tf.concat([dconv7, biLinearDown(x, biLinearDown_output[2])], axis=3), up_channel_output[7], drop_out[7], 8)

        # final_tanh
        T_x = finalTanH(dconv8)

        return T_x

    # input tensor x : to feed as Fake
    x = tf.placeholder(tf.float32, [batch_size, 256, 256, channels])  # batch_size x Height x Width x channels

    # generated tensor T(x)
    T_x = T(x)

    # ground-truth tensor Y : to feed as Real
    Y = tf.placeholder(tf.float32, [batch_size, 256, 256, 3])  # just a capture of a video frame

    # define pseudo Discriminator
    def D(to_be_discriminated):  # input is either T(x) or the ground truth, with shape [256 x 256 x 3]
        pseudo_prob = np.float32(np.random.uniform(low=0., high=1.))
        return pseudo_prob

    theta_D = []  # tf.Variables of the Discriminator

    # discriminated results
    D_real = D(Y)
    D_fake = D(T_x)

    # define loss
    E_cGAN = tf.reduce_mean(tf.log(D_real) + tf.log(1. - D_fake))
    E_l1 = tf.reduce_mean(tf.norm(Y - T_x))
    Loss = E_cGAN + lambda_ * E_l1

    # optimizers
    D_solver = tf.train.AdamOptimizer().minimize(-Loss, var_list=theta_D)  # only update D's parameters, so var_list = theta_D
    T_solver = tf.train.AdamOptimizer().minimize(Loss, var_list=theta_T)   # only update T's parameters -- theta_T is the list of generator variables I don't know how to collect

    #### TEST ####
    # define pseudo inputs for testing
    pseudo_x = np.float32(np.random.uniform(low=-1., high=1., size=[16, 256, 256, 99]))
    pseudo_Y = np.float32(np.random.uniform(low=-1., high=1., size=[16, 256, 256, 3]))

    #### RUN ####
    init_g = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init_g)
        sess.run(T_x, feed_dict={x: pseudo_x, Y: pseudo_Y})









Tags: machine-learning, neural-network, deep-learning, tensorflow, gan






asked Jul 21 '18 at 8:05 by Beverlie, edited Jul 21 '18 at 8:17 by Vaalizaadeh





1 Answer






In GANs you have to train some parameters, freeze them, and then train some others; this cycle may occur multiple times. You can use the following sequence of operations.



Define all generator-related variables inside a corresponding variable scope, and then access them using tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='my_scope'). During training, you can pass these variables as the trainable parameters of your optimiser by setting the var_list argument of the minimize method, as sketched below.
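
A minimal sketch (TensorFlow 1.x) of that idea, applied to the question's T(x). Wrapping the generator in a scope named "generator" is my assumption, and I use TRAINABLE_VARIABLES rather than GLOBAL_VARIABLES because the optimizer only needs trainable parameters (GLOBAL_VARIABLES would also pick up non-trainable state such as batch-norm moving averages):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 256, 256, 99])

# Build the generator under one enclosing scope; "generator" is an
# illustrative name -- any unique scope name works.
with tf.variable_scope("generator"):
    # stand-in for the full T(x) defined in the question
    h = tf.layers.conv2d(x, 64, kernel_size=4, strides=2, padding='same')

# Collect exactly the variables created under that scope.
theta_T = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="generator")
print([v.name for v in theta_T])  # e.g. ['generator/conv2d/kernel:0', ...]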



If you want all the trainable variables, you can get them as a list using the tf.trainable_variables method.
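
Continuing the sketch above, that list can also be filtered by name prefix to isolate the generator's variables (again assuming the scope is named "generator"):

# All trainable variables in the default graph...
all_vars = tf.trainable_variables()
# ...narrowed to the ones created under the generator scope.
theta_T = [v for v in all_vars if v.name.startswith("generator/")]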



It may also be worth looking here for other aspects of freezing variables.
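
A toy sketch of that freezing pattern, independent of the question's model: each optimizer's var_list restricts which variables it updates, so the other sub-network is effectively frozen during that step (the variable names here are made up for illustration):

import tensorflow as tf

# Two variables standing in for generator and discriminator parameters.
w_g = tf.get_variable("w_g", initializer=1.0)
w_d = tf.get_variable("w_d", initializer=1.0)
loss = tf.square(w_g * w_d - 2.0)

# Each train op only updates the variables listed in its var_list.
g_step = tf.train.AdamOptimizer(0.01).minimize(loss, var_list=[w_g])
d_step = tf.train.AdamOptimizer(0.01).minimize(loss, var_list=[w_d])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(g_step)  # w_d stays frozen on this step
    sess.run(d_step)  # w_g stays frozen on this step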



You can also take a look at Hvass-Labs's implementation of adversarial networks.






answered Jul 21 '18 at 8:16 by Vaalizaadeh, edited Jul 21 '18 at 10:07