
Inconsistent behavior for tf.variable_scope #6189

@nasimrahaman

Description

Problem:

TensorFlow doesn't place ops (e.g. mul) in pre-existing variable scopes when those scopes are re-entered; it automatically creates a new, uniquified scope instead.

Minimal Reproducible Example

import tensorflow as tf

with tf.variable_scope('layer123'):
    v = tf.get_variable('v', [], initializer=tf.constant_initializer(42., tf.float32))
    w = v * 2
print(w.name)    # Prints 'layer123/mul:0'

However,

with tf.variable_scope('layer123'):
    v = tf.get_variable('v', [], initializer=tf.constant_initializer(42., tf.float32))

with tf.variable_scope('layer123'):
    w = v * 2

print(w.name)    # Prints 'layer123_1/mul:0'

Observe that for the latter, the op w is placed in a different variable scope, auto-named layer123_1.
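For what it's worth, variable reuse itself appears to work across the two blocks; only the op ends up under the uniquified name. A minimal check (my assumption about what is going on, assuming get_variable with reuse=True returns the existing variable):

import tensorflow as tf

with tf.variable_scope('layer123'):
    v = tf.get_variable('v', [], initializer=tf.constant_initializer(42., tf.float32))

with tf.variable_scope('layer123', reuse=True):
    v2 = tf.get_variable('v', [])  # returns the existing variable
    w = v2 * 2

print(v.name)    # Prints 'layer123/v:0'
print(v2.name)   # Prints 'layer123/v:0'      -- variable reuse works
print(w.name)    # Prints 'layer123_1/mul:0'  -- but the op's name scope is uniquified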

I've tried the following, to the same effect:

with tf.variable_scope('layer123') as scope:
    v = tf.get_variable('v', [], initializer=tf.constant_initializer(42., tf.float32))

with tf.variable_scope(scope):
    w = v * 2

print(w.name)    # Prints 'layer123_1/mul:0'

and also:

with tf.variable_scope('layer123'):
    v = tf.get_variable('v', [], initializer=tf.constant_initializer(42., tf.float32))

with tf.variable_scope('layer123', reuse=True):
    w = v * 2

print(w.name)    # Prints 'layer123_1/mul:0'
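The only workaround I've found so far is to re-enter the original name scope explicitly. This is a sketch, assuming scope.original_name_scope is available in this TensorFlow version (I'm not certain it is in 0.11):

import tensorflow as tf

with tf.variable_scope('layer123') as scope:
    v = tf.get_variable('v', [], initializer=tf.constant_initializer(42., tf.float32))

# Passing a name that ends with '/' to tf.name_scope re-enters that exact scope
# instead of creating a new uniquified one.
with tf.variable_scope(scope), tf.name_scope(scope.original_name_scope):
    w = v * 2

print(w.name)    # Expected: 'layer123/mul:0'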

Version info

TensorFlow version: 0.11.0 (GPU)
OS: Ubuntu 14.04 (w/ CUDA 8)
