What's new in TensorFlow 2.0

1 December 2018

Paolo Galeone

TensorFlow 2.0: why and when?

Why?

TensorFlow 1.x has a steep learning curve because it uses dataflow graphs to represent computation.

If you come from an imperative programming language (C, Python, ...), you're not used to thinking the way TensorFlow 1.x requires:

1. Define the computational graph
2. Execute the described computation inside a Session
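
A minimal sketch of this two-step workflow (assuming the standard TensorFlow 1.x API):

import tensorflow as tf

# 1. Define the computational graph: nothing is computed here
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a + b  # c is a symbolic tensor, not the value 5.0

# 2. Execute the described computation inside a Session
with tf.Session() as sess:
    print(sess.run(c))  # 5.0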

In TensorFlow 2.0, eager mode will be the default.

Eager mode

TensorFlow’s eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later.
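
The same computation in eager mode, evaluated immediately (a minimal sketch; tf.enable_eager_execution is the 1.x opt-in, while 2.0 enables it by default):

import tensorflow as tf
tf.enable_eager_execution()  # not needed in 2.0: eager is the default

a = tf.constant(2.0)
b = tf.constant(3.0)
print(a + b)  # tf.Tensor(5.0, shape=(), dtype=float32), no Session needed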

TL;DR

> People are used to thinking in an imperative way
> Everybody wants to do Machine Learning
> Nobody wants to struggle with a new way of thinking
> PyTorch - TensorFlow's main competitor - uses eager execution by default

TensorFlow is becoming PyTorch.

TL;DR

Fortunately, it will be possible to switch between graph mode and eager mode.

In TensorFlow 1.x it is possible to enable eager mode.
In TensorFlow 2.x it will be possible to enable graph mode.
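
A sketch of both directions (assuming the 1.x tf.enable_eager_execution API and the tf.function decorator from the accepted 2.0 RFCs):

import tensorflow as tf

# TensorFlow 1.x: opt in to eager mode (run once, at program startup)
tf.enable_eager_execution()

# TensorFlow 2.x: opt back into graph mode by tracing a Python function
@tf.function
def add(a, b):
    return a + b  # traced and compiled into a graph behind the scenes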

In short: the main reason is to lower the entry barrier (and thus increase the user base - a marketing move?)

When?

The very first, unstable draft will be available by the end of 2018.

The date of the official 2.0 release is not clear, but it should arrive more or less by the end of spring 2019.

Is it just eager mode enabled by default?

Of course not.

There will be a lot of changes, and most of them will require a complete rewrite of existing codebases.

1. Transition from a graph-based way of thinking to an Object Oriented approach: from tf.layers to Keras
2. Complete removal of tf.contrib and creation of separate projects
3. Removal of deprecated and duplicated APIs
4. Public design process: further changes are still being discussed in the TensorFlow Discussion Group, where everyone can participate.

Object Oriented approach: Keras layers

In TensorFlow 1.x you have to worry about variable scoping, reuse, and gathering.

Using a GAN model definition as an example:

import tensorflow as tf

def generator(inputs):
    # variables are namespaced under "generator/"
    with tf.variable_scope("generator"):
        fc1 = tf.layers.dense(inputs, units=64, activation=tf.nn.elu, name="fc1")
        fc2 = tf.layers.dense(fc1, units=64, activation=tf.nn.elu, name="fc2")
        G = tf.layers.dense(fc2, units=1, name="G")
    return G

def discriminator(inputs, reuse=False):
    # reuse=True shares the variables of a previous call
    with tf.variable_scope("discriminator", reuse=reuse):
        fc1 = tf.layers.dense(inputs, units=32, activation=tf.nn.elu, name="fc1")
        D = tf.layers.dense(fc1, units=1, name="D")
    return D
[...]
D_real = discriminator(real_input)
G = generator(input_noise)
# note the reuse=True: the second call must share the first call's variables
D_fake = discriminator(G, True)

Object Oriented approach: Keras layers

[losses definition]
# Gather D and G variables
D_vars = tf.trainable_variables(scope="discriminator")
G_vars = tf.trainable_variables(scope="generator")

# Define the optimizers and the train operations
train_D = tf.train.AdamOptimizer(1e-5).minimize(D_loss, var_list=D_vars)
train_G = tf.train.AdamOptimizer(1e-5).minimize(G_loss, var_list=G_vars)

Relying on global collections / global variables (tf.trainable_variables) is not a good software engineering practice.
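
For reference, the elided [losses definition] could be the classic GAN sigmoid cross-entropy losses (a hypothetical sketch, not part of the original slides):

# hypothetical losses built from the discriminator logits
D_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(D_real), logits=D_real)
) + tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.zeros_like(D_fake), logits=D_fake)
)
G_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(D_fake), logits=D_fake)
)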

Object Oriented approach: Keras layers

In TensorFlow 2.0 you won't have to worry about any of that: just define the models using tf.keras.layers instead of tf.layers:

def generator(input_shape):
    inputs = tf.keras.layers.Input(input_shape)
    net = tf.keras.layers.Dense(units=64, activation=tf.nn.elu, name="fc1")(inputs)
    net = tf.keras.layers.Dense(units=64, activation=tf.nn.elu, name="fc2")(net)
    net = tf.keras.layers.Dense(units=1, name="G")(net)
    G = tf.keras.Model(inputs=inputs, outputs=net)
    return G

def discriminator(input_shape):
    inputs = tf.keras.layers.Input(input_shape)
    net = tf.keras.layers.Dense(units=32, activation=tf.nn.elu, name="fc1")(inputs)
    net = tf.keras.layers.Dense(units=1, name="D")(net)
    D = tf.keras.Model(inputs=inputs, outputs=net)
    return D
[...]
D = discriminator(input_shape)
G = generator(noise_shape)
D_real = D(real_input)
# calling the same D instance again reuses its variables: no reuse flag needed
D_fake = D(G(input_noise))

Object Oriented approach: Keras layers

[losses definition]
# Define the optimizers and the train operations
train_D = tf.train.AdamOptimizer(1e-5).minimize(D_loss, var_list=D.trainable_variables)
train_G = tf.train.AdamOptimizer(1e-5).minimize(G_loss, var_list=G.trainable_variables)

A much better approach: the model itself carries its own variables.

No more global collections / variables.

Object Oriented approach: Keras layers

WARNING: do NOT migrate your models to Keras right now.

tf.keras.layers ARE NOT a drop-in replacement for tf.layers: among other differences, Keras layers manage their own variables and do not follow variable_scope reuse semantics.

tf.contrib sunset

This is something good.

tf.contrib contains 107 independent projects: this is not maintainable at all, and it is complete nonsense to have 107 different projects inside the TensorFlow project itself.

If you're using tf.contrib inside your codebase, you can start updating it now only if the method you use has a counterpart inside tf.

Otherwise, you have to look at this HUGE list to find out what will happen to the tf.contrib project you're using: whether it will be moved into core TensorFlow, moved to a separate repository, or deleted.

Aliases Clean-Up

A migration tool will be provided.

However, conversion tools are not perfect, hence manual intervention could be required.
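
The announced conversion script (assuming the tf_upgrade_v2 tool that ships with the 2.0 previews) can be run on a single file, for example:

# hypothetical invocation: rewrites deprecated aliases in place
tf_upgrade_v2 --infile model.py --outfile model_v2.py

The file names model.py / model_v2.py are placeholders; the script also reports the lines it cannot convert automatically, which is where the manual intervention comes in.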

What's next?

The API of the 2.0 version is still under active development.

All the changes shown here are the ones labeled 2.0 rfc:accepted in the TensorFlow Community repository.

Everyone can join the TensorFlow Developer Group and give suggestions.


Thank you
