Hands-on Tensorflow 2.0: writing a GAN from scratch
4 June 2019
Paolo Galeone
Computer engineer | Head of ML & CV @ ZURU Tech, Italy | Machine Learning GDE
Blog: pgaleone.eu
Github: github.com/galeone
Twitter: @paolo_galeone
Tensorflow 1.x has a steep learning curve because it uses dataflow graphs to represent computation.
If you come from an imperative programming language (C, Python, ...) you are not used to thinking the way Tensorflow 1.x requires (see the sketch after the list):
1. Define the computational graph, first.
2. Execute the described computation inside a Session.
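A minimal sketch of this graph-and-session workflow, assuming a Tensorflow 1.x installation:

    import tensorflow as tf  # Tensorflow 1.x

    # 1. Define the computational graph: no computation happens here
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a * b

    # 2. Execute the described computation inside a Session
    with tf.Session() as sess:
        print(sess.run(c))  # 6.0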
In Tensorflow 2.0 the eager mode will be the default.
TensorFlow eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later.
In short: it is now possible to use Tensorflow as a replacement for the most common numpy operations.
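For instance, the multiplication sketched above becomes a single imperative statement; a minimal sketch, assuming Tensorflow 2.0 with eager execution enabled by default:

    import tensorflow as tf  # Tensorflow 2.0

    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a * b           # evaluated immediately: no graph, no Session
    print(c)            # tf.Tensor(6.0, shape=(), dtype=float32)
    print(c.numpy())    # 6.0, a NumPy value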
Of course, eager execution is not the only novelty: Tensorflow 2.0 brings a lot of changes, and most of them will require a complete rewrite of existing codebases:
- tf.layers to Keras
- API clean-up: tf.contrib removal and creation of separate projects
- Better software design
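To illustrate the first point, here is a hedged sketch (not from the talk) of how a layer built with the removed tf.layers module maps to the Keras API:

    import tensorflow as tf

    # Tensorflow 1.x style, removed in 2.0:
    #   net = tf.layers.dense(inputs, 64, activation=tf.nn.relu)

    # Tensorflow 2.0: layers and models are defined through tf.keras
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])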
Theory
Practice
Tensorflow 2.0 is still in the early stages of development, and the API can still change. However:
it is possible to develop Machine Learning applications really easily.
From a software engineering point of view, Tensorflow 2.0 is a huge improvement.
I'm authoring a book about Tensorflow 2.0 and Neural Networks!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Hands-On Neural Networks with Tensorflow 2.0
Understanding the Tensorflow architecture, from static graph to eager execution, designing Deep Neural Networks.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want to receive an email when the book is out, subscribe to the newsletter!