Flax Basics#
This notebook will walk you through the following workflow:
Instantiating a model from Flax built-in layers or third-party models.
Initializing parameters of the model and manually writing a training loop.
Using optimizers provided by Optax to ease training.
Serialization of parameters and other objects.
Creating your own models and managing state.
Setting up our environment#
Here we provide the code needed to set up the environment for our notebook.
# Install the latest JAXlib version.
!pip install --upgrade -q pip jax jaxlib
# Install Flax at head:
!pip install --upgrade -q git+https://github.com/google/flax.git
import jax
from typing import Any, Callable, Sequence
from jax import random, numpy as jnp
import flax
from flax import linen as nn
Linear regression with Flax#
In the previous JAX for the impatient notebook, we finished up with a linear regression example. As we know, linear regression can also be written as a single dense neural network layer, which we will show in the following so that we can compare how it’s done.
A dense layer is a layer that has a kernel parameter \(W\in\mathcal{M}_{m,n}(\mathbb{R})\), where \(m\) is the number of features in the output of the model and \(n\) is the dimensionality of the input, and a bias parameter \(b\in\mathbb{R}^m\). The dense layer returns \(Wx+b\) for an input \(x\in\mathbb{R}^n\).
This dense layer is already provided by Flax in the flax.linen module (here imported as nn).
# We create one dense layer instance (taking 'features' parameter as input)
model = nn.Dense(features=5)
Layers (and models in general, we’ll use that word from now on) are subclasses of the linen.Module class.
Model parameters & initialization#
Parameters are not stored with the models themselves. You need to initialize parameters by calling the init function, using a PRNGKey and dummy input data.
key1, key2 = random.split(random.key(0))
x = random.normal(key1, (10,)) # Dummy input data
params = model.init(key2, x) # Initialization call
jax.tree_util.tree_map(lambda x: x.shape, params) # Checking output shapes
{'params': {'bias': (5,), 'kernel': (10, 5)}}
Note: JAX and Flax, like NumPy, are row-based systems, meaning that vectors are represented as row vectors and not column vectors. This can be seen in the shape of the kernel here.
The result is what we expect: bias and kernel parameters of the correct size. Under the hood:
The dummy input data x is used to trigger shape inference: we only declared the number of features we wanted in the output of the model, not the size of the input. Flax infers the correct size of the kernel on its own.
The random PRNG key is used to trigger the initialization functions (those have default values provided by the module here).
Initialization functions are called to generate the initial set of parameters that the model will use. These are functions that take (PRNG Key, shape, dtype) as arguments and return an Array of shape shape.
The init function returns the initialized set of parameters (you can also get the output of the forward pass on the dummy input with the same syntax by using the init_with_output method instead of init, as sketched below).
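For example, a minimal sketch of init_with_output with the same dummy input; the name y for the forward-pass output is our own choice:
# Same initialization as above, but also returning the forward pass on the dummy input.
y, params = model.init_with_output(key2, x)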
To conduct a forward pass with the model with a given set of parameters (which are never stored with the model), we just use the apply method, providing it the parameters to use as well as the input:
model.apply(params, x)
Array([-1.3721193 , 0.61131495, 0.6442836 , 2.2192965 , -1.1271116 ], dtype=float32)
Gradient descent#
If you jumped here directly without going through the JAX part, here is the linear regression formulation we’re going to use: from a set of data points \(\{(x_i,y_i), i\in \{1,\ldots, k\}, x_i\in\mathbb{R}^n,y_i\in\mathbb{R}^m\}\), we try to find a set of parameters \(W\in \mathcal{M}_{m,n}(\mathbb{R}), b\in\mathbb{R}^m\) such that the function \(f_{W,b}(x)=Wx+b\) minimizes the mean squared error:
\(\mathcal{L}(W,b)=\frac{1}{k}\sum_{i=1}^{k} \frac{1}{2}\|y_i-f_{W,b}(x_i)\|^2_2\)
Here, we see that the tuple \((W,b)\) matches the parameters of the Dense layer. We’ll perform gradient descent using those. Let’s first generate the fake data we’ll use. The data is exactly the same as in the JAX part’s linear regression pytree example.
# Set problem dimensions.
n_samples = 20
x_dim = 10
y_dim = 5
# Generate random ground truth W and b.
key = random.key(0)
k1, k2 = random.split(key)
W = random.normal(k1, (x_dim, y_dim))
b = random.normal(k2, (y_dim,))
# Store the parameters in a FrozenDict pytree.
true_params = flax.core.freeze({'params': {'bias': b, 'kernel': W}})
# Generate samples with additional noise.
key_sample, key_noise = random.split(k1)
x_samples = random.normal(key_sample, (n_samples, x_dim))
y_samples = jnp.dot(x_samples, W) + b + 0.1 * random.normal(key_noise,(n_samples, y_dim))
print('x shape:', x_samples.shape, '; y shape:', y_samples.shape)
x shape: (20, 10) ; y shape: (20, 5)
We copy the same training loop that we used in the JAX pytree linear regression example with jax.value_and_grad(), but here we can use model.apply() instead of having to define our own feed-forward function (predict_pytree() in the JAX example).
# Same as JAX version but using model.apply().
@jax.jit
def mse(params, x_batched, y_batched):
  # Define the squared loss for a single pair (x,y)
  def squared_error(x, y):
    pred = model.apply(params, x)
    return jnp.inner(y-pred, y-pred) / 2.0
  # Vectorize the previous to compute the average of the loss on all samples.
  return jnp.mean(jax.vmap(squared_error)(x_batched, y_batched), axis=0)
And finally perform the gradient descent.
learning_rate = 0.3 # Gradient step size.
print('Loss for "true" W,b: ', mse(true_params, x_samples, y_samples))
loss_grad_fn = jax.value_and_grad(mse)
@jax.jit
def update_params(params, learning_rate, grads):
  params = jax.tree_util.tree_map(
      lambda p, g: p - learning_rate * g, params, grads)
  return params

for i in range(101):
  # Perform one gradient update.
  loss_val, grads = loss_grad_fn(params, x_samples, y_samples)
  params = update_params(params, learning_rate, grads)
  if i % 10 == 0:
    print(f'Loss step {i}: ', loss_val)
Loss for "true" W,b: 0.023639796
Loss step 0: 35.343876
Loss step 10: 0.51434684
Loss step 20: 0.11384157
Loss step 30: 0.039326735
Loss step 40: 0.019916197
Loss step 50: 0.014209114
Loss step 60: 0.012425648
Loss step 70: 0.011850391
Loss step 80: 0.011661778
Loss step 90: 0.011599409
Loss step 100: 0.011578697
Optimizing with Optax#
Flax used to use its own flax.optim package for optimization, but with FLIP #1009 this was deprecated in favor of Optax.
Basic usage of Optax is straightforward:
Choose an optimization method (e.g. optax.adam).
Create optimizer state from parameters (for the Adam optimizer, this state will contain the momentum values).
Compute the gradients of your loss with jax.value_and_grad().
At every iteration, call the Optax update function to update the internal optimizer state and create an update to the parameters. Then add the update to the parameters with Optax’s apply_updates method.
Note that Optax can do a lot more: it’s designed for composing simple gradient transformations into more complex transformations that allow you to implement a wide range of optimizers. There is also support for changing optimizer hyperparameters over time (“schedules”), applying different updates to different parts of the parameter tree (“masking”), and much more. For details please refer to the official documentation.
import optax
tx = optax.adam(learning_rate=learning_rate)
opt_state = tx.init(params)
loss_grad_fn = jax.value_and_grad(mse)
for i in range(101):
  loss_val, grads = loss_grad_fn(params, x_samples, y_samples)
  updates, opt_state = tx.update(grads, opt_state)
  params = optax.apply_updates(params, updates)
  if i % 10 == 0:
    print('Loss step {}: '.format(i), loss_val)
Loss step 0: 0.011577629
Loss step 10: 0.2614313
Loss step 20: 0.076747075
Loss step 30: 0.036439072
Loss step 40: 0.022011759
Loss step 50: 0.01617833
Loss step 60: 0.013002962
Loss step 70: 0.01202613
Loss step 80: 0.0117645
Loss step 90: 0.011646037
Loss step 100: 0.011585514
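As an aside, the composition of gradient transformations mentioned earlier might look like the following hedged sketch; the clipping threshold and schedule values are made up for illustration and are not used in this guide:
# Sketch: clip gradients by global norm, then apply Adam driven by an
# exponentially decaying learning-rate schedule.
schedule = optax.exponential_decay(
    init_value=0.3, transition_steps=100, decay_rate=0.99)
tx_composed = optax.chain(
    optax.clip_by_global_norm(1.0),
    optax.adam(learning_rate=schedule))
opt_state_composed = tx_composed.init(params)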
Serializing the result#
Now that we’re happy with the result of our training, we might want to save the model parameters to load them back later. Flax provides a serialization package to enable you to do that.
from flax import serialization
bytes_output = serialization.to_bytes(params)
dict_output = serialization.to_state_dict(params)
print('Dict output')
print(dict_output)
print('Bytes output')
print(bytes_output)
Dict output
{'params': {'bias': Array([-1.4555763, -2.027799 , 2.0790977, 1.2186142, -0.9980988], dtype=float32), 'kernel': Array([[ 1.0098811 , 0.1893436 , 0.04455061, -0.92802244, 0.34784058],
[ 1.7298452 , 0.9879369 , 1.1640465 , 1.1006078 , -0.1065392 ],
[-1.202946 , 0.28635207, 1.415598 , 0.11870954, -1.3141488 ],
[-1.1941487 , -0.18958527, 0.03413866, 1.3169426 , 0.08060387],
[ 0.13852389, 1.371304 , -1.3187188 , 0.5315267 , -2.2404993 ],
[ 0.5629402 , 0.8122313 , 0.31751987, 0.534551 , 0.9050044 ],
[-0.37925997, 1.7410395 , 1.0790284 , -0.5039832 , 0.92830735],
[ 0.970649 , -1.3153405 , 0.33681503, 0.80993414, -1.2018454 ],
[ 1.0194316 , -0.62024766, 1.081883 , -1.8389739 , -0.4580481 ],
[-0.6436535 , 0.45666716, -1.1329136 , -0.6853864 , 0.1682897 ]], dtype=float32)}}
Bytes output
b'\x81\xa6params\x82\xa4bias\xc7!\x01\x93\x91\x05\xa7float32\xc4\x14SP\xba\xbfu\xc7\x01\xc0\xf0\x0f\x05@\x8d\xfb\x9b?g\x83\x7f\xbf\xa6kernel\xc7\xd6\x01\x93\x92\n\x05\xa7float32\xc4\xc8\xc9C\x81?J\xe3A>\xb2z6=\xe1\x92m\xbf)\x18\xb2>\x91k\xdd?o\xe9|?z\xff\x94?\xb7\xe0\x8c?:1\xda\xbd"\xfa\x99\xbf\xbd\x9c\x92>Q2\xb5?\xfd\x1d\xf3=\x076\xa8\xbf\xdd\xd9\x98\xbf\xa4"B\xbe\xfc\xd4\x0b=\x93\x91\xa8?\xa4\x13\xa5=5\xd9\r>\xe4\x86\xaf?\xc7\xcb\xa8\xbf"\x12\x08?Wd\x0f\xc0\xd9\x1c\x10?d\xeeO?\xf7\x91\xa2>V\xd8\x08?^\xaeg?].\xc2\xbeb\xda\xde?\x9a\x1d\x8a?\x0b\x05\x01\xbf\x8d\xa5m?t|x?\x14]\xa8\xbf\x05s\xac>\xd8WO?\x12\xd6\x99\xbf\xbc|\x82?\x8d\xc8\x1e\xbf${\x8a?\x7fc\xeb\xbfH\x85\xea\xbez\xc6$\xbfG\xd0\xe9>P\x03\x91\xbf|u/\xbf#T,>'
To load the model back, you’ll need to use a template of the model parameter structure, like the one you would get from the model initialization. Here, we use the previously generated params as a template. Note that this will produce a new variable structure, and not mutate in-place.
The point of enforcing the structure through a template is to avoid issues for users downstream, so you first need to have the right model that generates the parameter structure.
serialization.from_bytes(params, bytes_output)
{'params': {'bias': array([-1.4555763, -2.027799 , 2.0790977, 1.2186142, -0.9980988],
dtype=float32),
'kernel': array([[ 1.0098811 , 0.1893436 , 0.04455061, -0.92802244, 0.34784058],
[ 1.7298452 , 0.9879369 , 1.1640465 , 1.1006078 , -0.1065392 ],
[-1.202946 , 0.28635207, 1.415598 , 0.11870954, -1.3141488 ],
[-1.1941487 , -0.18958527, 0.03413866, 1.3169426 , 0.08060387],
[ 0.13852389, 1.371304 , -1.3187188 , 0.5315267 , -2.2404993 ],
[ 0.5629402 , 0.8122313 , 0.31751987, 0.534551 , 0.9050044 ],
[-0.37925997, 1.7410395 , 1.0790284 , -0.5039832 , 0.92830735],
[ 0.970649 , -1.3153405 , 0.33681503, 0.80993414, -1.2018454 ],
[ 1.0194316 , -0.62024766, 1.081883 , -1.8389739 , -0.4580481 ],
[-0.6436535 , 0.45666716, -1.1329136 , -0.6853864 , 0.1682897 ]],
dtype=float32)}}
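In practice you will usually write the serialized bytes to disk and restore them later. A minimal sketch, where the file name params.msgpack is only an example:
# Round-trip the serialized parameters through a file (to_bytes produces msgpack data).
with open('params.msgpack', 'wb') as f:
  f.write(bytes_output)
with open('params.msgpack', 'rb') as f:
  restored_params = serialization.from_bytes(params, f.read())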
Defining your own models#
Flax allows you to define your own models, which will usually be a bit more complicated than a linear regression. In this section, we’ll show you how to build simple models. To do so, you’ll need to create subclasses of the base nn.Module class.
Keep in mind that we imported linen as nn, and that this only works with the new linen API.
Module basics#
The base abstraction for models is the nn.Module class, and every predefined layer in Flax (like the previous Dense) is a subclass of nn.Module. Let’s take a look and start by defining a simple but custom multi-layer perceptron, i.e. a sequence of Dense layers interleaved with calls to a non-linear activation function.
class ExplicitMLP(nn.Module):
  features: Sequence[int]

  def setup(self):
    # we automatically know what to do with lists, dicts of submodules
    self.layers = [nn.Dense(feat) for feat in self.features]
    # for single submodules, we would just write:
    # self.layer1 = nn.Dense(feat1)

  def __call__(self, inputs):
    x = inputs
    for i, lyr in enumerate(self.layers):
      x = lyr(x)
      if i != len(self.layers) - 1:
        x = nn.relu(x)
    return x
key1, key2 = random.split(random.key(0), 2)
x = random.uniform(key1, (4,4))
model = ExplicitMLP(features=[3,4,5])
params = model.init(key2, x)
y = model.apply(params, x)
print('initialized parameter shapes:\n', jax.tree_util.tree_map(jnp.shape, flax.core.unfreeze(params)))
print('output:\n', y)
initialized parameter shapes:
{'params': {'layers_0': {'bias': (3,), 'kernel': (4, 3)}, 'layers_1': {'bias': (4,), 'kernel': (3, 4)}, 'layers_2': {'bias': (5,), 'kernel': (4, 5)}}}
output:
[[ 0. 0. 0. 0. 0. ]
[ 0.0072379 -0.00810347 -0.02550939 0.02151716 -0.01261241]
[ 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. ]]
As we can see, an nn.Module subclass is made of:
A collection of data fields (nn.Module subclasses are Python dataclasses); here we only have the features field of type Sequence[int].
A setup() method that is called at the end of __post_init__, where you can register the submodules, variables, and parameters you will need in your model.
A __call__ function that returns the output of the model from a given input.
The model structure defines a pytree of parameters following the same tree structure as the model: the params tree contains one layers_n sub-dict per layer, and each of those contains the parameters of the associated Dense layer. The layout is very explicit.
Note: lists are mostly managed as you would expect (WIP); there are corner cases you should be aware of, as pointed out here.
Since the module structure and its parameters are not tied to each other, you can’t directly call model(x) on a given input, as that will return an error. The __call__ function is wrapped in the apply method, which is the one to call on an input:
try:
  y = model(x) # Returns an error
except AttributeError as e:
  print(e)
"ExplicitMLP" object has no attribute "layers". If "layers" is defined in '.setup()', remember these fields are only accessible from inside 'init' or 'apply'.
Since here we have a very simple model, we could have used an alternative (but equivalent) way of declaring the submodules inline in the __call__ using the @nn.compact annotation, like so:
class SimpleMLP(nn.Module):
  features: Sequence[int]

  @nn.compact
  def __call__(self, inputs):
    x = inputs
    for i, feat in enumerate(self.features):
      x = nn.Dense(feat, name=f'layers_{i}')(x)
      if i != len(self.features) - 1:
        x = nn.relu(x)
    # providing a name is optional though!
    # the default autonames would be "Dense_0", "Dense_1", ...
    return x
key1, key2 = random.split(random.key(0), 2)
x = random.uniform(key1, (4,4))
model = SimpleMLP(features=[3,4,5])
params = model.init(key2, x)
y = model.apply(params, x)
print('initialized parameter shapes:\n', jax.tree_util.tree_map(jnp.shape, flax.core.unfreeze(params)))
print('output:\n', y)
initialized parameter shapes:
{'params': {'layers_0': {'bias': (3,), 'kernel': (4, 3)}, 'layers_1': {'bias': (4,), 'kernel': (3, 4)}, 'layers_2': {'bias': (5,), 'kernel': (4, 5)}}}
output:
[[ 0. 0. 0. 0. 0. ]
[ 0.0072379 -0.00810347 -0.02550939 0.02151716 -0.01261241]
[ 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. ]]
There are, however, a few differences you should be aware of between the two declaration modes:
In setup, you are able to name some sublayers and keep them around for further use (e.g. encoder/decoder methods in autoencoders).
If you want to have multiple methods, then you need to declare the module using setup, as the @nn.compact annotation only allows one method to be annotated (see the sketch after this list).
The last initialization will be handled differently. See these notes for more details (TODO: add notes link).
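For instance, here is a hedged sketch of a module that needs setup because it exposes more than one method; the TinyAutoencoder name and its layers are made up for illustration:
class TinyAutoencoder(nn.Module):
  latent_dim: int
  out_dim: int

  def setup(self):
    # Submodules registered here can be reused from several methods.
    self.encoder = nn.Dense(self.latent_dim)
    self.decoder = nn.Dense(self.out_dim)

  def encode(self, x):
    return self.encoder(x)

  def decode(self, z):
    return self.decoder(z)

  def __call__(self, x):
    return self.decode(self.encode(x))
Individual methods can then be called through apply, e.g. model.apply(params, x, method=TinyAutoencoder.encode).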
Module parameters#
In the previous MLP example, we relied only on predefined layers and operators (Dense, relu). Let’s imagine that you didn’t have a Dense layer provided by Flax and you wanted to write it on your own. Here is what it would look like using the @nn.compact way of declaring a new module:
class SimpleDense(nn.Module):
  features: int
  kernel_init: Callable = nn.initializers.lecun_normal()
  bias_init: Callable = nn.initializers.zeros_init()

  @nn.compact
  def __call__(self, inputs):
    kernel = self.param('kernel',
                        self.kernel_init,  # Initialization function
                        (inputs.shape[-1], self.features))  # shape info.
    y = jnp.dot(inputs, kernel)
    bias = self.param('bias', self.bias_init, (self.features,))
    y = y + bias
    return y
key1, key2 = random.split(random.key(0), 2)
x = random.uniform(key1, (4,4))
model = SimpleDense(features=3)
params = model.init(key2, x)
y = model.apply(params, x)
print('initialized parameters:\n', params)
print('output:\n', y)
initialized parameters:
{'params': {'kernel': Array([[ 0.61506 , -0.22728713, 0.6054702 ],
[-0.29617992, 1.1232013 , -0.879759 ],
[-0.35162622, 0.3806491 , 0.6893246 ],
[-0.1151355 , 0.04567898, -1.091212 ]], dtype=float32), 'bias': Array([0., 0., 0.], dtype=float32)}}
output:
[[-0.02996204 1.102088 -0.6660265 ]
[-0.31092793 0.6323942 -0.53678817]
[ 0.01424007 0.9424717 -0.6356147 ]
[ 0.36818963 0.3586519 -0.00459214]]
Here, we see how to both declare and assign a parameter to the model using the self.param method. It takes as input (name, init_fn, *init_args, **init_kwargs):
name is simply the name of the parameter that will end up in the parameter structure.
init_fn is a function with input (PRNGKey, *init_args, **init_kwargs) returning an Array, with init_args and init_kwargs being the arguments needed to call the initialization function.
init_args and init_kwargs are the arguments to provide to the initialization function.
Such params can also be declared in the setup method; there, however, you won’t be able to rely on shape inference, because Flax uses lazy initialization at the first call site.
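As an illustration, here is a hedged sketch of the same dense layer declared in setup; the SetupDense name and the explicit in_features field are our own additions, needed precisely because no input is available for shape inference:
class SetupDense(nn.Module):
  features: int
  in_features: int  # must be provided explicitly: setup() never sees the inputs
  kernel_init: Callable = nn.initializers.lecun_normal()
  bias_init: Callable = nn.initializers.zeros_init()

  def setup(self):
    self.kernel = self.param('kernel', self.kernel_init,
                             (self.in_features, self.features))
    self.bias = self.param('bias', self.bias_init, (self.features,))

  def __call__(self, inputs):
    return jnp.dot(inputs, self.kernel) + self.bias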
Variables and collections of variables#
As we’ve seen so far, working with models means working with:
A subclass of nn.Module;
A pytree of parameters for the model (typically from model.init());
However, this is not enough to cover everything that we would need for machine learning, especially neural networks. In some cases, you might want your neural network to keep track of some internal state while it runs (e.g. batch normalization layers). There is a way to declare variables beyond the parameters of the model with the variable method.
For demonstration purposes, we’ll implement a simplified but similar mechanism to batch normalization: we’ll store running averages and subtract those from the input at training time. For proper batchnorm, you should use (and look at) the implementation here.
class BiasAdderWithRunningMean(nn.Module):
  decay: float = 0.99

  @nn.compact
  def __call__(self, x):
    # easy pattern to detect if we're initializing via empty variable tree
    is_initialized = self.has_variable('batch_stats', 'mean')
    ra_mean = self.variable('batch_stats', 'mean',
                            lambda s: jnp.zeros(s),
                            x.shape[1:])
    bias = self.param('bias', lambda rng, shape: jnp.zeros(shape), x.shape[1:])
    if is_initialized:
      ra_mean.value = self.decay * ra_mean.value + (1.0 - self.decay) * jnp.mean(x, axis=0, keepdims=True)
    return x - ra_mean.value + bias
key1, key2 = random.split(random.key(0), 2)
x = jnp.ones((10,5))
model = BiasAdderWithRunningMean()
variables = model.init(key1, x)
print('initialized variables:\n', variables)
y, updated_state = model.apply(variables, x, mutable=['batch_stats'])
print('updated state:\n', updated_state)
initialized variables:
{'batch_stats': {'mean': Array([0., 0., 0., 0., 0.], dtype=float32)}, 'params': {'bias': Array([0., 0., 0., 0., 0.], dtype=float32)}}
updated state:
{'batch_stats': {'mean': Array([[0.01, 0.01, 0.01, 0.01, 0.01]], dtype=float32)}}
Here, updated_state contains only the state variables that are mutated by the model while applying it to the data. To update the variables and get the new parameters of the model, we can use the following pattern:
for val in [1.0, 2.0, 3.0]:
  x = val * jnp.ones((10,5))
  y, updated_state = model.apply(variables, x, mutable=['batch_stats'])
  old_state, params = flax.core.pop(variables, 'params')
  variables = flax.core.freeze({'params': params, **updated_state})
  print('updated state:\n', updated_state) # Shows only the mutable part
updated state:
{'batch_stats': {'mean': Array([[0.01, 0.01, 0.01, 0.01, 0.01]], dtype=float32)}}
updated state:
{'batch_stats': {'mean': Array([[0.0299, 0.0299, 0.0299, 0.0299, 0.0299]], dtype=float32)}}
updated state:
{'batch_stats': {'mean': Array([[0.059601, 0.059601, 0.059601, 0.059601, 0.059601]], dtype=float32)}}
From this simplified example, you should be able to derive a full BatchNorm implementation, or any layer involving a state. To finish, let’s add an optimizer to see how to play with both parameters updated by an optimizer and state variables.
This example isn’t doing anything and is only for demonstration purposes.
from functools import partial
@partial(jax.jit, static_argnums=(0, 1))
def update_step(tx, apply_fn, x, opt_state, params, state):

  def loss(params):
    y, updated_state = apply_fn({'params': params, **state},
                                x, mutable=list(state.keys()))
    l = ((x - y) ** 2).sum()
    return l, updated_state

  (l, state), grads = jax.value_and_grad(loss, has_aux=True)(params)
  updates, opt_state = tx.update(grads, opt_state)
  params = optax.apply_updates(params, updates)
  return opt_state, params, state
x = jnp.ones((10,5))
variables = model.init(random.key(0), x)
state, params = flax.core.pop(variables, 'params')
del variables
tx = optax.sgd(learning_rate=0.02)
opt_state = tx.init(params)
for _ in range(3):
  opt_state, params, state = update_step(tx, model.apply, x, opt_state, params, state)
  print('Updated state: ', state)
Updated state: {'batch_stats': {'mean': Array([[0.01, 0.01, 0.01, 0.01, 0.01]], dtype=float32)}}
Updated state: {'batch_stats': {'mean': Array([[0.0199, 0.0199, 0.0199, 0.0199, 0.0199]], dtype=float32)}}
Updated state: {'batch_stats': {'mean': Array([[0.029701, 0.029701, 0.029701, 0.029701, 0.029701]], dtype=float32)}}
Note that the above function has quite a verbose signature and it would not actually work with jax.jit() as-is, because the function arguments are not “valid JAX types”; this is why tx and apply_fn are marked as static arguments above.
Flax provides a handy wrapper - TrainState - that simplifies the above code. Check out flax.training.train_state.TrainState to learn more.
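For reference, here is a hedged sketch of a similar loop built on TrainState; it assumes a plain parameter-only model such as nn.Dense (modules with extra collections like batch_stats need a subclassed TrainState), and all names below are our own:
from flax.training import train_state

dense = nn.Dense(features=5)
x = jnp.ones((10, 5))
y_target = jnp.zeros((10, 5))
params = dense.init(random.key(0), x)['params']

# TrainState bundles apply_fn, the parameters and the optimizer state.
state = train_state.TrainState.create(
    apply_fn=dense.apply, params=params, tx=optax.sgd(learning_rate=0.02))

@jax.jit
def train_step(state, x, y):
  def loss_fn(params):
    pred = state.apply_fn({'params': params}, x)
    return ((pred - y) ** 2).mean()
  grads = jax.grad(loss_fn)(state.params)
  return state.apply_gradients(grads=grads)

state = train_step(state, x, y_target)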
Exporting to TensorFlow’s SavedModel with jax2tf#
JAX released an experimental converter called jax2tf, which allows converting trained Flax models into TensorFlow’s SavedModel format (so it can be used for TF Hub, TF.lite, TF.js, or other downstream applications). The repository contains more documentation and has various examples for Flax.
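As a hedged sketch of what such an export might look like (the nn.Dense model, the TensorSpec shape, and the /tmp/flax_saved_model path are illustrative only, and newer jax2tf versions may differ in details):
import tensorflow as tf
from jax.experimental import jax2tf

dense_model = nn.Dense(features=5)
dense_params = dense_model.init(random.key(0), jnp.ones((10,)))

# Convert the JAX forward pass into a TF function, closing over the parameters.
tf_predict = tf.function(
    jax2tf.convert(lambda x: dense_model.apply(dense_params, x)),
    input_signature=[tf.TensorSpec([10], tf.float32)],
    autograph=False)

module = tf.Module()
module.predict = tf_predict
tf.saved_model.save(module, '/tmp/flax_saved_model')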