Replace branch master with main
PiperOrigin-RevId: 598596788
fabianp authored and OptaxDev committed Jan 15, 2024
1 parent 18ba6a5 commit 0cb437f
Showing 16 changed files with 26 additions and 26 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/tests.yml
@@ -2,9 +2,9 @@ name: tests

 on:
   push:
-    branches: ["master"]
+    branches: ["main"]
   pull_request:
-    branches: ["master"]
+    branches: ["main"]
   schedule:
     - cron: '0 3 * * *'

16 changes: 8 additions & 8 deletions README.md
@@ -77,7 +77,7 @@ updates, opt_state = optimizer.update(grads, opt_state)
params = optax.apply_updates(params, updates)
```

-You can continue the quick start in [the Optax quickstart notebook.](https://github.com/deepmind/optax/blob/master/examples/quick_start.ipynb)
+You can continue the quick start in [the Optax quickstart notebook.](https://github.com/deepmind/optax/blob/main/examples/quick_start.ipynb)
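
For context, the quick-start pattern referenced above looks roughly like the sketch below; the toy loss, data and hyper-parameters are illustrative placeholders rather than the notebook's own code.

```python
import jax
import jax.numpy as jnp
import optax

# Toy problem: fit a linear model y = x @ w with a squared-error loss.
def loss_fn(params, x, y):
  pred = x @ params['w']
  return jnp.mean((pred - y) ** 2)

params = {'w': jnp.zeros(3)}
optimizer = optax.adam(learning_rate=1e-2)
opt_state = optimizer.init(params)

x = jnp.ones((8, 3))
y = jnp.full((8,), 2.0)

for _ in range(100):
  grads = jax.grad(loss_fn)(params, x, y)
  updates, opt_state = optimizer.update(grads, opt_state)
  params = optax.apply_updates(params, updates)
```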


## Components
@@ -86,7 +86,7 @@ We refer to the [docs](https://optax.readthedocs.io/en/latest/index.html)
for a detailed list of available Optax components. Here, we highlight
the main categories of building blocks provided by Optax.

-### Gradient Transformations ([transform.py](https://github.com/deepmind/optax/blob/master/optax/_src/transform.py))
+### Gradient Transformations ([transform.py](https://github.com/deepmind/optax/blob/main/optax/_src/transform.py))

One of the key building blocks of `optax` is a `GradientTransformation`.

@@ -107,7 +107,7 @@ state = tx.init(params) # init stats
grads, state = tx.update(grads, state, params) # transform & update stats.
```
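
As a small illustration of the `init`/`update` pattern in this hunk, here is a sketch that applies a single built-in transformation (`optax.scale_by_adam` is chosen purely as an example):

```python
import jax.numpy as jnp
import optax

params = {'w': jnp.ones(2)}
grads = {'w': jnp.full(2, 0.5)}   # pretend these came from jax.grad

tx = optax.scale_by_adam()        # a GradientTransformation
state = tx.init(params)           # init statistics for `params`
updates, state = tx.update(grads, state, params)  # transform & update stats
```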

-### Composing Gradient Transformations ([combine.py](https://github.com/deepmind/optax/blob/master/optax/_src/combine.py))
+### Composing Gradient Transformations ([combine.py](https://github.com/deepmind/optax/blob/main/optax/_src/combine.py))

The fact that transformations take candidate gradients as input and return
processed gradients as output (in contrast to returning the updated parameters)
@@ -127,7 +127,7 @@ my_optimiser = chain(
scale(-learning_rate))
```
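
A runnable sketch of such a composition, along the lines of the snippet above (the particular transformations and hyper-parameter values are only examples):

```python
import optax

learning_rate = 1e-3

# Clip the global gradient norm, rescale with Adam statistics, then flip the
# sign and scale by the learning rate so the result can be *added* to params.
my_optimiser = optax.chain(
    optax.clip_by_global_norm(1.0),
    optax.scale_by_adam(b1=0.9, b2=0.999, eps=1e-8),
    optax.scale(-learning_rate))
```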

-### Wrapping Gradient Transformations ([wrappers.py](https://github.com/deepmind/optax/blob/master/optax/_src/wrappers.py))
+### Wrapping Gradient Transformations ([wrappers.py](https://github.com/deepmind/optax/blob/main/optax/_src/wrappers.py))

Optax also provides several wrappers that take a `GradientTransformation` as
input and return a new `GradientTransformation` that modifies the behaviour
@@ -148,7 +148,7 @@ Other examples of wrappers include accumulating gradients over multiple steps
or applying the inner transformation only to specific parameters or at
specific steps.
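
For instance, gradient accumulation can be expressed as a wrapper around an inner optimiser; the sketch below assumes `optax.MultiSteps` with an `every_k_schedule` argument (see wrappers.py for the exact interface):

```python
import optax

base = optax.adam(learning_rate=1e-3)

# Accumulate gradients over 4 micro-batches before applying one real update.
multi_step = optax.MultiSteps(base, every_k_schedule=4)
tx = multi_step.gradient_transformation()

# `tx` behaves like any other GradientTransformation:
#   state = tx.init(params); updates, state = tx.update(grads, state, params)
```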

-### Schedules ([schedule.py](https://github.com/deepmind/optax/blob/master/optax/_src/schedule.py))
+### Schedules ([schedule.py](https://github.com/deepmind/optax/blob/main/optax/_src/schedule.py))

Many popular transformations use time-dependent components, e.g. to anneal
some hyper-parameter (e.g. the learning rate). Optax provides for this purpose
@@ -176,7 +176,7 @@ optimiser = chain(
scale_by_schedule(schedule_fn))
```
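
A sketch of pairing a schedule with `scale_by_schedule`, in the spirit of the snippet above (the schedule choice and its values are illustrative):

```python
import optax

# Decay the step size linearly from 1e-3 to 1e-4 over 10,000 update steps.
schedule_fn = optax.linear_schedule(
    init_value=1e-3, end_value=1e-4, transition_steps=10_000)

optimiser = optax.chain(
    optax.scale_by_adam(),
    optax.scale_by_schedule(schedule_fn),  # time-dependent step size
    optax.scale(-1.0))                     # descent direction
```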

-### Popular optimisers ([alias.py](https://github.com/deepmind/optax/blob/master/optax/_src/alias.py))
+### Popular optimisers ([alias.py](https://github.com/deepmind/optax/blob/main/optax/_src/alias.py))

In addition to the low-level building blocks, we also provide aliases for popular
optimisers built using these components (e.g. RMSProp, Adam, AdamW, etc.).
@@ -192,7 +192,7 @@ def adamw(learning_rate, b1, b2, eps, weight_decay):
scale_and_decay(-learning_rate, weight_decay=weight_decay))
```
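
In practice the alias can be used directly instead of writing out the chain by hand; a minimal sketch (hyper-parameter values are illustrative):

```python
import jax.numpy as jnp
import optax

params = {'w': jnp.zeros(3)}  # placeholder parameters

optimizer = optax.adamw(learning_rate=1e-3, b1=0.9, b2=0.999,
                        eps=1e-8, weight_decay=1e-4)
opt_state = optimizer.init(params)
```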

-### Applying updates ([update.py](https://github.com/deepmind/optax/blob/master/optax/_src/update.py))
+### Applying updates ([update.py](https://github.com/deepmind/optax/blob/main/optax/_src/update.py))

After transforming an update using a `GradientTransformation` or any custom
manipulation of the update, you will typically apply the update to a set
@@ -236,7 +236,7 @@ typically intractable due to the quadratic memory requirements. Solving for the
diagonals of these matrices is often a better solution. The library offers
functions for computing these diagonals with sub-quadratic memory requirements.
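
To make the idea concrete, below is a generic JAX sketch of Hutchinson's estimator for the Hessian diagonal via Hessian-vector products; it is a hand-rolled illustration, not necessarily the API exposed by Optax's second-order utilities.

```python
import jax
import jax.numpy as jnp

def hessian_diag_estimate(loss_fn, params, key, num_samples=16):
  """Estimates diag(H) via E[v * (H v)] with Rademacher probe vectors v."""
  def hvp(v):
    # Hessian-vector product without materialising the full Hessian.
    return jax.jvp(jax.grad(loss_fn), (params,), (v,))[1]

  def one_sample(k):
    v = jax.random.rademacher(k, params.shape).astype(params.dtype)
    return v * hvp(v)

  keys = jax.random.split(key, num_samples)
  return jnp.mean(jax.vmap(one_sample)(keys), axis=0)

# Quadratic example whose exact Hessian diagonal is 2 * a.
a = jnp.array([1.0, 2.0, 3.0])
loss = lambda p: jnp.sum(a * p ** 2)
print(hessian_diag_estimate(loss, jnp.zeros(3), jax.random.PRNGKey(0)))
```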

-### Stochastic gradient estimators ([stochastic_gradient_estimators.py](https://github.com/google-deepmind/optax/blob/master/optax/monte_carlo/stochastic_gradient_estimators.py))
+### Stochastic gradient estimators ([stochastic_gradient_estimators.py](https://github.com/google-deepmind/optax/blob/main/optax/monte_carlo/stochastic_gradient_estimators.py))

Stochastic gradient estimators compute Monte Carlo estimates of gradients of
the expectation of a function under a distribution with respect to the
2 changes: 1 addition & 1 deletion docs/contributors.md
@@ -13,7 +13,7 @@ discussion on the best way to land new features, and can also provide
opportunities for collaborations with other contributors.

Some more details on contributing code are provided in the
-[CONTRIBUTING.md](https://github.com/deepmind/optax/blob/master/CONTRIBUTING.md)
+[CONTRIBUTING.md](https://github.com/deepmind/optax/blob/main/CONTRIBUTING.md)
file in the source tree.

#### Design Documents
2 changes: 1 addition & 1 deletion examples/adversarial_training.ipynb
@@ -30,7 +30,7 @@
"# Adversarial training\n",
"\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/optax/blob/master/examples/adversarial_training.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/optax/blob/main/examples/adversarial_training.ipynb)\n",
"\n",
"\n",
"The following code trains a convolutional neural network (CNN) to be robust\n",
2 changes: 1 addition & 1 deletion examples/cifar10_resnet.ipynb
@@ -42,7 +42,7 @@
"source": [
"# ResNet on CIFAR10 with Flax and Optax.\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/cifar10_resnet.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/cifar10_resnet.ipynb)\n",
"\n",
"This notebook trains a residual network (ResNet) with optax on CIFAR10 or CIFAR100."
]
4 changes: 2 additions & 2 deletions examples/contrib/differentially_private_sgd.ipynb
@@ -30,11 +30,11 @@
"source": [
"# Differentially private convolutional neural network on MNIST.\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/differentially_private_sgd.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/differentially_private_sgd.ipynb)\n",
"\n",
"A large portion of this code is forked from the differentially private SGD\n",
"example in the [JAX repo](\n",
"https://github.com/google/jax/blob/master/examples/differentially_private_sgd.py).\n",
"https://github.com/google/jax/blob/main/examples/differentially_private_sgd.py).\n",
"\n",
"[Differentially Private Stochastic Gradient Descent](https://arxiv.org/abs/1607.00133) requires clipping the per-example parameter\n",
"gradients, which is non-trivial to implement efficiently for convolutional\n",
2 changes: 1 addition & 1 deletion examples/contrib/reduce_on_plateau.ipynb
@@ -30,7 +30,7 @@
"source": [
"# MLP FASHION MNIST with reduce_on_plateu learning rate scheduler\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/contrib/reduce_on_plateau.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/contrib/reduce_on_plateau.ipynb)\n",
"\n",
"In this notebook, we explore the power of `reduce_on_plateau` scheduler, that reduces the learning rate when a metric has stopped improving. We will be solving a classification task by training a simple Multilayer Perceptron (MLP) on the fashion MNIST dataset."
]
2 changes: 1 addition & 1 deletion examples/contrib/sam.ipynb
@@ -8,7 +8,7 @@
"source": [
"# Sharpness-Aware Minimization (SAM)\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/sam.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/sam.ipynb)\n",
"\n",
"\n",
"This serves a testing ground for a simple SAM type optimizer implementation in JAX with existing apis."
4 changes: 2 additions & 2 deletions examples/flax_example.ipynb
@@ -30,9 +30,9 @@
"source": [
"# Simple NN with Flax.\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/flax_example.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/flax_example.ipynb)\n",
"\n",
"This notebook trains a simple one-layer NN with Optax and Flax. For more advanced applications of those two libraries, we recommend checking out the [`cifar10_resnet`](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/cifar10_resnet.ipynb) example."
"This notebook trains a simple one-layer NN with Optax and Flax. For more advanced applications of those two libraries, we recommend checking out the [`cifar10_resnet`](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/cifar10_resnet.ipynb) example."
]
},
{
2 changes: 1 addition & 1 deletion examples/gradient_accumulation.ipynb
@@ -8,7 +8,7 @@
"source": [
"# Gradient Accumulation\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/gradient_accumulation.ipynb)"
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/gradient_accumulation.ipynb)"
]
},
{
2 changes: 1 addition & 1 deletion examples/haiku_example.ipynb
@@ -30,7 +30,7 @@
"source": [
"# Simple NN with Haiku.\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/haiku_example.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/haiku_example.ipynb)\n",
"\n",
"This notebook trains a simple one-layer NN with Optax and Haiku."
]
2 changes: 1 addition & 1 deletion examples/lookahead_mnist.ipynb
@@ -30,7 +30,7 @@
"source": [
"# MNIST\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/lookahead_mnist.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/lookahead_mnist.ipynb)\n",
"\n",
"This notebook trains a simple Convolution Neural Network (CNN) for hand-written digit recognition (MNIST dataset) using the [Lookahead optimizer](https://arxiv.org/pdf/1907.08610v1.pdf)."
]
2 changes: 1 addition & 1 deletion examples/meta_learning.ipynb
@@ -8,7 +8,7 @@
"source": [
"# Meta-Learning\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/meta_learning.ipynb)\n"
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/meta_learning.ipynb)\n"
]
},
{
2 changes: 1 addition & 1 deletion examples/mlp_mnist.ipynb
@@ -30,7 +30,7 @@
"source": [
"# MLP MNIST\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/mlp_mnist.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/mlp_mnist.ipynb)\n",
"\n",
"This notebook trains a simple Multilayer Perceptron (MLP) classifier for hand-written digit recognition (MNIST dataset)."
]
2 changes: 1 addition & 1 deletion examples/ogda_example.ipynb
@@ -8,7 +8,7 @@
"source": [
"# Optimistic Gradient Descent in a Bilinear Min-Max Problem\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/ogda_example.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/ogda_example.ipynb)\n",
"\n"
]
},
2 changes: 1 addition & 1 deletion examples/quick_start.ipynb
@@ -8,7 +8,7 @@
"source": [
"# Quickstart with Optax.\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/master/examples/quick_start.ipynb)\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google-deepmind/optax/blob/main/examples/quick_start.ipynb)\n",
"\n",
"Optax is a simple optimization library for [Jax](https://jax.readthedocs.io/). The main object is the `GradientTransformation`, which can be chained\n",
"with other transformations to obtain the final update operation and the optimizer state.\n",
