Parallelization in Jupyter Notebooks
Using ipyparallel
This document shows how to use the ipyparallel
package to run code in parallel within a Jupyter Python notebook.
First, start a Jupyter server via Open OnDemand using the "Jupyter Server - compute via Slurm using Savio partitions" app. (You shouldn't use the standalone Open OnDemand server as that only provides a single compute core.)
To run code in parallel across the cores of a single node, you can start the workers and run your parallel code entirely within your notebook, as described here.
If you'd like to run workers in parallel across multiple nodes, this may be possible; feel free to contact us to discuss further. Alternatively, you can run your code non-interactively outside of a notebook, as discussed here.
Former Usage of IPython Clusters
In the past, we directed users to IPython Clusters for parallelization in a Jupyter Python notebook. We no longer recommend this approach; in fact, the "IPython Clusters" tab no longer works in notebooks run via Open OnDemand on Savio.