
Parallelization in Jupyter Notebooks

This document shows how to use IPython Clusters, which allow you to use parallelization in a Jupyter IPython notebook.

We'll start by showing how to use IPython Clusters to exploit the parallelization capabilities of the `ipyparallel` package, using the default parallel profile on a single node. We also give some information on how to create your own parallel profile to allow more customization, though we don't expect most users to need this. At the end of this document, we also provide some brief comments on how you could modify the setup to do other types of parallelization in your notebook.

Please note that the “profiles” discussed here are cluster profiles for IPython Clusters, and are distinct from the job profiles discussed in the basic Jupyter documentation.

Using IPython Clusters

One-time setup

Please follow these steps to set up an IPython Cluster:

First, log in to Savio via a terminal (or, in Open OnDemand, start a "Terminal" session via the "Clusters -> BRC Shell Access" dropdown menu), and enter the following commands:

module load python
ipcluster nbextension enable --user

Now, after starting your Jupyter server, on the main Jupyter page you should see that the name of the "Clusters" tab has changed to "IPython Clusters". (Note: you may need to refresh your browser page to see this change.) If the change still doesn't appear, stop the current Jupyter server using the 'Delete' button under the "My Interactive Sessions" dropdown and start a new one.

Using an IPython Cluster in your notebook

To use an IPython Cluster, start a Jupyter server via Open OnDemand, specifying your desired partition (don't use the standalone Open OnDemand server as that only provides a single compute core).

When you click the "IPython Clusters" tab of your Jupyter server, you will see a "default" cluster profile that allows you to start a local IPython Cluster with a user-specified number of engines, up to the number of cores you requested when you started your Jupyter server.


While it may be possible to run an IPython Cluster across cores on multiple nodes, this documentation is for running a Cluster on cores on a single node. Please contact us if you'd like to use multiple nodes.

Now go to a running notebook (or start one). You can get started with the following Python code, using the 'rc' object to interact with your cluster.

import ipyparallel as ipp
rc = ipp.Client(profile='default', cluster_id='')
rc.ids # a list of engine IDs; its length should equal the number of engines you requested

To begin working with your new IPython Cluster, please see the ipyparallel Documentation, our basic demo of ipyparallel, or the information from our Intermediate / Parallel training.


Note that we don't recommend using a Python 2 notebook: the IPython workers will run Python 3 unless you explicitly set up Python 2 workers, which requires creating your own cluster profile, as discussed next.

Advanced usage: creating your own cluster profile

In unusual cases you might need to create your own cluster profile to manage the submission of the underlying Slurm job in which your IPython Cluster will run and to which your Jupyter notebook will connect. In that case we expect that you would run your Jupyter notebook on the Open OnDemand standalone server rather than on a compute node. The following information is provided to give an idea of what you might need to do, but please feel free to contact us to discuss your needs further.

Please follow these steps:

  1. Log in to Savio via a terminal (or, in Open OnDemand, start a "Terminal" session via the "Clusters -> BRC Shell Access" dropdown menu), and enter the following commands:
    module load python
    ipython profile create --parallel --profile=myNewProfile

    (The cluster profile name can be anything; “myNewProfile” is used as an example here, and in several of the steps below.)

    For Python 2.7, enter the following instead:
    module load python/2.7
    ipython profile create --parallel --profile=myNewProfile
  2. Within the same terminal, change into the new profile's directory:
    cd $HOME/.ipython/profile_myNewProfile

    (The cluster profile name following the underscore must exactly match the one created in step 1.)
  3. Add the following contents to the end of the "ipcontroller_config.py" file:
    import netifaces
    c.IPControllerApp.location = netifaces.ifaddresses('eth0')[netifaces.AF_INET][0]['addr']
    c.HubFactory.ip = '*'
  4. Add the following contents to the end of the "ipcluster_config.py" file:
    #import uuid
    #c.BaseParallelApplication.cluster_id = str(uuid.uuid4())
    c.IPClusterStart.controller_launcher_class = 'SlurmControllerLauncher'
    c.IPClusterEngines.engine_launcher_class = 'SlurmEngineSetLauncher'
    c.IPClusterEngines.n = 12
    c.SlurmLauncher.queue = 'savio2'
    c.SlurmLauncher.account = 'fc_xyz'
    c.SlurmLauncher.qos = 'savio_normal'
    c.SlurmLauncher.timelimit = '8:0:0'
    #c.SlurmLauncher.options = '--export=ALL --mem=10g'
    c.SlurmControllerLauncher.batch_template = '''#!/bin/bash -l
    #SBATCH --job-name=ipcontroller-fake
    #SBATCH --partition={queue}
    #SBATCH --account={account}
    #SBATCH --qos={qos}
    #SBATCH --ntasks=1
    #SBATCH --time={timelimit}
    '''
    c.SlurmEngineSetLauncher.batch_template = '''#!/bin/bash -l
    #SBATCH --job-name=ipcluster-{cluster_id}
    #SBATCH --partition={queue}
    #SBATCH --account={account}
    #SBATCH --qos={qos}
    #SBATCH --ntasks={n}
    #SBATCH --time={timelimit}
    module load python
    ipcontroller --profile-dir={profile_dir} --cluster-id="{cluster_id}" & sleep 10
    srun ipengine --profile-dir={profile_dir} --cluster-id="{cluster_id}"
    '''

    Note that the commented lines above (other than the #SBATCH lines) are optional; you can choose to uncomment and modify them. All other lines, including the #SBATCH lines, are required.

    In particular, you will need to examine and likely change the following four entries, to specify your Savio scheduler account name (e.g., 'fc_something', 'co_something', ...), the partition (called 'queue' by the Python SlurmLauncher object) and QoS on which you want to launch the cluster, and the wall clock time for which the cluster will be active:

    c.SlurmLauncher.account =
    c.SlurmLauncher.queue =
    c.SlurmLauncher.qos =
    c.SlurmLauncher.timelimit =

    For Python 2.7 workers, simply load the python/2.7 module rather than the python module in the engine batch template; the ipcontroller and srun lines stay the same.

  5. After adding and configuring these settings, go back to the Jupyter "IPython Clusters" tab, where you can start a new IPython Cluster using your newly created cluster profile with a selected number of engines.
  6. Once your cluster is started, start (or go to an existing) Jupyter notebook and connect as in the basic usage section above, making sure to provide the correct ‘profile’ and ‘cluster_id’ arguments when calling ‘Client()’. ‘profile’ should be the name chosen in step 1. ‘cluster_id’ should be the value you set c.BaseParallelApplication.cluster_id to in step 4 of this section; if you did not set it (as is the case above, where it is commented out), then ‘cluster_id’ should be an empty string, as in the basic usage section. For example:
    import ipyparallel as ipp
    rc = ipp.Client(profile='myNewProfile', cluster_id='')

    Finally, note that it makes sense to use the standalone Open OnDemand server (discussed in the basic Jupyter documentation) when starting the notebook from which you control the IPython Cluster, provided all your heavy computation will occur on the IPython Cluster workers.

  7. You should be able to monitor the Slurm jobs running your cluster via the squeue command (e.g., squeue -u $USER), looking for the Slurm job names set in the "ipcluster_config.py" batch templates.

Parallel workflows using other approaches

The customization done to create your own cluster profile can be readily modified for other parallel workflows. Since there are many workflows one might set up, we’ll simply point out the parts of the instructions that you’ll need to modify.

First, in step 4 you'll want to modify the Slurm parameters in the "ipcluster_config.py" file to fit your needs. Second, you'll want to replace the ipcontroller and srun ipengine commands shown in step 4 with the commands that need to run in the Slurm job script to set up your parallel context. Finally, of course, the code you use in your Jupyter notebook (step 6) will change.