Dask Jobqueue
amltk.scheduling.executors.dask_jobqueue

Dask Jobqueue executors.

These are thin wrappers around the dask_jobqueue cluster classes. They provide a consistent interface across the different jobqueue implementations and give access to each cluster's executor.

See the dask_jobqueue documentation for details on the underlying cluster classes.
DaskJobqueueExecutor

DaskJobqueueExecutor(
    cluster: _JQC,
    *,
    n_workers: int,
    adaptive: bool = False,
    submit_command: str | None = None,
    cancel_command: str | None = None,
)

Bases: Executor, Generic[_JQC]

A concurrent.futures Executor that executes tasks on a dask_jobqueue cluster.

Implementations: prefer the class methods to create an instance of this class.
| PARAMETER | DESCRIPTION | TYPE |
|---|---|---|
| `cluster` | The implementation of a `dask_jobqueue.JobQueueCluster`. | `_JQC` |
| `n_workers` | The maximum number of workers to adapt to on the cluster. | `int` |
| `adaptive` | Whether to use the cluster's adaptive scaling or allocate a fixed number of workers. If `True`, the cluster's `adapt()` method (e.g. `dask_jobqueue.SLURMCluster.adapt`) is used to dynamically scale up to the number of workers specified. | `bool` |
| `submit_command` | Overwrites the submission command if necessary. | `str \| None` |
| `cancel_command` | Overwrites the cancel command if necessary. | `str \| None` |
Source code in src/amltk/scheduling/executors/dask_jobqueue.py
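Because DaskJobqueueExecutor implements the standard concurrent.futures.Executor protocol, the calling pattern is the same as for any executor. A minimal sketch of that protocol, using the stdlib ThreadPoolExecutor as a stand-in (a DaskJobqueueExecutor itself needs a running jobqueue system behind it):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def square(x: int) -> int:
    return x * x


# Any Executor (including a DaskJobqueueExecutor) supports submit()/result().
with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(square, i) for i in range(4)]
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # [0, 1, 4, 9]
```

The same `submit` calls work unchanged once the executor is backed by a cluster.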
HTCondor (classmethod)

HTCondor(
    *,
    n_workers: int,
    adaptive: bool = False,
    submit_command: str | None = None,
    cancel_command: str | None = None,
    **kwargs: Any,
) -> DaskJobqueueExecutor[HTCondorCluster]

Create a DaskJobqueueExecutor for an HTCondor cluster.

See the dask_jobqueue.HTCondorCluster documentation for more information on the available keyword arguments.
LSF (classmethod)

LSF(
    *,
    n_workers: int,
    adaptive: bool = False,
    submit_command: str | None = None,
    cancel_command: str | None = None,
    **kwargs: Any,
) -> DaskJobqueueExecutor[LSFCluster]

Create a DaskJobqueueExecutor for an LSF cluster.

See the dask_jobqueue.LSFCluster documentation for more information on the available keyword arguments.
Moab (classmethod)

Moab(
    *,
    n_workers: int,
    adaptive: bool = False,
    submit_command: str | None = None,
    cancel_command: str | None = None,
    **kwargs: Any,
) -> DaskJobqueueExecutor[MoabCluster]

Create a DaskJobqueueExecutor for a Moab cluster.

See the dask_jobqueue.MoabCluster documentation for more information on the available keyword arguments.
OAR (classmethod)

OAR(
    *,
    n_workers: int,
    adaptive: bool = False,
    submit_command: str | None = None,
    cancel_command: str | None = None,
    **kwargs: Any,
) -> DaskJobqueueExecutor[OARCluster]

Create a DaskJobqueueExecutor for an OAR cluster.

See the dask_jobqueue.OARCluster documentation for more information on the available keyword arguments.
PBS (classmethod)

PBS(
    *,
    n_workers: int,
    adaptive: bool = False,
    submit_command: str | None = None,
    cancel_command: str | None = None,
    **kwargs: Any,
) -> DaskJobqueueExecutor[PBSCluster]

Create a DaskJobqueueExecutor for a PBS cluster.

See the dask_jobqueue.PBSCluster documentation for more information on the available keyword arguments.
SGE (classmethod)

SGE(
    *,
    n_workers: int,
    adaptive: bool = False,
    submit_command: str | None = None,
    cancel_command: str | None = None,
    **kwargs: Any,
) -> DaskJobqueueExecutor[SGECluster]

Create a DaskJobqueueExecutor for an SGE cluster.

See the dask_jobqueue.SGECluster documentation for more information on the available keyword arguments.
SLURM (classmethod)

SLURM(
    *,
    n_workers: int,
    adaptive: bool = False,
    submit_command: str | None = None,
    cancel_command: str | None = None,
    **kwargs: Any,
) -> DaskJobqueueExecutor[SLURMCluster]

Create a DaskJobqueueExecutor for a SLURM cluster.

See the dask_jobqueue.SLURMCluster documentation for more information on the available keyword arguments.
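As an illustration, a SLURM-backed executor might be constructed as below. This is a sketch, not executed here: it assumes amltk and dask_jobqueue are installed and that a SLURM scheduler is available; the partition name, core count, and memory values are illustrative assumptions, not defaults.

```python
def make_slurm_executor(n_workers: int = 4):
    """Sketch: build an executor for a SLURM cluster.

    Requires amltk and dask_jobqueue installed, plus a reachable SLURM
    scheduler to actually run jobs. The queue/cores/memory values below
    are hypothetical placeholders.
    """
    from amltk.scheduling.executors.dask_jobqueue import DaskJobqueueExecutor

    return DaskJobqueueExecutor.SLURM(
        n_workers=n_workers,
        adaptive=True,              # scale up to n_workers via the cluster's adapt()
        # Remaining kwargs are forwarded to dask_jobqueue.SLURMCluster:
        queue="partition-name",     # hypothetical SLURM partition
        cores=1,
        memory="2GB",
    )
```

The `queue`, `cores`, and `memory` keywords come from `dask_jobqueue.SLURMCluster`; consult its documentation for the full set.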
from_str (classmethod)

from_str(
    name: DJQ_NAMES,
    *,
    n_workers: int,
    adaptive: bool = False,
    submit_command: str | None = None,
    cancel_command: str | None = None,
    **kwargs: Any,
) -> DaskJobqueueExecutor

Create a DaskJobqueueExecutor using a string lookup.
| PARAMETER | DESCRIPTION | TYPE |
|---|---|---|
| `name` | The name of the cluster to create. Must be one of `["slurm", "htcondor", "lsf", "oar", "pbs", "sge", "moab"]`. | `DJQ_NAMES` |
| `n_workers` | The maximum number of workers to adapt to on the cluster. | `int` |
| `adaptive` | Whether to use the cluster's adaptive scaling or allocate a fixed number of workers. If `True`, the cluster's `adapt()` method (e.g. `dask_jobqueue.SLURMCluster.adapt`) is used to dynamically scale up to the number of workers specified. | `bool` |
| `submit_command` | Overwrites the submit command of workers if necessary. | `str \| None` |
| `cancel_command` | Overwrites the cancel command of workers if necessary. | `str \| None` |
| `kwargs` | Keyword arguments passed to the cluster constructor. | `Any` |
| RAISES | DESCRIPTION |
|---|---|
| `KeyError` | If `name` is not one of the supported cluster names. |

| RETURNS | DESCRIPTION |
|---|---|
| `DaskJobqueueExecutor` | A DaskJobqueueExecutor for the requested cluster type. |
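The string lookup is convenient when the cluster type comes from configuration. A sketch, again not executed here (it assumes amltk and dask_jobqueue are installed; the helper name is hypothetical):

```python
def make_executor_from_name(name: str, n_workers: int = 2):
    """Sketch: string-based cluster lookup, e.g. driven by a config file.

    `name` must be one of "slurm", "htcondor", "lsf", "oar", "pbs", "sge",
    or "moab"; any other value raises KeyError. Requires amltk and
    dask_jobqueue to be installed.
    """
    from amltk.scheduling.executors.dask_jobqueue import DaskJobqueueExecutor

    return DaskJobqueueExecutor.from_str(
        name,
        n_workers=n_workers,
        adaptive=False,  # allocate all n_workers up front rather than adapting
    )
```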
map

map(
    fn: Callable[..., R],
    *iterables: Iterable,
    timeout: float | None = None,
    chunksize: int = 1,
) -> Iterator[R]

See concurrent.futures.Executor.map.
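Since this delegates to the standard Executor.map semantics, results are yielded in the order of the inputs, not in completion order. A small stdlib sketch of that behavior, using ThreadPoolExecutor as a stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

# Executor.map applies fn across the iterables and yields results
# in input order, regardless of which task finishes first.
with ThreadPoolExecutor(max_workers=2) as executor:
    doubled = list(executor.map(lambda x: 2 * x, [1, 2, 3]))

print(doubled)  # [2, 4, 6]
```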
shutdown

See concurrent.futures.Executor.shutdown.

submit

See concurrent.futures.Executor.submit.