Launch API
Defines how candidate programs are executed once generated.
LocalJobConfig
Local execution, optionally with a sourced environment or an explicit Conda environment.

LocalJobConfig (dataclass)

```python
LocalJobConfig(
    eval_program_path: Optional[str] = "evaluate.py",
    extra_cmd_args: Dict[str, Any] = dict(),
    time: Optional[str] = None,
    conda_env: Optional[str] = None,
    activate_script: Optional[str] = None,
    python_executable: Optional[str] = None,
    numeric_threads_per_job: Optional[int] = None,
)
```

Bases: JobConfig

Configuration for local jobs.
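A minimal usage sketch. The import path `shinka.launch` and the argument values (`my_env`, the `seed` flag) are assumptions for illustration; adjust them to your installation and evaluator:

```python
# Hypothetical import path -- adjust to wherever the launch
# configs live in your installation.
from shinka.launch import LocalJobConfig

# Run the evaluator locally inside an existing Conda environment,
# forwarding an extra command-line argument to the eval program.
job_config = LocalJobConfig(
    eval_program_path="evaluate.py",
    conda_env="my_env",          # assumed environment name
    extra_cmd_args={"seed": 0},  # assumed evaluator flag
)
```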
SlurmCondaJobConfig
SLURM-backed execution with a Conda environment or a sourced activation script.

SlurmCondaJobConfig (dataclass)

```python
SlurmCondaJobConfig(
    eval_program_path: Optional[str] = "evaluate.py",
    extra_cmd_args: Dict[str, Any] = dict(),
    conda_env: str = "",
    activate_script: Optional[str] = None,
    modules: Optional[List[str]] = None,
    partition: str = "gpu",
    time: str = "01:00:00",
    cpus: int = 1,
    gpus: int = 1,
    mem: Optional[str] = "8G",
)
```

Bases: JobConfig

Configuration for SLURM jobs using a Conda environment.
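A hedged sketch of a SLURM + Conda configuration. The import path, environment name, and module name are assumptions; the resource fields map directly to the dataclass defaults shown above:

```python
# Hypothetical import path; adjust to your installation.
from shinka.launch import SlurmCondaJobConfig

# Request a 2-hour, 1-GPU SLURM job that activates a Conda
# environment and loads an environment module before evaluating.
job_config = SlurmCondaJobConfig(
    conda_env="my_env",        # assumed environment name
    modules=["cuda/12.1"],     # assumed module on your cluster
    partition="gpu",
    time="02:00:00",
    cpus=4,
    gpus=1,
    mem="16G",
)
```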
SlurmDockerJobConfig
SLURM-backed execution where the evaluator runs in a container.

SlurmDockerJobConfig (dataclass)

```python
SlurmDockerJobConfig(
    eval_program_path: Optional[str] = "evaluate.py",
    extra_cmd_args: Dict[str, Any] = dict(),
    image: str = "ubuntu:latest",
    image_tar_path: Optional[str] = None,
    docker_flags: str = "",
    partition: str = "gpu",
    time: str = "01:00:00",
    cpus: int = 1,
    gpus: int = 1,
    mem: Optional[str] = "8G",
)
```

Bases: JobConfig

Configuration for SLURM jobs using Docker.
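A hedged sketch of a containerized SLURM configuration. The import path, image name, and Docker flags are illustrative assumptions, not values prescribed by the library:

```python
# Hypothetical import path; adjust to your installation.
from shinka.launch import SlurmDockerJobConfig

# Run the evaluator inside a container on a SLURM GPU node.
# image_tar_path (left unset here) can point at a pre-exported
# image tarball for clusters without registry access.
job_config = SlurmDockerJobConfig(
    image="nvcr.io/nvidia/pytorch:24.01-py3",  # assumed image
    docker_flags="--shm-size=8g",              # assumed flags
    partition="gpu",
    time="04:00:00",
    cpus=8,
    gpus=1,
    mem="32G",
)
```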
JobScheduler
Lower-level scheduler abstraction for submitting and monitoring evaluation jobs across local and SLURM modes.