A library to communicate with a slurm scheduler/engine on HPC
```
npm install nslurm
```
-------------------------
#### JOB ID
A job is defined by its input(s) and the jobID.json file. The jobID.json file looks like this:
```
{
  'script' : '/path/to/a/777_coreScript.sh',
  'exportVar' : {
    'flags' : ' --option1 --option2 ',
    'moduleScript' : '/path/to/this/script'
  },
  'modules' : ['blast'],
  'tagTask' : 'blast' // 'blast' is just an example. It can be 'clustal', 'naccess', etc.
}
```
where:
- `script` is a path to the `_coreScript.sh` (see the JOB CACHE CONTENT part)
- `exportVar` is a JSON
- `modules` is an array
- `tagTask` is a string (like 'blast')
#### JOB VARIABLES (into job.js)
- engineHeader = header of the `.sbatch` file, containing all the variables to configure the scheduler
- submitBin = the `sbatch` binary used to submit the `.sbatch` file
- cmd = a command (like `echo "toto"` for example)
- script = path to the `_coreScript.sh` (to be copied into the cache directory)
- exportVar = variables to export, like `{'myVar': 'titi', 'aFile': '/path/'}`
- inputs = inputs of the job, like `{'nameInput1': 'contentInput1', 'nameInput2': 'contentInput2'}`
- tagTask = name of the task, used as prefix of the `_coreScript.sh`
- emulated
- port
- adress
- workDir
- namespace
- cwd
- cwdClone
- ttl (deprecated?)
- ERR_jokers
- MIA_jokers
- modules
- debugBool
- inputSymbols
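A job settings object using the variable names listed above might look like the sketch below. All values are illustrative assumptions, not nslurm defaults:

```javascript
// Illustrative job settings object using the variable names listed
// above. Every value here is an example assumption, NOT an nslurm
// default; it only shows how the variables fit together.
const jobOpt = {
  engineHeader: '#SBATCH -N 1\n#SBATCH -c 1\n', // header of the .sbatch file
  submitBin: '/usr/bin/sbatch',                 // sbatch binary (assumed path)
  cmd: 'echo "toto"',
  script: '/path/to/a/777_coreScript.sh',
  exportVar: { myVar: 'titi', aFile: '/path/' },
  inputs: { nameInput1: 'contentInput1', nameInput2: 'contentInput2' },
  tagTask: 'blast',
  emulated: false,     // assumed flag meaning: run locally, without slurm
  port: 2222,          // example value
  adress: 'localhost', // spelled "adress" as in job.js
  workDir: '/tmp',
  modules: ['blast'],
  debugBool: false
};
```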
#### JOB FUNCTIONS (into job.js)
- start()
- getSerialIdentity() = creates the jobID.json
- submit() = creates a process to execute the `sbatch` command
#### JOB CACHE CONTENT
When a job is created, the JM creates a directory where all the files related to this job will be written. The minimal content (where 777 is a uuid):
- 777_coreScript.sh
- 777.batch
- jobID.json
- 777.err
- 777.out
- input
    - myInput1.inp
    - myInput2.inp

Here, the job needs two input files, named myInput1.inp and myInput2.inp.
>Note
>These two names (without the extension) are declared as variables in the 777.batch file, to indicate their paths. Thus, the paths to the input files of the job are known in `777.batch`.