# ESS Beam Physics Python Tools

This project provides useful Python tools for executing TraceWin, and reading/writing TraceWin data files.

The files are written primarily for usage at European Spallation Source, but might be useful elsewhere.

You are free to use this code, but no warranty whatsoever is given.

To install the package, run the command below. It is assumed that you have git and pip installed on your system.
```
pip install git+https://gitlab01.esss.lu.se/ess-bp/ess-python-tools.git -U
```

## Usage

Here we will briefly explain how to use the library.

### Library

To be done.

### Error study script

First, use the GUI to create an error study folder in your output directory (see the `Copy set of data` functionality). You may copy as many jobs as you want directly in the GUI, but this is very slow; we recommend instead defining the number of runs in the submit command described below. The submit command uses hard links where possible, which also saves disk space.
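As a rough sketch of the hard-link idea (not the packaged implementation; the function names `link_or_copy` and `populate_run_dir` are illustrative): each run directory can share its input files with the source via `os.link`, falling back to a real copy when the filesystem does not allow hard links.

```
import os
import shutil

def link_or_copy(src: str, dst: str) -> None:
    """Hard-link src to dst when possible, copy otherwise."""
    try:
        os.link(src, dst)  # shares the data blocks: no extra disk space
    except OSError:
        shutil.copy2(src, dst)  # cross-device link or unsupported FS

def populate_run_dir(source_dir: str, run_dir: str) -> None:
    """Mirror the input files of one error-study run into run_dir."""
    os.makedirs(run_dir, exist_ok=True)
    for name in os.listdir(source_dir):
        link_or_copy(os.path.join(source_dir, name),
                     os.path.join(run_dir, name))
```

Hard links only work within one filesystem, hence the copy fallback.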

You then send the jobs to the queue manager using the command `tracewin_errorstudy`. The script's help output covers most of what you need to know:
```
usage: tracewin_errorstudy [-h] [-s SEED] [-c CALC_DIR] [-n NUM_JOBS]
                           [-t STUDY] [-m MULTI] [-l LATTICE] [-q SETTINGS]
                           [-p PRIORITY]

Simple setup script to run TraceWin on HT Condor.. Default values in square
brackets

optional arguments:
  -h, --help   show this help message and exit
  -s SEED      Define seed [None]
  -c CALC_DIR  Path to calculation folder [temp/]
  -n NUM_JOBS  Number of jobs [Count number of copied runs in TW]
  -t STUDY     Statistical study number [1]
  -m MULTI     Multi-dynamic study [1]
  -l LATTICE   The lattice file for the project [lattice.dat]
  -q SETTINGS  The settings for the project template (yaml) [settings.yml]
  -p PRIORITY  Job priority
```

A few things are worth mentioning:

 - We provide the option of having the lattice file as a template, with settings specified in a YAML file. The template is assumed to have the ending .tmp (so if the default lattice.dat is used, the template file lattice.dat.tmp is expected to exist). Variables are then replaced as per [str.format](https://docs.python.org/2/library/stdtypes.html#str.format).
 - The default Condor script templates should suffice for most users. However, we do provide an option to define your own template files, again filled in according to Python's str.format. If any of these files exist in the working directory, they are used in place of the built-in templates.
    - For standard jobs, the templates are *head.tmp* and *queue.tmp*
    - For multi-jobs, the templates are *head.multi.tmp* and *queue.multi.tmp*
 - We provide the option to run a multi-dynamic study, which is perhaps a bit confusing. In this mode, it is assumed that you have already run a number of simulations. For each new job, the seed is changed but the same error table is read for dynamical errors. This makes it easier to see whether performance is limited by static or by dynamic tolerances.
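The template substitution described above can be sketched as follows (an illustrative example, not the packaged code; `render_template` and the `quad_gradient` placeholder are made-up names). In the real script the settings mapping would come from parsing settings.yml; here a plain dict stands in for it.

```
def render_template(template_path: str, output_path: str,
                    settings: dict) -> None:
    """Fill a .tmp template via str.format, e.g.
    lattice.dat.tmp -> lattice.dat."""
    with open(template_path) as f:
        template = f.read()
    with open(output_path, "w") as f:
        # {name} placeholders are replaced by the settings values
        f.write(template.format(**settings))
```

So a template line such as `QUAD 200 {quad_gradient} 30` would have `{quad_gradient}` replaced by the corresponding value from the settings file.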

#### Example 1
Standard simulation of 100 machines in the folder `calc`:
```
tracewin_errorstudy -n 100 -c calc
```

#### Example 2
Simulate 50 machines, then for each of those run 20 machines with different static errors but the same dynamic errors (1000 machines in total):
```
tracewin_errorstudy -n 50 -c calc
# wait until simulations have finished (check condor_q, condor_status, condor_history)
tracewin_errorstudy -n 50 -m 20 -c calc
```