- made the IprCluster target host work. This target host provides access to the intel ifort and intel mkl packages, which are available as environment modules.
- added a unit test that validates IprCluster (it only works on an ipr cluster node)
- also improved the unit tests to use the TMPDIR environment variable when present, to avoid permission issues when the tests are run by different users on the same system (see the sketch after this entry)
- made the benchmark display an error when starbench fails
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958]
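A minimal sketch of the TMPDIR handling mentioned above (the helper name and the per-user subdirectory are illustrative assumptions, not the actual iprbench test code):

```python
import os
import tempfile
from pathlib import Path


def get_test_temp_root() -> Path:
    """Return a per-user root directory for test artifacts (hypothetical helper).

    Honours the TMPDIR environment variable when it is set, so that two users
    running the tests on the same machine do not collide on a shared
    hard-coded path that they may not both have write access to.
    """
    tmp_root = Path(os.environ.get('TMPDIR', tempfile.gettempdir()))
    test_root = tmp_root / f'iprbench-tests-{os.getuid()}'  # per-user subdir (assumption)
    test_root.mkdir(parents=True, exist_ok=True)
    return test_root
```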
- the user can now choose the host type id. This mechanism allows the benchmarks to be run on ipr cluster nodes, taking advantage of environment modules to discover and activate packages (a sketch follows this entry).
- note: at the moment, the implementation of the host type `fr.univ-rennes.ipr.cluster-node` is not finished yet
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958]
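A hedged sketch of how a host type id could be mapped to a host implementation (class and registry names are illustrative, not the actual iprbench code):

```python
from abc import ABC, abstractmethod


class TargetHost(ABC):
    """A target host knows how to discover and activate packages."""

    @abstractmethod
    def activate_package(self, package_id: str, version: str) -> None:
        ...


class IprClusterNode(TargetHost):
    """Host type 'fr.univ-rennes.ipr.cluster-node': packages such as intel
    ifort and intel mkl are provided as environment modules."""

    def activate_package(self, package_id: str, version: str) -> None:
        # would eventually issue something like `module load <module>/<version>`;
        # not implemented yet, as noted above
        raise NotImplementedError


# hypothetical registry keyed by host type id; other host types (eg a plain
# debian host) would register their own class here
HOST_TYPES = {
    'fr.univ-rennes.ipr.cluster-node': IprClusterNode,
}


def create_host(host_type_id: str) -> TargetHost:
    return HOST_TYPES[host_type_id]()
```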
- added the `blas_library` parameter to `mamul1`; to support this:
- added support for the `default-<packagetype>` keyword as package_id, which makes the parameter system find the blas flavour of the host's default blas (see the sketch after this entry)
- made the retrieval of a package's default version more generic (it replaces gfortran-specific code).
warning: these discovery mechanisms have only been implemented for debian hosts at the moment.
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958]
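A sketch of how the `default-<packagetype>` keyword could be resolved on a debian host (the use of the alternatives system and the flavour names are assumptions, not the actual iprbench mechanism):

```python
import re
import subprocess


def resolve_default_blas() -> str:
    """Guess the blas flavour behind the debian default blas (illustrative).

    On debian, the active libblas.so.3 is selected through the alternatives
    system; its value is a path such as
    /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3, from which the
    flavour can be inferred.
    """
    output = subprocess.run(
        ['update-alternatives', '--query', 'libblas.so.3-x86_64-linux-gnu'],
        capture_output=True, text=True, check=True).stdout
    match = re.search(r'^Value:\s*(\S+)', output, re.MULTILINE)
    assert match is not None, 'unexpected update-alternatives output'
    value = match.group(1)
    for flavour in ('openblas', 'atlas', 'blis'):
        if flavour in value:
            return flavour
    return 'libblas'  # the reference (netlib) implementation


def resolve_package_id(package_id: str, package_type: str) -> str:
    """Turn 'default-<packagetype>' into a concrete package id (illustrative)."""
    if package_id == f'default-{package_type}' and package_type == 'blas':
        return resolve_default_blas()
    return package_id
```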
- split tests into 3:
- test_benchmarks.py: tests all benchmarks with the most basic resultsdb backend
- test_resultsdb.py: tests all resultsdb backends with the most basic benchmark
- test_clusterbench.py: tests clusterbench_submit
- made the tests more robust (the results folder is deleted if it already exists; see the sketch after this entry)
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958]
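The results folder cleanup mentioned above could look like this sketch (the helper is illustrative):

```python
import shutil
from pathlib import Path


def fresh_results_dir(results_dir: Path) -> Path:
    """Make sure the benchmark results directory starts empty.

    A directory left over from a previous (possibly aborted) test run would
    otherwise cause the test to fail or to silently reuse stale results.
    """
    if results_dir.exists():
        shutil.rmtree(results_dir)
    results_dir.mkdir(parents=True)
    return results_dir
```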
note: as the production benchmark results are currently stored at IPR on a database server accessed via ssh, this allows iprbench to store its results in the IPR benchmark database (a sketch follows this entry).
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958]
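A minimal sketch of how a result could be pushed to a database server that is only reachable via ssh (the remote host, database name and client command are placeholders, not the actual IPR setup):

```python
import subprocess


def store_result_over_ssh(sql_statement: str) -> None:
    """Run one SQL statement on the remote benchmark database through ssh.

    The statement is sent on stdin so that it never appears on a remote
    command line. Host, database and client command are placeholders.
    """
    subprocess.run(
        ['ssh', 'benchmarkdb.example.org', 'mysql', 'iprbenchmarks'],
        input=sql_statement, text=True, check=True)


# example usage (table and columns are illustrative):
# store_result_over_ssh("INSERT INTO mamul1 (matrix_size, duration) VALUES (1024, 1.23);")
```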
This decoupling allows benchmarks to be written as modules that can be used in various situations (from a benchmark job or directly by a user), and this design also allows automatic registration of the benchmark results in a user-selectable form (sql database, stdout, etc.); see the sketch after this entry.
- separated `hibenchonphysix.py` into `clusterbench.py` (a tool to run a benchmark on a cluster) and `hibench.py` (the hibridon benchmark module), so that `clusterbench.py` no longer has any knowledge of hibridon.
- there are currently 2 ways to run a benchmark:
1. as a simple run through the `clusterbench-run` command (which will eventually be renamed `iprbench-run`, since it might be completely independent of the concept of a cluster)
2. as cluster jobs through `clusterbench-submit` command
- added unit test
- added another benchmark, `mamul1`, which is used in the unit tests because it has 2 benefits over the `hibench` benchmark:
1. it's standalone (no external resources needed)
2. it's quicker to execute
note: this refactoring work is not complete yet, but the proof of concept is complete (the 2 unit tests pass):
- still need to provide the user with a way to switch between IprCluster and DummyCluster (which is only intended to be used for testing clusterbench)
- still need to support running multiple configs of the same benchmark in one run (as hibenchonphysix did)
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958] and [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3372]
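A sketch of the decoupling described in this entry: a benchmark module only knows how to run itself and hands its measurements to a pluggable results backend (all class names are illustrative, not the actual iprbench API):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class IResultsDb(ABC):
    """Destination for benchmark measurements (sql database, stdout, ...)."""

    @abstractmethod
    def add_measurement(self, benchmark_id: str, measurement: Dict[str, Any]) -> None:
        ...


class StdoutResultsDb(IResultsDb):
    """The most basic backend: print each measurement (handy in unit tests)."""

    def add_measurement(self, benchmark_id: str, measurement: Dict[str, Any]) -> None:
        print(f'{benchmark_id}: {measurement}')


class IBenchmark(ABC):
    """A benchmark knows how to run itself; it knows nothing about clusters
    or about where its results end up."""

    @abstractmethod
    def run(self, params: Dict[str, Any], results_db: IResultsDb) -> None:
        ...


class MaMul1(IBenchmark):
    """Standalone matrix multiplication benchmark: quick and self-contained."""

    def run(self, params: Dict[str, Any], results_db: IResultsDb) -> None:
        import time
        start = time.perf_counter()
        # ... run the matrix multiplication of size params['matrix_size'] here ...
        results_db.add_measurement(
            'mamul1', {'matrix_size': params['matrix_size'],
                       'duration': time.perf_counter() - start})


# a simple run, roughly what `clusterbench-run` would do outside of any cluster job:
MaMul1().run({'matrix_size': 1024}, StdoutResultsDb())
```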