- added the `blas_library` parameter to `mamul1`; to support this:
- added support for the `default-<packagetype>` keyword as `package_id`, which makes the parameter system resolve which blas flavour provides the default blas (see the sketch after this list).
- made the retrieval of a package's default version more generic (this replaces gfortran-specific code).
warning: these discovery mechanisms have only been implemented for Debian hosts at the moment.
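For illustration, resolving such a keyword on a Debian host could look like the sketch below; the function name and the queried alternatives name are assumptions, not the actual implementation:

```python
# hypothetical sketch of `default-<packagetype>` resolution; the function name,
# the alternative name and the flavour mapping are assumptions, not actual code
import re
import subprocess

def resolve_default_package_id(package_id: str) -> str:
    """Turn e.g. 'default-blas' into the concrete blas flavour of a Debian host."""
    match = re.match(r'default-(?P<packagetype>\w+)$', package_id)
    if match is None:
        return package_id  # already a concrete package id
    if match['packagetype'] == 'blas':
        # on Debian, the default blas is managed by the alternatives system;
        # the alternative name below may vary between releases/architectures
        query = subprocess.run(
            ['update-alternatives', '--query', 'libblas.so.3-x86_64-linux-gnu'],
            capture_output=True, text=True, check=True)
        for line in query.stdout.splitlines():
            if line.startswith('Value:'):
                path = line.split(':', 1)[1].strip()
                # simplified flavour mapping, for illustration only
                return 'openblas' if 'openblas' in path else 'blas'
    raise NotImplementedError(f'no default resolution for {package_id}')
```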
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958]
- split tests into 3:
- `test_benchmarks.py`: tests all benchmarks with the most basic resultsdb backend
- `test_resultsdb`: tests all resultsdb backends with the most basic benchmark
- `test_clusterbench`: tests `clusterbench_submit`
- made the tests more robust (each test now deletes its results folder if it already exists; see the sketch after this list)
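A minimal sketch of that cleanup, with a made-up helper name (the actual tests may organise it differently):

```python
import shutil
from pathlib import Path

def fresh_results_dir(results_dir: Path) -> Path:
    """Give a test a clean results folder, removing any leftover
    from a previous (possibly interrupted) test run."""
    if results_dir.exists():
        shutil.rmtree(results_dir)
    results_dir.mkdir(parents=True)
    return results_dir
```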
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958]
note: as the production benchmark results are currently stored at IPR on a database server accessed via ssh, this allows iprbench to store its results in the IPR benchmark database (a rough sketch follows).
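For illustration only: the class and method names below are assumptions, not the actual iprbench API; the sketch just shows the ssh-tunnel idea, here using the third-party `sshtunnel` package:

```python
# hypothetical sketch: the class name, its methods and the port are
# assumptions, not the actual iprbench API
from sshtunnel import SSHTunnelForwarder

class SshTunnelResultsDb:
    """resultsdb backend reaching the IPR database server through ssh"""

    def __init__(self, ssh_host: str, remote_db_port: int = 3306):
        self.tunnel = SSHTunnelForwarder(
            ssh_host, remote_bind_address=('127.0.0.1', remote_db_port))

    def store(self, measurements: dict) -> None:
        with self.tunnel:  # opens the ssh tunnel for the duration of the block
            # a db client can now connect to ('127.0.0.1', self.tunnel.local_bind_port)
            # as if the database server were local
            ...  # hand the measurements to the database client here
```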
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958]
This decoupling makes it possible to write benchmarks as modules that can be used in various situations (from a benchmark job or directly by a user), and this design will also allow automatic registration of the benchmark results in a user-selectable form (sql database, stdout, etc.); the idea is sketched right below.
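A rough picture of the decoupled interfaces (the names are assumptions, not the actual iprbench code):

```python
from abc import ABC, abstractmethod

class IBenchmark(ABC):
    """a benchmark packaged as a module, usable from a cluster job or directly by a user"""

    @abstractmethod
    def run(self, config: dict) -> dict:
        """run one benchmark config and return its measurements"""

class IResultsDb(ABC):
    """where benchmark results get registered (sql database, stdout, etc.)"""

    @abstractmethod
    def store(self, measurements: dict) -> None:
        ...

def run_benchmark(benchmark: IBenchmark, config: dict, resultsdb: IResultsDb) -> None:
    # the runner only depends on the two interfaces, so any benchmark
    # can be combined with any results backend
    resultsdb.store(benchmark.run(config))
```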
- separated `hibenchonphysix.py` into `clusterbench.py` (a tool to run a benchmark on a cluster) and `hibench.py` (the hibridon benchmark module), so that `clusterbench.py` no longer has any knowledge of hibridon.
- there are currently 2 ways to run a benchmark:
1. as a simple run, through the `clusterbench-run` command (which will eventually be renamed `iprbench-run`, since it might be completely independent from the concept of cluster)
2. as cluster jobs, through the `clusterbench-submit` command
- added unit tests
- added another benchmark, `mamul1`, which is used as a unit test because it has 2 benefits over the `hibench` benchmark (an illustrative stand-in is sketched after this list):
1. it's standalone (no external resources needed)
2. it's quicker to execute
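For illustration, a `mamul1`-style benchmark boils down to something like the sketch below (the real `mamul1` presumably times a blas call; the function and parameter names here are made up):

```python
import time
import numpy as np

def mamul1_like(matrix_size: int = 1024, num_loops: int = 10) -> dict:
    """illustrative stand-in for a mamul1-style run: time repeated matrix
    products; standalone (no external resources) and quick to execute,
    which is what makes this kind of workload usable in a unit test"""
    a = np.random.rand(matrix_size, matrix_size)
    b = np.random.rand(matrix_size, matrix_size)
    start = time.perf_counter()
    for _ in range(num_loops):
        a @ b  # dispatched to whatever blas numpy was built against
    return {
        'matrix_size': matrix_size,
        'num_loops': num_loops,
        'duration': time.perf_counter() - start,
    }
```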
note: this refactoring work is not complete yet, but the proof of concept is done (the 2 unit tests pass):
- still need to provide the user with a way to switch between IpRCluster and DummyCluster (which is only intended to be used for testing clusterbench); a possible selection mechanism is sketched after this list
- still need to support running multiple configs of the same benchmark in one run (as `hibenchonphysix.py` did)
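The missing switch could end up as simple as a user-facing factory; this is only a sketch of the pending work, with assumed names and class bodies:

```python
# hypothetical sketch of the pending cluster selection; the factory
# function and the stub classes are assumptions, not actual code
class DummyCluster:
    """fake cluster, only meant for testing clusterbench itself"""

class IpRCluster:
    """the real IPR cluster"""

def create_cluster(cluster_id: str):
    clusters = {'dummy': DummyCluster, 'ipr': IpRCluster}
    try:
        return clusters[cluster_id]()
    except KeyError:
        raise ValueError(f'unknown cluster id: {cluster_id!r}') from None
```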
work related to [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3958] and [https://bugzilla.ipr.univ-rennes.fr/show_bug.cgi?id=3372]