HDF5 is a suite of data-centered technologies: data structures, file formats, APIs, and applications.
There are two ways to build HDF5: using the traditional "configure and make", or using CMake.
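As a rough sketch (version number, prefix, and flags are illustrative, not taken from the original post), the configure-and-make route looks like this:

```shell
# Hypothetical example: adjust the version and install prefix to your setup.
tar xzf hdf5-1.10.2.tar.gz
cd hdf5-1.10.2
./configure --prefix=/usr/local/hdf5 --enable-cxx --enable-fortran
make
make check        # runs the test suite, including the SWMR tests
make install
```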
Using the configure-and-make method, a few of the tests may fail when run on an NFS filesystem, namely use_append_chunk and use_append_mchunks. Each test program first creates a file (successfully) and then tries to open it for reading, which is where it fails. The error output looks like:
157778: continue as the writer process
dataset rank 3, dimensions 0 x 256 x 256
157778: child process exited with non-zero code (1)
Error(s) encountered
HDF5-DIAG: Error detected in HDF5 (1.10.2) thread 0:
#000: H5F.c line 511 in H5Fopen(): unable to open file
major: File accessibilty
minor: Unable to open file
#001: H5Fint.c line 1604 in H5F_open(): unable to read superblock
major: File accessibilty
minor: Read failed
#002: H5Fsuper.c line 630 in H5F__super_read(): truncated file: eof = 479232, sblock->base_addr = 0, stored_eof = 33559007
major: File accessibilty
minor: File has been truncated
H5Fopen failed
read_uc_file encountered error
The "Error detected in … thread 0" in the output at first led me to think this was a threading issue. So I re-configured with thread-safety enabled, which meant that the C++ and Fortran APIs were not built, nor was the high-level library. The tests still failed.
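For reference, a thread-safe reconfigure along these lines (flag names as I recall them; check ./configure --help for your version) disables the incompatible interfaces:

```shell
# Thread-safety is incompatible with the high-level, C++, and Fortran APIs,
# so they must be left out (or forced together with --enable-unsupported).
./configure --enable-threadsafe --disable-hl --disable-cxx --disable-fortran
make
make check
```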
However, running the tests (with the original configuration, i.e. without thread-safety but with the C++, Fortran, and high-level libraries) on a local disk resulted in success.
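To reproduce the passing run, it is enough to place the build tree on a local filesystem before testing. A sketch, assuming /tmp is on local disk:

```shell
# Copy the source tree off the NFS mount, then configure, build, and test there.
cp -r hdf5-1.10.2 /tmp/hdf5-local
cd /tmp/hdf5-local
./configure
make
make check
```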
When building with CMake, all tests pass, even when run on an NFS volume.
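A minimal CMake build might look like the following (a sketch; the source directory name is an assumption, and BUILD_TESTING is the standard CMake switch for enabling tests):

```shell
# Out-of-source CMake build; ctest drives the test programs.
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=ON ../hdf5-1.10.2
cmake --build .
ctest
```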
UPDATE: The fact that some tests fail on NFS mounts is documented on the HDF5 downloads page: "Please be aware! On UNIX platforms the HDF5 tests must be run on a local file system or a parallel file system running GPFS or Lustre in order for the SWMR tests to complete properly."