IOR (Interleaved or Random) is a benchmark application designed to test and evaluate the performance of parallel file systems. It offers several test modes that measure the speed and scalability of a range of file I/O operations. IOR uses the MPI library for its parallel I/O and can benchmark parallel reads and writes under various file access patterns (for example, a single large shared file or many small files).
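Before the test suite below can be run, the source tree has to be configured and compiled. A minimal sketch, assuming a checkout of the IOR repository with autoconf/automake and an MPI compiler wrapper available (the install prefix is only an example):

$ ./bootstrap                       # generate the configure script (needs autotools)
$ ./configure --prefix=$HOME/ior    # example prefix; picks up the MPI compiler wrapper
$ make                              # builds the ior and mdtest binaries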
$ make check
Making check in src
Making check in .
Making check in test
make  testlib testexample
  CC       lib.o
  CCLD     testlib
  CC       example.o
  CCLD     testexample
make  check-TESTS
PASS: testlib
PASS: testexample
============================================================================
Testsuite summary for ior 4.0.0rc2+dev
============================================================================
# TOTAL: 2
# PASS:  2
# SKIP:  0
# XFAIL: 0
# FAIL:  0
# XPASS: 0
# ERROR: 0
============================================================================
Making check in doc
make[1]: Nothing to be done for 'check'.
Making check in contrib
make[1]: Nothing to be done for 'check'.
make[1]: Nothing to be done for 'check-am'.
ior command-line options:

Flags
  -c, --collective       Use collective I/O
  -C                      reorderTasks -- changes task ordering for readback (useful to avoid client cache)
  -e                      fsync -- perform a fsync() operation at the end of each read/write phase
  -E                      useExistingTestFile -- do not remove test file before write access
  -F                      filePerProc -- file-per-process
  -g                      intraTestBarriers -- use barriers between open, write/read, and close
  -k                      keepFile -- don't remove the test file(s) on program exit
  -K                      keepFileWithError -- keep error-filled file(s) after data-checking
  -m                      multiFile -- use number of reps (-i) for multiple file count
  -r                      readFile -- read existing file
  -R                      checkRead -- verify that the output of read matches the expected signature (used with -G)
  -u                      uniqueDir -- use unique directory name for each file-per-process
  -v                      verbose -- output information (repeating flag increases level)
  -w                      writeFile -- write file
  -W                      checkWrite -- check read after write
  -x                      singleXferAttempt -- do not retry transfer if incomplete
  -y                      dualMount -- use dual mount points for a filesystem
  -Y                      fsyncPerWrite -- perform sync operation after every write operation
  -z                      randomOffset -- access is to random, not sequential, offsets within a file
  -Z                      reorderTasksRandom -- changes task ordering to random ordering for readback
  --warningAsErrors       Any warning should lead to an error.
  --dryRun                do not perform any I/Os just run evtl. inputs print dummy output
Optional arguments
  -a=POSIX                API for I/O [POSIX|DUMMY|MPIIO|MMAP]
  -A=0                    refNum -- user supplied reference number to include in the summary
  -b=1048576              blockSize -- contiguous bytes to write per task (e.g.: 8, 4k, 2m, 1g)
  -d=0                    interTestDelay -- delay between reps in seconds
  -D=0                    deadlineForStonewalling -- seconds before stopping write or read phase
  -O stoneWallingWearOut=1            -- once the stonewalling timeout is over, all process finish to access the amount of data
  -O stoneWallingWearOutIterations=N  -- stop after processing this number of iterations, needed for reading data back written with stoneWallingWearOut
  -O stoneWallingStatusFile=FILE      -- this file keeps the number of iterations from stonewalling during write and allows to use them for read
  -O minTimeDuration=0                -- minimum Runtime for the run (will repeat from beginning of the file if time is not yet over)
  -f=STRING               scriptFile -- test script name
  -G=0                    setTimeStampSignature -- set value for time stamp signature/random seed
  -i=1                    repetitions -- number of repetitions of test
  -j=0                    outlierThreshold -- warn on outlier N seconds from mean
  -l, --dataPacketType=STRING         datapacket type -- type of packet that will be created [offset|incompressible|timestamp|random|o|i|t|r]
  -M=STRING               memoryPerNode -- hog memory on the node (e.g.: 2g, 75%)
  -N=-1                   numTasks -- number of tasks that are participating in the test (overrides MPI)
  -o=testFile             testFile -- full name for test
  -O=STRING               string of IOR directives (e.g. -O checkRead=1,GPUid=2)
  -Q=1                    taskPerNodeOffset for read tests use with -C & -Z options (-C constant N, -Z at least N)
  -s=1                    segmentCount -- number of segments
  -t=262144               transferSize -- size of transfer in bytes (e.g.: 8, 4k, 2m, 1g)
  -T=0                    maxTimeDuration -- max time in minutes executing repeated test; it aborts only between iterations and not within a test!
  -X=0                    reorderTasksRandomSeed -- random seed for -Z option
  --randomPrefill=0       For random -z access only: Prefill the file with this blocksize, e.g., 2m
  --random-offset-seed=-1 The seed for -z
  -O summaryFile=FILE                 -- store result data into this file
  -O summaryFormat=[default,JSON,CSV] -- use the format for outputting the summary
  -O saveRankPerformanceDetailsCSV=<FILE> -- store the performance of each rank into the named CSV file.
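To illustrate how these options combine, a hypothetical run (the target path and the process count are placeholders, and the ior binary is assumed to be on PATH) that writes and then reads back one file per process, using 16 MiB blocks transferred in 1 MiB chunks across 4 segments and repeating the whole test 3 times, might look like:

$ mpirun -np 8 ior -a POSIX -w -r -C -F \
    -b 16m -t 1m -s 4 -i 3 \
    -o /mnt/pfs/ior_testfile    # placeholder path on the file system under test

Dropping -F switches the same run to a single shared file accessed by all tasks, and -C shifts which data each task reads back so the client cache is not re-read.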
Module POSIX

Flags
  --posix.odirect            Direct I/O Mode
  --posix.rangelocks         Use range locks (read locks for read ops)

Module DUMMY

Flags
  --dummy.delay-only-rank0   Delay only Rank0

Optional arguments
  --dummy.delay-create=0     Delay per create in usec
  --dummy.delay-close=0      Delay per close in usec
  --dummy.delay-sync=0       Delay for sync in usec
  --dummy.delay-xfer=0       Delay per xfer in usec

Module MPIIO

Flags
  --mpiio.showHints          Show MPI hints
  --mpiio.preallocate        Preallocate file size
  --mpiio.useStridedDatatype put strided access into datatype
  --mpiio.useFileView        Use MPI_File_set_view

Optional arguments
  --mpiio.hintsFileName=STRING   Full name for hints file

Module MMAP

Flags
  --mmap.madv_dont_need      Use advise don't need
  --mmap.madv_pattern        Use advise to indicate the pattern random/sequential
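These module flags are only honored by the matching backend selected with -a. As a sketch (path and process count are placeholders), an MPI-IO run that enables collective I/O, sets a file view, and prints the MPI hints in effect could be written as:

$ mpirun -np 8 ior -a MPIIO -c --mpiio.useFileView --mpiio.showHints \
    -w -r -b 4m -t 1m -o /mnt/pfs/ior_mpiio_file    # placeholder path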
mdtest command-line options (mdtest is the metadata benchmark built alongside ior from the same source tree):

Flags
  -C                      only create files/dirs
  -T                      only stat files/dirs
  -E                      only read files/dir
  -r                      only remove files or directories left behind by previous runs
  -D                      perform test on directories only (no files)
  -F                      perform test on files only (no directories)
  -k                      use mknod to create file
  -L                      files only at leaf level of tree
  -P                      print rate AND time
  --print-all-procs       all processes print an excerpt of their results
  -R                      random access to files (only for stat)
  -S                      shared file access (file only, no directories)
  -c                      collective creates: task 0 does all creates
  -t                      time unique working directory overhead
  -u                      unique working directory for each task
  -v                      verbosity (each instance of option increments by one)
  -X, --verify-read       Verify the data read
  --verify-write          Verify the data after a write by reading it back immediately
  -y                      sync file after writing
  -Y                      call the sync command after each phase (included in the timing; note it causes all IO to be flushed from your node)
  -Z                      print time instead of rate
  --allocateBufferOnGPU   Allocate the buffer on the GPU.
  --warningAsErrors       Any warning should lead to an error.
  --showRankStatistics    Include statistics per rank
Optional arguments
  -a=STRING               API for I/O [POSIX|DUMMY]
  -b=1                    branching factor of hierarchical directory structure
  -d=./out                directory or multiple directories where the test will run [dir|dir1@dir2@dir3...]
  -B=0                    no barriers between phases
  -e=0                    bytes to read from each file
  -f=1                    first number of tasks on which the test will run
  -G=-1                   Offset for the data in the read/write buffer, if not set, a random value is used
  -i=1                    number of iterations the test will run
  -I=0                    number of items per directory in tree
  -l=0                    last number of tasks on which the test will run
  -n=0                    every process will creat/stat/read/remove # directories and files
  -N=0                    stride # between tasks for file/dir operation (local=0; set to 1 to avoid client cache)
  -p=0                    pre-iteration delay (in seconds)
  --random-seed=0         random seed for -R
  -s=1                    stride between the number of tasks for each test
  -V=0                    verbosity value
  -w=0                    bytes to write to each file after it is created
  -W=0                    number in seconds; stonewall timer, write as many seconds and ensure all processes did the same number of operations (currently only stops during create phase and files)
  -x=STRING               StoneWallingStatusFile; contains the number of iterations of the creation phase, can be used to split phases across runs
  -z=0                    depth of hierarchical directory structure
  --dataPacketType=t      type of packet that will be created [offset|incompressible|timestamp|random|o|i|t|r]
  --run-cmd-before-phase=STRING   call this external command before each phase (excluded from the timing)
  --run-cmd-after-phase=STRING    call this external command after each phase (included in the timing)
  --saveRankPerformanceDetails=STRING   Save the individual rank information into this CSV file.
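As a concrete (placeholder) example, the following run has each of 8 tasks create, stat, read, and remove 1000 directories and files in its own working directory, repeated for 3 iterations; -z and -b could additionally be used to spread the items over a directory tree:

$ mpirun -np 8 mdtest -n 1000 -i 3 -u \
    -d /mnt/pfs/mdtest-out    # placeholder working directory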
Module POSIX

Flags
  --posix.odirect            Direct I/O Mode
  --posix.rangelocks         Use range locks (read locks for read ops)

Module DUMMY

Flags
  --dummy.delay-only-rank0   Delay only Rank0

Optional arguments
  --dummy.delay-create=0     Delay per create in usec
  --dummy.delay-close=0      Delay per close in usec
  --dummy.delay-sync=0       Delay for sync in usec
  --dummy.delay-xfer=0       Delay per xfer in usec

Module MPIIO

Flags
  --mpiio.showHints          Show MPI hints
  --mpiio.preallocate        Preallocate file size
  --mpiio.useStridedDatatype put strided access into datatype
  --mpiio.useFileView        Use MPI_File_set_view

Optional arguments
  --mpiio.hintsFileName=STRING   Full name for hints file

Module MMAP

Flags
  --mmap.madv_dont_need      Use advise don't need
  --mmap.madv_pattern        Use advise to indicate the pattern random/sequential
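The backend modules apply to mdtest in the same way as to ior. For instance, as a sketch with a placeholder path, the DUMMY backend (which performs no real I/O) can be selected to gauge framework overhead, optionally with an artificial per-create delay:

$ mpirun -np 8 mdtest -a DUMMY --dummy.delay-create=100 \
    -n 1000 -d /mnt/pfs/mdtest-out    # placeholder path; no files are actually created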
Verbosity. Repeating -v on the command line raises the verbosity of the output; there are six levels in total:
  0: default; only bare essentials are shown.
  1: max clock deviation, participating tasks, free space, access pattern, commence/verify access notification with time.
  2: rank/hostname, machine name, timer used, individual repetition performance results, timestamp used for data signature.
  3: full test details, transfer block/offset compared, individual data checking errors, environment variables, task writing/reading file name, all test operation times.
  4: task id and offset for each transfer.
  5: each 8-byte data signature comparison (WARNING: more data to STDOUT than stored in file, use carefully).
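For example, passing -v twice (path again a placeholder) raises the level to 2, adding per-repetition results and rank/hostname details to the report:

$ mpirun -np 4 ior -w -r -v -v -b 4m -t 1m -o /mnt/pfs/ior_testfile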
Usage: mpiexec [OPTION]... [PROGRAM]...
Start the given program using Open RTE
  -c|-np|--np <arg0>    Number of processes to run
  -h|--help <arg0>      This help message
  -n|--n <arg0>         Number of processes to run
  -q|--quiet            Suppress helpful messages
  -v|--verbose          Be verbose
  -V|--version          Print version and exit
For additional mpirun arguments, run 'mpirun --help <category>'
The following categories exist: general (Defaults to this option), debug, output, input, mapping, ranking, binding, devel (arguments useful to OMPI Developers), compatibility (arguments supported for backwards compatibility), launch (arguments to modify launch options), and dvm (Distributed Virtual Machine arguments).
Report bugs to http://www.open-mpi.org/community/help/
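Since ior and mdtest are MPI programs, they are started through this launcher. With Open MPI, mpiexec and mpirun invoke the same launcher and -n/-np are synonyms (see the option list above), so the earlier examples can equally be written as, for instance:

$ mpiexec -n 8 ior -a POSIX -w -r -F -b 16m -t 1m -s 4 -i 3 \
    -o /mnt/pfs/ior_testfile    # placeholder path, as before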