Sandia National Laboratories

TEST2
(Phase 1)

Description:
Each test is a file-per-process test in which every file has a stripe count of 1 (each file resides on a single OST). As the number of clients increases, OSTs are allocated one per OSS until the maximum number of OSSs (28) is reached. Once all 28 OSSs are in use, allocation wraps around and the remaining unallocated OSTs are used until the maximum number of OSTs (56) is reached. Once all 56 OSTs are in use, additional clients are again assigned OSTs one per OSS, so OSTs become oversubscribed. For example, tests using 1-28 clients allocate a single OST per OSS. A test using 29 clients utilizes 29 OSTs: 27 OSSs each contribute one OST, and one OSS contributes two. A test using 56 clients allocates all 56 OSTs across the 28 available OSSs. A test using 57 clients oversubscribes a single OST.
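
To make the allocation pattern concrete, the following is a minimal Python sketch of the mapping described above. The constants, function name, and indexing are illustrative assumptions; the actual assignment is made by the test harness and Lustre, not by this code.

from collections import Counter

NUM_OSS = 28
OSTS_PER_OSS = 2
NUM_OST = NUM_OSS * OSTS_PER_OSS  # 56

def ost_for_client(client_index):
    # Which of the 56 OSTs this client's single-stripe file lands on,
    # wrapping around once every OST is already in use (oversubscription).
    ost_slot = client_index % NUM_OST
    oss = ost_slot % NUM_OSS          # spread consecutive clients across OSSs
    ost_on_oss = ost_slot // NUM_OSS  # 0 on the first pass, 1 on the second
    return oss, ost_on_oss

for nclients in (28, 29, 56, 57):
    per_ost = Counter(ost_for_client(c) for c in range(nclients))
    print(nclients, "clients:", len(per_ost), "OSTs used,",
          max(per_ost.values()), "client(s) on the busiest OST")

Running the loop reproduces the examples above: 29 clients use 29 OSTs, 56 clients use all 56, and 57 clients place two files on a single OST.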

Purpose:
This test shows how the file-system responds both when the number of compute nodes is less than the number of OSTs and when the number of compute nodes exceeds the number of OSTs.

[Graph: Performance]

[Graph: AVG per-OST Performance]

AXES:
Y - MiB/sec aggregate transfer for each data point
X - <clients.stripe>, client count increasing from 1-246 with a constant stripe count of 1

Testing Parameters:
TEST CODE:    IOR
VERSION:    2.8.10
API:    POSIX
ACCESS:    file-per-process
MAX OSTs:    56
MAX OSSs:    28
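
As a rough illustration of the parameters above, the sketch below shows how a single file-per-process data point might be launched. The directory path, block and transfer sizes, and MPI launcher details are assumptions, not values taken from the test; only the stripe count of 1, the POSIX API, and the file-per-process access pattern come from the parameters listed.

import subprocess

def run_ior(nclients, test_dir="/mnt/lustre/test2"):  # hypothetical path
    # Constrain files created under test_dir to a single stripe (one OST each).
    subprocess.run(["lfs", "setstripe", "-c", "1", test_dir], check=True)
    # One MPI task per client, each writing and reading its own file
    # (IOR options: -a API, -F file-per-process, -b block size,
    #  -t transfer size, -o test file; the sizes here are placeholders).
    subprocess.run(
        ["mpirun", "-np", str(nclients),
         "IOR", "-a", "POSIX", "-F", "-b", "1g", "-t", "1m",
         "-o", f"{test_dir}/ior_file"],
        check=True)

run_ior(56)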

Observations:
An initial ramp-up occurs as additional OSTs are brought into use over the range of 1-56 clients. A clear drop-off occurs at the data point using 66 clients. Both read and write rates increase again as the ratio of clients to OSTs becomes balanced (for example, at 112 clients each of the 56 OSTs serves exactly two clients). This step pattern is somewhat evident throughout the remainder of the graph. We also note that large error bars are present, particularly for data points using larger numbers of clients.

Data in TABLE format

RAW data
