lfs setstripe /scratch1/jhlaros/8-stripe/154/ior_test 2097152 0 8
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Fri Jun 23 17:11:22 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/154/ior_test -b 2048m -k -w -t 16m
Machine: catamount rsclogin01
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/154:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/154") call
Participating tasks: 154

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/154/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 154 (154 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 308 GiB

Commencing write performance test.
Fri Jun 23 17:11:22 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
write     994.61     2097152    16384      0.168839   316.93     0.018582   0   

Max Write: 994.61 MiB/sec (1042.92 MB/sec)

Run finished: Fri Jun 23 17:16:39 2006
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Fri Jun 23 17:16:43 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/154/ior_test -b 2048m -k -r -t 16m
Machine: catamount rsclogin01
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/154:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/154") call
Participating tasks: 154

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/154/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 154 (154 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 308 GiB

Commencing read performance test.
Fri Jun 23 17:16:43 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
read      447.55     2097152    16384      0.151135   704.53     0.038326   0   

Max Read:  447.55 MiB/sec (469.29 MB/sec)

Run finished: Fri Jun 23 17:28:28 2006
OBDS:
0: ost11028_0_UUID ACTIVE
1: ost11028_1_UUID ACTIVE
2: ost11031_0_UUID ACTIVE
3: ost11031_1_UUID ACTIVE
4: ost11032_0_UUID ACTIVE
5: ost11032_1_UUID ACTIVE
6: ost11035_0_UUID ACTIVE
7: ost11035_1_UUID ACTIVE
/scratch1/jhlaros/8-stripe/154/ior_test
	obdidx		 objid		objid		 group
	     0	            84	         0x54	             0
	     1	            84	         0x54	             0
	     2	            84	         0x54	             0
	     3	            85	         0x55	             0
	     4	            85	         0x55	             0
	     5	            84	         0x54	             0
	     6	            84	         0x54	             0
	     7	            84	         0x54	             0
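
Annotation: the positional `lfs setstripe <file> <size> <offset> <count>` form used above requests a 2 MiB (2097152-byte) stripe size across 8 OSTs starting at index 0, and the object map above shows one object on each of the 8 OSTs. As a rough illustrative sketch (not part of the run output), round-robin RAID-0 striping maps file offsets to stripe objects like this:

```python
# Illustrative sketch of round-robin (RAID-0) striping, using the parameters
# from the "lfs setstripe ... 2097152 0 8" command above.
STRIPE_SIZE = 2097152        # 2 MiB stripe size
STRIPE_COUNT = 8             # one object per OST, indices 0..7

def stripe_index(offset):
    """Index (in layout order) of the stripe object holding this byte offset."""
    return (offset // STRIPE_SIZE) % STRIPE_COUNT

# The first 8 stripes fall on objects 0..7 in order; a 16 MiB transfer
# (the -t 16m xfersize above) spans exactly one full round of all 8 objects.
print([stripe_index(i * STRIPE_SIZE) for i in range(9)])
# -> [0, 1, 2, 3, 4, 5, 6, 7, 0]
```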

lfs setstripe /scratch1/jhlaros/8-stripe/154/ior_test 2097152 0 8
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Fri Jun 23 17:28:30 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/154/ior_test -b 2048m -k -w -t 16m
Machine: catamount rsclogin01
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/154:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/154") call
Participating tasks: 154

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/154/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 154 (154 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 308 GiB

Commencing write performance test.
Fri Jun 23 17:28:30 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
write     968.21     2097152    16384      0.168088   325.57     0.019214   0   

Max Write: 968.21 MiB/sec (1015.24 MB/sec)

Run finished: Fri Jun 23 17:33:56 2006
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Fri Jun 23 17:33:59 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/154/ior_test -b 2048m -k -r -t 16m
Machine: catamount rsclogin01
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/154:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/154") call
Participating tasks: 154

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/154/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 154 (154 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 308 GiB

Commencing read performance test.
Fri Jun 23 17:33:59 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
read      444.21     2097152    16384      0.153686   709.85     0.038611   0   

Max Read:  444.21 MiB/sec (465.78 MB/sec)

Run finished: Fri Jun 23 17:45:49 2006
OBDS:
0: ost11028_0_UUID ACTIVE
1: ost11028_1_UUID ACTIVE
2: ost11031_0_UUID ACTIVE
3: ost11031_1_UUID ACTIVE
4: ost11032_0_UUID ACTIVE
5: ost11032_1_UUID ACTIVE
6: ost11035_0_UUID ACTIVE
7: ost11035_1_UUID ACTIVE
/scratch1/jhlaros/8-stripe/154/ior_test
	obdidx		 objid		objid		 group
	     0	            85	         0x55	             0
	     1	            85	         0x55	             0
	     2	            85	         0x55	             0
	     3	            86	         0x56	             0
	     4	            86	         0x56	             0
	     5	            85	         0x55	             0
	     6	            85	         0x55	             0
	     7	            85	         0x55	             0
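
Annotation: the bandwidth IOR reports appears to be the 308 GiB aggregate divided by the total open + write/read + close time. A quick cross-check (a sketch using only numbers logged above) against the first write run:

```python
# Cross-check of IOR's reported write bandwidth for the first run above.
clients = 154                # "Participating tasks: 154"
block_mib = 2048             # "-b 2048m" per client
aggregate_mib = clients * block_mib            # 315392 MiB = 308 GiB

open_s, wr_s, close_s = 0.168839, 316.93, 0.018582   # run-1 write timings
bw = aggregate_mib / (open_s + wr_s + close_s)
print(round(bw, 1))          # ~994.6, matching the reported 994.61 MiB/s

# The parenthesized MB/s figure is the same rate in base-10 megabytes:
print(round(994.61 * 1024**2 / 1000**2, 2))    # -> 1042.92
```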

lfs setstripe /scratch1/jhlaros/8-stripe/154/ior_test 2097152 0 8
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Fri Jun 23 17:45:52 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/154/ior_test -b 2048m -k -w -t 16m
Machine: catamount rsclogin01
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/154:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/154") call
Participating tasks: 154

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/154/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 154 (154 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 308 GiB

Commencing write performance test.
Fri Jun 23 17:45:52 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
write     1000.23    2097152    16384      0.169184   315.14     0.018698   0   

Max Write: 1000.23 MiB/sec (1048.82 MB/sec)

Run finished: Fri Jun 23 17:51:08 2006
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Fri Jun 23 17:51:10 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/154/ior_test -b 2048m -k -r -t 16m
Machine: catamount rsclogin01
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/154:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/154") call
Participating tasks: 154

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/154/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 154 (154 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 308 GiB

Commencing read performance test.
Fri Jun 23 17:51:10 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
read      451.44     2097152    16384      0.153748   698.46     0.038162   0   

Max Read:  451.44 MiB/sec (473.37 MB/sec)

Run finished: Fri Jun 23 18:02:49 2006
OBDS:
0: ost11028_0_UUID ACTIVE
1: ost11028_1_UUID ACTIVE
2: ost11031_0_UUID ACTIVE
3: ost11031_1_UUID ACTIVE
4: ost11032_0_UUID ACTIVE
5: ost11032_1_UUID ACTIVE
6: ost11035_0_UUID ACTIVE
7: ost11035_1_UUID ACTIVE
/scratch1/jhlaros/8-stripe/154/ior_test
	obdidx		 objid		objid		 group
	     0	            86	         0x56	             0
	     1	            86	         0x56	             0
	     2	            86	         0x56	             0
	     3	            87	         0x57	             0
	     4	            87	         0x57	             0
	     5	            86	         0x56	             0
	     6	            86	         0x56	             0
	     7	            86	         0x56	             0

lfs setstripe /scratch1/jhlaros/8-stripe/154/ior_test 2097152 0 8
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Fri Jun 23 18:02:52 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/154/ior_test -b 2048m -k -w -t 16m
Machine: catamount rsclogin01
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/154:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/154") call
Participating tasks: 154

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/154/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 154 (154 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 308 GiB

Commencing write performance test.
Fri Jun 23 18:02:52 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
write     984.73     2097152    16384      0.167663   320.10     0.018955   0   

Max Write: 984.73 MiB/sec (1032.57 MB/sec)

Run finished: Fri Jun 23 18:08:13 2006
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Fri Jun 23 18:08:15 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/154/ior_test -b 2048m -k -r -t 16m
Machine: catamount rsclogin01
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/154:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/154") call
Participating tasks: 154

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/154/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 154 (154 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 308 GiB

Commencing read performance test.
Fri Jun 23 18:08:15 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
read      442.64     2097152    16384      0.142143   712.37     0.039037   0   

Max Read:  442.64 MiB/sec (464.14 MB/sec)

Run finished: Fri Jun 23 18:20:08 2006
OBDS:
0: ost11028_0_UUID ACTIVE
1: ost11028_1_UUID ACTIVE
2: ost11031_0_UUID ACTIVE
3: ost11031_1_UUID ACTIVE
4: ost11032_0_UUID ACTIVE
5: ost11032_1_UUID ACTIVE
6: ost11035_0_UUID ACTIVE
7: ost11035_1_UUID ACTIVE
/scratch1/jhlaros/8-stripe/154/ior_test
	obdidx		 objid		objid		 group
	     0	            87	         0x57	             0
	     1	            87	         0x57	             0
	     2	            87	         0x57	             0
	     3	            88	         0x58	             0
	     4	            88	         0x58	             0
	     5	            87	         0x57	             0
	     6	            87	         0x57	             0
	     7	            87	         0x57	             0
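
Annotation: across the four write/read repetitions above, the maxima are tightly clustered (writes 968-1000 MiB/s, reads 443-451 MiB/s). A small sketch aggregating the values copied verbatim from the runs:

```python
# Summary statistics over the four runs logged above (values copied verbatim).
write_mib_s = [994.61, 968.21, 1000.23, 984.73]
read_mib_s  = [447.55, 444.21, 451.44, 442.64]

for name, vals in (("write", write_mib_s), ("read", read_mib_s)):
    spread = max(vals) - min(vals)
    print(f"{name}: mean {sum(vals)/len(vals):.1f} MiB/s, spread {spread:.1f}")
```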