error on ioctl 0x4008669a for '/scratch1/jhlaros/8-stripe/160/ior_test' (3): stripe already set
error: setstripe: create stripe file failed
lfs setstripe /scratch1/jhlaros/8-stripe/160/ior_test 2097152 0 8
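
The two error lines above come from lfs setstripe being pointed at a file left over from an earlier run: a Lustre file's layout is fixed when the file is created, so setstripe refuses once a striped file already exists ("stripe already set"). The command uses the old positional form, lfs setstripe <file> <stripe-size> <stripe-offset> <stripe-count>, here a 2 MiB stripe size, starting OST index 0, and 8 stripes. A minimal sketch of recreating the file with the same layout, assuming a Lustre client with the newer option-style lfs (the flag spellings are an assumption, not taken from this log):

    # remove the leftover file so a new layout can be applied, then recreate it
    # with a 2 MiB stripe size, starting at OST index 0, across 8 OSTs
    rm /scratch1/jhlaros/8-stripe/160/ior_test
    lfs setstripe -S 2M -i 0 -c 8 /scratch1/jhlaros/8-stripe/160/ior_test
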
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Sat Jun 24 17:37:28 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/160/ior_test -b 2048m -k -w -t 16m
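For reference, a hedged reading of the options on that command line, based on the standard IOR 2.x flag meanings rather than anything stated in this log:

    # IOR -g -E -v -a POSIX -C -i 1 -o <file> -b 2048m -k -w -t 16m
    #   -g          barriers between the open, transfer, and close phases
    #   -E          use the existing test file (so the lfs setstripe layout is kept)
    #   -v          verbose output
    #   -a POSIX    POSIX I/O API
    #   -C          reorder tasks so each rank reads back data written by a different node
    #   -i 1        one repetition
    #   -o <file>   test file path
    #   -b 2048m    2 GiB block per task
    #   -k          keep the test file after the run
    #   -w / -r     write-only / read-only pass
    #   -t 16m      16 MiB transfer size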
Machine: catamount rsclogin02
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/160:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/160") call
Participating tasks: 160

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/160/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 160 (160 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 320 GiB

Commencing write performance test.
Sat Jun 24 17:37:28 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
write     1002.90    2097152    16384      0.174924   326.54     0.019552   0   

Max Write: 1002.90 MiB/sec (1051.61 MB/sec)

Run finished: Sat Jun 24 17:42:55 2006
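
The reported bandwidth follows directly from the numbers in the result line: 320 GiB moved in the open + write + close time. A quick check of that arithmetic (a sketch; that IOR divides by the sum of these three columns is inferred from the output, not stated in it):

    # 320 GiB = 327680 MiB; elapsed = open + wr + close seconds
    echo "scale=2; 327680 / (0.174924 + 326.54 + 0.019552)" | bc   # ~1002.9 MiB/s
    # MiB/s -> MB/s (decimal megabytes), matching the parenthesized figure
    echo "scale=2; 1002.90 * 1048576 / 1000000" | bc               # ~1051.6 MB/s
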
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Sat Jun 24 17:42:58 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/160/ior_test -b 2048m -k -r -t 16m
Machine: catamount rsclogin02
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/160:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/160") call
Participating tasks: 160

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/160/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 160 (160 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 320 GiB

Commencing read performance test.
Sat Jun 24 17:42:58 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
read      454.48     2097152    16384      0.159042   720.82     0.039561   0   

Max Read:  454.48 MiB/sec (476.55 MB/sec)

Run finished: Sat Jun 24 17:54:59 2006
OBDS:
0: ost11028_0_UUID ACTIVE
1: ost11028_1_UUID ACTIVE
2: ost11031_0_UUID ACTIVE
3: ost11031_1_UUID ACTIVE
4: ost11032_0_UUID ACTIVE
5: ost11032_1_UUID ACTIVE
6: ost11035_0_UUID ACTIVE
7: ost11035_1_UUID ACTIVE
/scratch1/jhlaros/8-stripe/160/ior_test
	obdidx		 objid		objid		 group
	     0	            89	         0x59	             0
	     1	            89	         0x59	             0
	     2	            89	         0x59	             0
	     3	            90	         0x5a	             0
	     4	            90	         0x5a	             0
	     5	            89	         0x59	             0
	     6	            89	         0x59	             0
	     7	            89	         0x59	             0
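
The OBDS list and the obdidx/objid table above are the layout report for the test file: eight active OSTs, with one object (objid) of the file on each. A hedged sketch of how this listing is typically produced on a Lustre client (the command is standard lfs, though this log does not show it being invoked):

    lfs getstripe /scratch1/jhlaros/8-stripe/160/ior_test

The objid values advance by one in the later cycles of this log, which is consistent with the test file being removed and recreated with fresh objects on the same OSTs between cycles.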

lfs setstripe /scratch1/jhlaros/8-stripe/160/ior_test 2097152 0 8
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Sat Jun 24 17:55:02 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/160/ior_test -b 2048m -k -w -t 16m
Machine: catamount rsclogin02
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/160:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/160") call
Participating tasks: 160

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/160/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 160 (160 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 320 GiB

Commencing write performance test.
Sat Jun 24 17:55:02 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
write     965.38     2097152    16384      0.177363   339.25     0.020063   0   

Max Write: 965.38 MiB/sec (1012.28 MB/sec)

Run finished: Sat Jun 24 18:00:42 2006
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Sat Jun 24 18:00:44 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/160/ior_test -b 2048m -k -r -t 16m
Machine: catamount rsclogin02
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/160:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/160") call
Participating tasks: 160

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/160/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 160 (160 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 320 GiB

Commencing read performance test.
Sat Jun 24 18:00:44 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
read      440.15     2097152    16384      0.159016   744.29     0.040877   0   

Max Read:  440.15 MiB/sec (461.53 MB/sec)

Run finished: Sat Jun 24 18:13:09 2006
OBDS:
0: ost11028_0_UUID ACTIVE
1: ost11028_1_UUID ACTIVE
2: ost11031_0_UUID ACTIVE
3: ost11031_1_UUID ACTIVE
4: ost11032_0_UUID ACTIVE
5: ost11032_1_UUID ACTIVE
6: ost11035_0_UUID ACTIVE
7: ost11035_1_UUID ACTIVE
/scratch1/jhlaros/8-stripe/160/ior_test
	obdidx		 objid		objid		 group
	     0	            90	         0x5a	             0
	     1	            90	         0x5a	             0
	     2	            90	         0x5a	             0
	     3	            91	         0x5b	             0
	     4	            91	         0x5b	             0
	     5	            90	         0x5a	             0
	     6	            90	         0x5a	             0
	     7	            90	         0x5a	             0

lfs setstripe /scratch1/jhlaros/8-stripe/160/ior_test 2097152 0 8
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Sat Jun 24 18:13:12 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/160/ior_test -b 2048m -k -w -t 16m
Machine: catamount rsclogin02
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/160:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/160") call
Participating tasks: 160

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/160/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 160 (160 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 320 GiB

Commencing write performance test.
Sat Jun 24 18:13:12 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
write     984.64     2097152    16384      0.174523   332.60     0.019877   0   

Max Write: 984.64 MiB/sec (1032.47 MB/sec)

Run finished: Sat Jun 24 18:18:45 2006
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Sat Jun 24 18:18:48 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/160/ior_test -b 2048m -k -r -t 16m
Machine: catamount rsclogin02
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/160:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/160") call
Participating tasks: 160

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/160/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 160 (160 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 320 GiB

Commencing read performance test.
Sat Jun 24 18:18:48 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
read      450.34     2097152    16384      0.159596   727.45     0.039780   0   

Max Read:  450.34 MiB/sec (472.22 MB/sec)

Run finished: Sat Jun 24 18:30:56 2006
OBDS:
0: ost11028_0_UUID ACTIVE
1: ost11028_1_UUID ACTIVE
2: ost11031_0_UUID ACTIVE
3: ost11031_1_UUID ACTIVE
4: ost11032_0_UUID ACTIVE
5: ost11032_1_UUID ACTIVE
6: ost11035_0_UUID ACTIVE
7: ost11035_1_UUID ACTIVE
/scratch1/jhlaros/8-stripe/160/ior_test
	obdidx		 objid		objid		 group
	     0	            91	         0x5b	             0
	     1	            91	         0x5b	             0
	     2	            91	         0x5b	             0
	     3	            92	         0x5c	             0
	     4	            92	         0x5c	             0
	     5	            91	         0x5b	             0
	     6	            91	         0x5b	             0
	     7	            91	         0x5b	             0

lfs setstripe /scratch1/jhlaros/8-stripe/160/ior_test 2097152 0 8
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Sat Jun 24 18:30:58 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/160/ior_test -b 2048m -k -w -t 16m
Machine: catamount rsclogin02
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/160:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/160") call
Participating tasks: 160

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/160/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 160 (160 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 320 GiB

Commencing write performance test.
Sat Jun 24 18:30:58 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
write     947.05     2097152    16384      0.175593   345.81     0.020434   0   

Max Write: 947.05 MiB/sec (993.05 MB/sec)

Run finished: Sat Jun 24 18:36:44 2006
yod allocation delayed for Lustre recovery
IOR-2.8.10: MPI Coordinated Test of Parallel I/O

Run began: Sat Jun 24 18:36:47 2006
Command line used: /home/jhlaros/thurs-5-18-06/test5/IOR -g -E -v -a POSIX -C -i 1 -o /scratch1/jhlaros/8-stripe/160/ior_test -b 2048m -k -r -t 16m
Machine: catamount rsclogin02
Maximum wall clock deviation: 0.00 sec
df /scratch1/jhlaros/8-stripe/160:
WARNING: Not using system("df /scratch1/jhlaros/8-stripe/160") call
Participating tasks: 160

Summary:
	api                = POSIX
	test filename      = /scratch1/jhlaros/8-stripe/160/ior_test
	access             = single-shared-file
	pattern            = segmented (1 segment)
	clients            = 160 (160 per node)
	repetitions        = 1
	xfersize           = 16 MiB
	blocksize          = 2 GiB
	aggregate filesize = 320 GiB

Commencing read performance test.
Sat Jun 24 18:36:47 2006

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
read      435.64     2097152    16384      0.158981   752.00     0.040656   0   

Max Read:  435.64 MiB/sec (456.80 MB/sec)

Run finished: Sat Jun 24 18:49:20 2006
OBDS:
0: ost11028_0_UUID ACTIVE
1: ost11028_1_UUID ACTIVE
2: ost11031_0_UUID ACTIVE
3: ost11031_1_UUID ACTIVE
4: ost11032_0_UUID ACTIVE
5: ost11032_1_UUID ACTIVE
6: ost11035_0_UUID ACTIVE
7: ost11035_1_UUID ACTIVE
/scratch1/jhlaros/8-stripe/160/ior_test
	obdidx		 objid		objid		 group
	     0	            92	         0x5c	             0
	     1	            92	         0x5c	             0
	     2	            92	         0x5c	             0
	     3	            93	         0x5d	             0
	     4	            93	         0x5d	             0
	     5	            92	         0x5c	             0
	     6	            92	         0x5c	             0
	     7	            92	         0x5c	             0