ENFS is a scalable parallel I/O solution currently running on Cplant clusters. The ENFS daemons run on dedicated I/O nodes in the service partition. Each compute node is mapped to one of the daemons in a round-robin selection at boot time. The daemons serve as proxies between the file server(s) external to the Cplant cluster and the compute and service nodes.
Without ENFS, standard I/O from a user application is routed back to the service node where the job was initiated. For a compute job running on several hundred nodes, this creates a significant bottleneck. Multiple I/O proxies allow simultaneous requests from many compute nodes to be handled in parallel, greatly improving aggregate I/O rates. Proper use of ENFS also relieves the load on the service nodes. The ENFS file systems are also mounted on the visualization machines tesla and discovery, so data is immediately available for post-processing.
The Cplant clusters provide access to scratch directories mounted on the service nodes at /enfs/tmp/username. The SRN scratch directory is /enfs/tmp/username on tesla; the SON scratch directory is /enfs/tmp/username on discovery. If your directory does not exist, you should be able to create it using mkdir. These directories can be accessed efficiently from a user application by referencing the file name as:

enfs:/enfs/tmp/username/path_to_file

Note that the full path name is required. If the prefix "enfs:" is omitted, the file operations still work, but they are routed back through the service node from which the job was launched.
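For illustration only, here is a minimal C sketch of writing to the scratch directory through ENFS; the username and file name are placeholders, not actual paths on the system:

#include <stdio.h>

int main(void)
{
    /* The "enfs:" prefix routes the I/O through the ENFS proxy daemon
       instead of back through the service node.  The username and file
       name below are placeholders. */
    FILE *fp = fopen("enfs:/enfs/tmp/username/results.dat", "w");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    fprintf(fp, "output from this compute node\n");
    fclose(fp);
    return 0;
}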
ENFS does no file locking and provides no synchronization of writes or reads from different compute nodes. This means that user applications should not attempt overlapping writes through ENFS. It is also not safe to write a file from one node and read it from another unless the file is closed after the write, since the order of these operations is otherwise not guaranteed.
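As an illustration of that constraint, the following sketch (assuming an MPI application and placeholder username and file names) has one node write and close a file, synchronizes all nodes, and only then lets another node read it. The barrier in the application, not ENFS, provides the ordering:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    char line[128];
    FILE *fp;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Writer: finish and close the file before any other node reads it.
           ENFS will not order these operations for you. */
        fp = fopen("enfs:/enfs/tmp/username/shared.dat", "w");
        if (fp != NULL) {
            fprintf(fp, "handoff data from rank 0\n");
            fclose(fp);
        }
    }

    /* Application-level synchronization: no node passes this point
       until rank 0 has closed the file. */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 1) {
        /* Reader: safe only because the writer has already closed the file. */
        fp = fopen("enfs:/enfs/tmp/username/shared.dat", "r");
        if (fp != NULL) {
            if (fgets(line, sizeof(line), fp) != NULL) {
                printf("rank 1 read: %s", line);
            }
            fclose(fp);
        }
    }

    MPI_Finalize();
    return 0;
}

Another common way to stay within these rules is simply to have each compute node write to its own file, for example by embedding its rank in the file name, so that overlapping writes never occur.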