A typical Cplant PBS set-up will have the PBS server and scheduler running on one service node, and one PBS MOM on each of several other service nodes. The bebopd node allocator will be on still another service node.
A small Cplant may have just one service node, in which case the server, scheduler, MOM, and bebopd all run on that node.
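As an illustration, a cluster with four service nodes might assign the daemons as follows (the hostnames here are hypothetical; pbs_server, pbs_sched, and pbs_mom are the standard names of the PBS daemons):

    service-0:  pbs_server, pbs_sched
    service-1:  pbs_mom
    service-2:  pbs_mom
    service-3:  bebopd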
In any case, the PBS components (both daemons and client programs), as well as a bebopd running in PBSupdate mode, require a runtime directory of PBS configuration files. The configuration files in this directory are identical on every node. In addition, the PBS daemons log information into this directory, so on the nodes where daemons run it must be private, writable storage that persists across reboots.
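As a rough sketch of what this directory holds (the exact contents depend on the PBS release, so treat this listing as illustrative rather than authoritative), a PBS runtime directory typically contains entries such as:

    server_name        name of the host running the PBS server
    pbs_environment    environment made available to jobs
    server_priv/       server configuration and job state
    sched_priv/        scheduler configuration
    mom_priv/          MOM configuration
    server_logs/, sched_logs/, mom_logs/    daemon log files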
When you build and install the PBS code, a template of this directory is created on your build machine. You must then edit certain configuration files so that they contain the hostnames and pathnames specific to your cluster. Complete instructions are in the README file in the Cplant source code repository [3]. We suggest copying the template directory to a location readable by all service nodes. Then, at startup time, if the runtime directory is not set up on a node, a startup script (/cplant/etc/pbs-env) can create it from the template, as sketched below.
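The following is a minimal sketch, in Python for illustration only, of the logic such a startup script performs; the actual /cplant/etc/pbs-env script is the authoritative version, and the template and runtime paths shown here are assumptions you would replace with the locations used on your cluster.

    import os
    import shutil

    # Hypothetical locations; substitute the paths used on your cluster.
    TEMPLATE_DIR = "/cplant/share/pbs-template"   # template readable by all service nodes
    RUNTIME_DIR = "/tmp/pbs/working"              # per-node runtime directory

    def ensure_runtime_dir():
        """Create the PBS runtime directory from the template if it is missing."""
        if os.path.isdir(RUNTIME_DIR):
            # Already set up, e.g. preserved across a reboot on a local disk.
            return
        # The configuration files are identical on every node, so copying
        # the whole template tree is sufficient to set up a new node.
        shutil.copytree(TEMPLATE_DIR, RUNTIME_DIR)

    if __name__ == "__main__":
        ensure_runtime_dir()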
You should know where this runtime directory is on each service node. When you encounter a problem with PBS, the log files in this directory may help you diagnose and solve it.
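For example, if a job fails to start on its compute nodes, the first place to look is usually that day's MOM log. PBS daemons typically write one log file per day, named by date, so the relevant file would be found under mom_logs/ in the runtime directory (e.g., mom_logs/20020115, where the date is of course just an example).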
We typically place the runtime directory at /tmp/pbs/working on the service node. This is on a local disk if the node has one; if not, it is a link to a separate per-node directory on a file server.
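On a diskless service node, the arrangement might look like the following (the file-server path and the node-name placeholder are hypothetical):

    /tmp/pbs/working  ->  /fileserver/pbs/<nodename>/working

This way each service node still has its own private, persistent copy of the runtime directory even though the storage itself is remote.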