NAME

sdown - initiate shutdown sequence on a node or collection of nodes


MODULE

base


SYNOPSIS

sdown [ --help ] [ --man ] [ --db connectstring ] [ --noping ] (--halt|--reboot|--sysrq) [ --force ] <item>...


DESCRIPTION

This script halts or reboots nodes as gracefully as possible. It first pings a node to see whether it is accessible, then logs in to the node and instructs the OS to either halt or reboot using the "shutdown" command.
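The sequence above can be sketched in shell. This is only an illustration: remote_exec is a hypothetical stand-in for the rsh/ssh login sdown performs (here it just prints what it would run), and the ping/echo check is covered separately under NOTES.

```shell
#!/bin/sh
# Sketch of the core sdown step: log in to a node and run "shutdown"
# with the flag matching the requested action.
# remote_exec is a hypothetical stand-in for the real remote login.
remote_exec() {
    node=$1
    shift
    echo "on $node: $*"
}

sdown_one() {
    node=$1
    action=$2    # "halt" or "reboot"
    if [ "$action" = reboot ]; then
        remote_exec "$node" shutdown -r now
    else
        remote_exec "$node" shutdown -h now
    fi
}

sdown_one node01 halt
sdown_one node02 reboot
```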


OPTIONS

--halt Halt the system after OS shutdown.

--reboot Reboot the system after OS shutdown.

--sysrq Attempt to reboot the system using the reboot command of the Linux kernel SysRq interface (which must be enabled in the kernel).

--force When used with --halt, perform a hard power cycle if sdown is unable to log in to the system and shut it down gracefully. When used with --reboot, do the same, and additionally send a boot command to the node's console after power cycling.

--noping Don't check whether the node is up before trying to log in to it. This may speed things up a little and allows sdown to work on systems where the "echo" service is not enabled, but it can result in hung rsh processes.

--db <connectstring> Database type and connection information. For GDBM, use "GDBM:" followed by the filename of the cluster database. For LDAP, the syntax is "LDAP:host:port:dbname".
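The two connect-string forms can be split apart with plain shell parameter expansion; a minimal sketch (the filename and host below are made-up examples, and the field names follow the syntax described above):

```shell
#!/bin/sh
# Split a --db connect string into its fields.
# "GDBM:<file>" has one field after the type; "LDAP:host:port:dbname"
# has three, separated by ":".
parse_db() {
    cs=$1
    type=${cs%%:*}    # text before the first ":"
    rest=${cs#*:}     # everything after it
    case $type in
        GDBM)
            echo "type=GDBM file=$rest" ;;
        LDAP)
            oldIFS=$IFS
            IFS=:
            set -- $rest          # split rest on ":" into $1 $2 $3
            IFS=$oldIFS
            echo "type=LDAP host=$1 port=$2 dbname=$3" ;;
        *)
            echo "unknown connect string: $cs" >&2
            return 1 ;;
    esac
}

parse_db "GDBM:/path/to/cluster.db"
parse_db "LDAP:ldaphost:389:cluster"
```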

--help Print extended usage information.

--man Print this manpage.


NOTES

sdown will accept multiple devices or collections on the command line.

When shutting down collections, leaders in that collection are not shut down, because doing so could cut off access to other nodes in the collection.

Before logging in to the remote host to initiate shutdown, sdown connects to the remote host's "echo" port to check that the host is reachable. Make sure that "echo" is enabled in inetd.conf or xinetd.d. This behavior can be overridden with the "--noping" option.
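The reachability check amounts to a TCP connect to port 7 (the "echo" service). A minimal sketch using bash's /dev/tcp pseudo-device (an assumption for illustration; sdown's actual probe may be implemented differently):

```shell
#!/bin/bash
# Probe a host's "echo" port (TCP 7), the same kind of check sdown
# performs before logging in. Uses bash's /dev/tcp pseudo-device;
# prints "up" if the connect succeeds, "down" otherwise.
echo_up() {
    if ( : <>"/dev/tcp/$1/7" ) 2>/dev/null; then
        echo up
    else
        echo down
    fi
}

echo_up 127.0.0.1    # "down" unless an echo service answers locally
```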


FILES

The default paths for the cluster configuration database, cloned directories, supporting libraries, and other settings are recorded in the CConf.pm config file. Set the environment variable CLUSTER_CONFIG to the location of CConf.pm, or use the default of /cluster/config.


SEE ALSO

boot, console, lookup, power, status