
System Startup and Shutdown

During system startup, or bootup, the boot process goes through the following phases:

  1. Boot PROM phase. After you turn on power to the system, the PROM displays system identification information and runs self-test diagnostics to verify the system's hardware and memory. It then loads the primary boot program, called bootblk.

  2. Boot program phase. The bootblk program finds the secondary boot program (called ufsboot) in the UFS file system and loads it into memory. After the ufsboot program is loaded, it loads the two-part kernel.

  3. Kernel initialization phase. The kernel initializes itself and begins loading modules, using ufsboot to read the files. When the kernel has loaded enough modules to mount the root file system, it unmaps the ufsboot program and continues, using its own resources.

  4. init phase. The kernel starts the Unix operating system, mounts the necessary file systems, and runs /sbin/init to bring the system to the initdefault state specified in /etc/inittab.

     The kernel creates a user process and starts the /sbin/init process, which starts other processes by reading the /etc/inittab file.

     The /sbin/init process starts the run control (rc) scripts, which execute a series of other scripts. These scripts (/sbin/rc*) check and mount file systems, start various processes, and perform system maintenance tasks.

  5. svc.startd phase. The svc.startd daemon starts the system services and boots the system to the appropriate milestone.

OpenBoot Environment

The hardware-level user interface that you see before the operating system starts is called the OpenBoot PROM (OBP). The primary tasks of the OpenBoot firmware are as follows:

  • Test and initialize the system hardware.

  • Determine the hardware configuration.

  • Start the operating system from either a mass storage device or a network.

  • Provide interactive debugging facilities for testing hardware and software.

  • Allow modification and management of system startup configuration, such as NVRAM parameters.

Specifically, the following tasks are necessary to initialize the operating system kernel:

  1. OpenBoot displays system identification information and then runs self-test diagnostics to verify the system's hardware and memory. These checks are known as the power-on self-test (POST).

  2. OpenBoot loads the primary startup program, bootblk, from the default startup device.

  3. The bootblk program finds and executes the secondary startup program, ufsboot, and loads it into memory. The ufsboot program loads the operating system kernel.

A device tree is a series of node names separated by slashes (/). The top of the device tree is the root device node. Following the root device node, and separated by a leading slash (/), is a bus nexus node. Connected to a bus nexus node is a leaf node, which is typically a controller for the attached device. Each device pathname has this form:

driver-name@unit-address:device-arguments
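
For example, the following pathname identifies slice a of a SCSI disk at target 3 on an SBus-based system (the node names shown are illustrative and vary by platform):

/sbus@1f,0/esp@0,40000/sd@3,0:a

Here, sd is the driver name, 3,0 is the unit address (SCSI target 3, logical unit 0), and a is the device argument that identifies the disk slice.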

Nodes are attached to a host computer through a hierarchy of interconnected buses on the device tree. OpenBoot deals directly with the hardware devices in the system. Each device has a unique name that represents both the type of device and the location of that device in the device tree. The OpenBoot firmware builds a device tree for all devices from information gathered at the POST. Sun uses the device tree to organize devices that are attached to the system.

Device pathnames tend to get very long; therefore, the OpenBoot environment provides a way to assign shorter names to long device pathnames. These shortened names are called device aliases, and they are assigned using the devalias command. Table 7 describes the devalias command, which is used to examine, create, and change OpenBoot aliases.

Table 7. devalias Commands

Command                           Description
devalias                          Displays all current device aliases.
devalias <alias>                  Displays the device pathname corresponding to alias.
devalias <alias> <device-path>    Defines an alias representing device-path.

When the kernel is loading, it reads the /etc/system file, where system configuration information is stored. This file modifies the kernel's parameters and treatment of loadable modules. It specifically controls the following (a sample file appears after this list):

  • The search path for default modules to be loaded at boot time as well as the modules not to be loaded at boot time

  • The modules to be forcibly loaded at boot time rather than at first access

  • The root type and device

  • The new values to override the default kernel parameter values
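
The following is a minimal, hypothetical /etc/system illustrating each of these controls; the module names, device path, and values are examples only, not recommendations (comment lines begin with an asterisk):

* Directories to search for loadable modules at boot time
moddir: /kernel /usr/kernel
* Force the sd driver to load at boot rather than at first access
forceload: drv/sd
* Root file system type and root device
rootfs:ufs
rootdev:/sbus@1f,0/esp@0,40000/sd@3,0:a
* Override a default kernel parameter value
set maxusers=64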

Various parameters are used to control the OpenBoot environment. Any user can view the OpenBoot configuration variables from a Unix prompt by typing the following:

/usr/sbin/eeprom
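
The output is a list of variable=value pairs, one per line; variables with no assigned value are reported as unavailable. An abbreviated, illustrative sample follows:

auto-boot?=true
boot-device=disk net
diag-switch?=false

The superuser can also assign values with the same command, for example, /usr/sbin/eeprom auto-boot?=false.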

OpenBoot can be used to gather and display information about your system with the commands described in Table 8.

Table 8. OpenBoot Commands

Command      Description
banner       Displays the power-on banner
show-sbus    Displays a list of installed and probed SBus devices
.enet-addr   Displays the current Ethernet address
.idprom      Displays ID PROM contents, formatted
.traps       Displays a list of SPARC trap types
.version     Displays the version and date of the startup PROM
.speed       Displays CPU and bus speeds
show-devs    Displays all installed and probed devices
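
For example, the banner command produces output similar to the following (the hardware details shown are illustrative):

ok banner
Sun Ultra 5/10 UPA/PCI (UltraSPARC-IIi 360MHz), No Keyboard
OpenBoot 3.31, 256 MB memory installed, Serial #12345678.
Ethernet address 8:0:20:a0:b1:c2, Host ID: 80a0b1c2.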


In addition, various hardware diagnostics can be run in OpenBoot to troubleshoot hardware and network problems.

The operating system is booted from the OpenBoot prompt using the boot command. You can supply several options to the OpenBoot boot command at the ok prompt. Table 9 describes each of these.

Table 9. boot Command Options

Option    Description
-a        An interactive boot
-r        A reconfiguration boot
-s        A single-user boot
-v        A verbose-mode boot
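
For example, to boot into single-user mode so that you can perform maintenance tasks, type the following at the ok prompt:

ok boot -s

Options can also be combined; boot -rs, for instance, requests a reconfiguration boot directly into single-user mode.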


The following list describes the steps for booting interactively:

1. At the ok prompt, type boot -a and press Enter. The boot program prompts you interactively.

2. Press Enter to use the default kernel (/kernel/unix) as prompted, or type the name of the kernel to use for booting and press Enter.

3. Press Enter to use the default modules directory path as prompted, or type the path for the modules directory and press Enter.

4. Press Enter to use the default /etc/system file as prompted, or type the name of the system file and press Enter.

5. Press Enter to use the default root file system type as prompted (UFS for local disk booting, or NFS for diskless clients).

6. Press Enter to use the default physical name of the root device as prompted, or type the device name.
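
An interactive boot session resembles the following transcript; the exact prompts and default values vary by Solaris release and platform, so treat this as illustrative only:

ok boot -a
...
Enter filename [kernel/unix]: <Press Enter>
Enter default directory for modules [/platform/sun4u/kernel /kernel /usr/kernel]: <Press Enter>
Name of system file [etc/system]: <Press Enter>
root filesystem type [ufs]: <Press Enter>
Enter physical name of root device
[/sbus@1f,0/esp@0,40000/sd@3,0:a]: <Press Enter>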

The Kernel

After the boot command initiates the kernel, the kernel begins several phases of the startup process. The first task is to load the two-part kernel: the secondary startup program, ufsboot, which is described in the preceding section, loads the operating system kernel. The core of the kernel is two pieces of static code called genunix and unix. genunix is the platform-independent generic kernel file, and unix is the platform-specific kernel file. When the system boots, ufsboot loads these two files into memory, where they are combined to form the running kernel.
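
On a SPARC system, these two files typically reside in locations such as the following (paths shown are for a 64-bit sun4u platform; the platform directory on your system may differ):

/platform/sun4u/kernel/sparcv9/unix
/kernel/sparcv9/genunix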

The kernel initializes itself and begins loading modules, using ufsboot to read the files. After the kernel has loaded enough modules to mount the root file system, it unmaps the ufsboot program and continues, using its own resources. The kernel creates a user process and starts the /sbin/init process.

During the init phase of the boot process, the init daemon (/sbin/init) reads the /etc/default/init file to set any environment variables. By default, only the TIMEZONE variable is set. Then, init reads the /etc/inittab file and executes any process entries that have sysinit in the action field, so that any special initializations can take place before users log in.

After reading the /etc/inittab file, init starts the svc.startd daemon, which is responsible for starting and stopping other system services such as mounting file systems and configuring network devices. In addition, svc.startd will execute legacy run control (rc) scripts, which are described later in this section.

The kernel is dynamically configured in Solaris 10. It consists of a small static core and many dynamically loadable kernel modules. Many kernel modules are loaded automatically at boot time, but for efficiency, others, such as device drivers, are loaded from the disk as needed by the kernel.

When the kernel is loading, it reads the /etc/system file where system configuration information is stored. This file modifies the kernel's parameters and treatment of loadable modules.

After control of the system is passed to the kernel, the system begins initialization and starts the svc.startd daemon. In Solaris 10, the svc.startd daemon replaces the init process as the master process starter and restarter. Whereas in previous versions of Solaris, init would start all processes and bring the system to the appropriate "run level" or "init state," now SMF, or more specifically the svc.startd daemon, assumes the role of starting system services.

The service instance is the fundamental unit of administration in the SMF framework, and each SMF service can have multiple instances configured. An instance is a specific configuration of a service, and multiple instances of the same service can run in the Solaris operating environment.

The services started by svc.startd are referred to as milestones. The milestone concept replaces the traditional run levels that were used in previous versions of Solaris. A milestone is a special type of service that represents a group of SMF services. For example, the services that constituted run levels S, 2, and 3 in previous versions of Solaris are now represented by the following milestone services:

milestone/single-user (equivalent to run level S)

milestone/multi-user (equivalent to run level 2)

milestone/multi-user-server (equivalent to run level 3)

Other milestones that are available in the Solaris 10 OE are as follows:

     milestone/name-services

     milestone/devices

     milestone/network

     milestone/sysconfig
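
A milestone can also serve as a boot target or an administrative target. For example, at the ok prompt, boot -m milestone=single-user boots the system only as far as the single-user milestone, and on a running system the svcadm command (covered in Chapter 3) can restrict the system to a given milestone:

# svcadm milestone milestone/multi-user:default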

An SMF manifest is an XML (Extensible Markup Language) file that contains a complete set of properties that are associated with a service or a service instance. The properties are stored in files and subdirectories located in /var/svc/manifest.
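
For example, the manifests for the milestone services listed previously live under the milestone subdirectory; the filenames below are typical of a Solaris 10 system:

# ls /var/svc/manifest/milestone
multi-user-server.xml  name-services.xml  single-user.xml
multi-user.xml         network.xml        sysconfig.xml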

SMF provides a set of command-line utilities, described in Chapter 3, that are used to administer and configure the SMF.
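
As a brief preview, the svcs utility reports the state of service instances; for example, the following (illustrative) output shows that the multi-user milestone is online:

$ svcs milestone/multi-user
STATE          STIME    FMRI
online         09:38:22 svc:/milestone/multi-user:default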

A run level is a system state (run state), represented by a number or letter, that identifies the services and resources that are currently available to users. The who -r command can still be used to identify a system's run state, as follows:

who -r

The system responds with the following, indicating that run-level 3 is the current run state:

.      run-level 3  Aug  4 09:38     3     1  1

Since the introduction of SMF in Solaris 10, these run states are referred to as milestones; Chapter 3 describes how the legacy run states correspond to the Solaris 10 milestones.

Commands to Shut Down the System

When preparing to shut down a system, you need to determine which of the following commands is appropriate for the system and the task at hand:

/usr/sbin/shutdown

/sbin/init

/usr/sbin/halt

/usr/sbin/reboot

/usr/sbin/poweroff

Stop+A or L1+A (to be used as a last resort)
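
For example, the following shutdown command, run as root, brings the system to run level 0 after a 60-second grace period without prompting for confirmation:

# /usr/sbin/shutdown -i0 -g60 -y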

