This is an excerpt from Chapter 17, "Administering Solaris Zones," of Oracle Solaris 11 System Administration Exam Guide by Michael Ernest (McGraw-Hill Professional; 2013) with
permission from McGraw-Hill.
You are about to learn how to create and configure a Solaris Zone, observe it in its various states of being, and manage it as it runs on your system. For those of you who need to get on with learning about other virtualization techniques available on Solaris and other operating systems, learning how zones work will give you a solid conceptual foundation for those subjects.
You've been learning about zones already -- or at least the key parts that make up one. For example, you've learned to configure projects, an abstraction for treating a group of processes as a workload. You've learned how to set limits with process, task, and project resource controls, and you've learned about global resource controls such as memory caps, process scheduling, and CPU provisioning. You can create a new ZFS file system with its own property controls for any occasion you deem fit. And you know that you can now create virtual NICs whenever you need a new logical network resource.
Each of these subsystems contributes a common quality, isolation, to zones. Projects and resource controls let you separate a workload from other processes in your system. ZFS file systems give you configuration management as well as a separate namespace for files. VNICs let you create a network presence that looks like another station on the network.
Congratulations if you're now asking, What else could I possibly need to create a zone? You've come to exactly the right place.
Understanding Solaris Zones
Let's cut right to it: What is a Solaris Zone? If I'm talking loosely, I'd say it's a special kind of project that runs in its own boot environment (BE). That is, it runs its own operating system and its own file systems. It maintains its own process scheduler, its own service repository, and it can support its own network services. There are a few subtler elements as well that can make it virtually separate from the machine-level operating system that supports it. From now on, I'll refer to this machine-level environment as the global zone.
A zone takes the isolation concept a step further by partitioning its operations from all other processes running on the same machine, whether those processes run in the global zone or in another non-global zone. This additional level of separation means you can run multiple versions of some service, such as an HTTPd server, by using zones as separate process- and namespaces. It means you can run a slew of processes in one zone without fear that misconfiguration, process failures, or other anomalies can cause failures in other zones, including the global zone.
But let's back up for a minute. A lot of computing technology in the last decade, and all of this one so far, has focused heavily on virtualization. When Solaris Zones debuted in 2005, a lot of people asked how they related to virtual machine technology such as VMware. It became common to pitch zones using illustrations such as Figure 17-1.
This figure suggests the roles in a traditional multitiered environment could be fulfilled by zones running on one machine instead of running on separate physical systems. If each software package here supports its own HTTP and HTTPS services, it's no problem. Each zone can have its own network stack (and therefore its own range of ports), its own resource configuration, and so on. This picture is appealing for managers who want to consolidate their software processes, but still lots of people ask, What other operating systems can you run in a zone?
The short answer: not too many. In Solaris 11, zones running either late update versions of Solaris 10 or Solaris 11 are supported. A zone does not run on a hypervisor as many virtual machine implementations do. The analogous term sometimes applied to the global zone is supervisor, meant to suggest a facility that is much closer in form to a custom BE manager than to a resource provider for different operating system instances. In Solaris 10, there was some support for Solaris Zones running an instance of Linux, but it didn't seem to attract an audience. Bottom line: Thinking of zones as a kind of virtualization misses the point, at least in my view.
What you can create instead are as many Solaris instances as your current hardware will support; that's not a small benefit. Each one can run a workload that is, from a resource perspective, unrelated to other workloads. More to the point, it can be configured specifically for the workload(s) that it hosts: a database, an HTTP server, a batch-processing application, or something else. Many resource settings that once had to be made in the global /etc/system file are now applied to a zone, thus removing the implied dependency of all processes on global settings. Consolidation, after all, is not much of a benefit if your database has to work with the same resource configuration as your HTTP server.
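To make that concrete, resource limits that once had to go in the global /etc/system file can be applied per zone with the zonecfg utility. Here's a sketch of capping a zone's physical memory; the zone name dbzone and the specific values are hypothetical:

```shell
# zonecfg -z dbzone
zonecfg:dbzone> add capped-memory
zonecfg:dbzone:capped-memory> set physical=4g    # cap resident memory at 4 GB
zonecfg:dbzone:capped-memory> end
zonecfg:dbzone> commit
zonecfg:dbzone> exit
```

The database zone and the HTTP server zone can each carry settings like these independently, which is the point of the consolidation argument above.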
Zone isolation doesn't just protect the workload, either. All non-global zones rely on the global zone for kernel resources. Those resources are shared in the sense that every zone relies on them, but they are also moderated in the sense that only the global zone permits every other zone access to them. In addition, Solaris 11 extends the idea of a zone as a separate user space with what is called delegated administration. Each zone can be configured for access by a specific non-root user in the global zone. Each zone's resources derive from the global zone, but the zone is otherwise reserved for its user(s) as a separate process space they can use without fear of conflicting with other non-global zones.
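Delegated administration is configured through zonecfg's admin resource. A minimal sketch, assuming a hypothetical zone named dbzone and a hypothetical global-zone user jsmith:

```shell
# zonecfg -z dbzone
zonecfg:dbzone> add admin
zonecfg:dbzone:admin> set user=jsmith          # global-zone account to delegate to
zonecfg:dbzone:admin> set auths=login,manage   # allow zlogin plus boot/halt control
zonecfg:dbzone:admin> end
zonecfg:dbzone> commit
zonecfg:dbzone> exit
```

With that in place, jsmith can log in to and manage dbzone without holding root in the global zone.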
Understanding the Global Zone
The global zone is the operating system instance that manages your physical operating environment (or virtualized one). It therefore has certain privileges and capabilities that other zones you create will not have. For one, only the global zone can host and manage another zone. You cannot create a zone inside a non-global zone. Also, although non-global zones share many aspects of a BE, you cannot boot off the hardware using one. Non-global zones can only run when the global zone runs. A non-global zone configuration, in one sense, is just a list of resource requests passed to the global zone at startup time. The global zone validates each request before initiating the zone's boot. Some resource requests may even fail without causing the zone boot to fail too. The general actions you can perform include:
- Adding and deleting zone configurations
- Installing and removing zone files
- Booting, halting, restarting, and shutting down zones
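These lifecycle actions map onto the zoneadm utility. A sketch of the common operations, assuming a hypothetical zone named dbzone:

```shell
# zoneadm list -cv              # list all configured zones and their current states
# zoneadm -z dbzone boot        # boot the zone
# zoneadm -z dbzone shutdown    # orderly shutdown via the zone's own shutdown sequence
# zoneadm -z dbzone halt        # abrupt stop, no orderly shutdown
# zoneadm -z dbzone reboot      # halt and boot in one step
```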
You use the zonecfg utility to create, modify, and delete zones (we'll examine this tool in the next section). When you create a new zone, its configuration is stored in the /etc/zones directory as an XML file called a manifest.
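For illustration, here's the sort of thing you might find in /etc/zones on a freshly installed Solaris 11 system; the exact file names vary by release, so treat this listing as representative rather than exact:

```shell
# ls /etc/zones
SYSblank.xml      SYSdefault.xml    SYSsolaris10.xml  index
```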
This directory includes an index file that records the name and current installation state of every zone. The global zone is listed in the index but is not represented with an XML manifest by default. If, however, you use the zonecfg utility to add resource controls to the global zone, it will write a global.xml file that reflects the changes. Other XML files that start with SUNW or SYS are templates for stock configurations, including legacy Solaris 10 types.
Zone manifests can be modified at any time. However, changes made to a manifest do not take effect in a running zone; applying them requires a zone reboot. Resource control changes are different: any control you can modify using the rctladm and prctl utilities will take effect on a running zone.
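As a sketch of such a live change, here's prctl adjusting a running zone's CPU shares from the global zone; the zone name dbzone and the share value are hypothetical:

```shell
# prctl -n zone.cpu-shares -v 20 -r -i zone dbzone   # replace the current value with 20 shares
# prctl -n zone.cpu-shares -i zone dbzone            # verify the new setting
```

The change applies immediately to the running zone, but because it bypasses the manifest, it does not survive a zone reboot unless you also record it with zonecfg.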
This was first published in September 2013