{{#pagetitle: A RHEL 8 Container Tutorial using Podman, Skopeo and Buildah }}
<table border="0" cellspacing="0" width="100%"><tr>
<td width="20%">[[An Introduction to Linux Containers on RHEL|Previous]]<td align="center">[[Red Hat Enterprise Linux Essentials|Table of Contents]]<td width="20%" align="right">[[Setting Up a RHEL Web Server|Next]]</td>
<tr>
<td width="20%">An Introduction to Linux Containers on RHEL 8<td align="center"><td width="20%" align="right">Setting Up a RHEL 8 Web Server</td>
</table>
<hr>




Now that the basics of Linux Containers have been covered in the previous chapter, the goal of this chapter is to demonstrate how to create and manage containers using the Podman, Skopeo and Buildah tools included with RHEL 8. It is intended that by the end of this chapter you will have a clearer understanding of how to create and manage containers on RHEL 8 and will have gained a knowledge foundation on which to continue exploring the power of Linux Containers.

== Logging in to the Red Hat Container Registry ==

To begin with, a container will be created using an existing image provided within the Red Hat Container Registry. Before an image can be pulled from the registry to the local system, you must first log into the registry using your existing Red Hat credentials using the ''podman'' tool as follows:

<pre>
# podman login registry.redhat.io
Username: yourusername
Password: yourpassword
Login Succeeded!
</pre>
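
If necessary, the currently active login for a registry can be verified at any time by asking ''podman'' to report the logged in username. The ''--get-login'' option used below is assumed to be available in the version of ''podman'' shipped with RHEL 8:

<pre>
# podman login --get-login registry.redhat.io
yourusername
</pre>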

== Pulling a Container Image ==


For this example, the RHEL 8 Base image will be pulled from the registry. Before pulling an image, however, information about the image can be obtained using the ''skopeo'' tool, for example:

<pre>
# skopeo inspect docker://registry.redhat.io/rhel8-beta/rhel
{
    &quot;Name&quot;: &quot;registry.redhat.io/rhel8-beta/rhel&quot;,
    &quot;Digest&quot;: &quot;sha256:4886c1acd710c40217ef3a072cfe66faf0f77b5944abbf8a9b89ff12d1521982&quot;,
    &quot;RepoTags&quot;: [
        &quot;8.0&quot;,
        &quot;8.0-760&quot;,
        &quot;latest&quot;
    ],
    &quot;Created&quot;: &quot;2018-11-13T18:11:21.184832Z&quot;,
    &quot;DockerVersion&quot;: &quot;1.13.1&quot;,
    &quot;Labels&quot;: {
        &quot;architecture&quot;: &quot;x86_64&quot;,
        &quot;authoritative-source-url&quot;: &quot;registry.access.redhat.com&quot;,
        &quot;build-date&quot;: &quot;2018-11-13T18:10:51.964878&quot;,
        &quot;com.redhat.build-host&quot;: &quot;cpt-0004.osbs.prod.upshift.rdu2.redhat.com&quot;,
        &quot;com.redhat.component&quot;: &quot;rhel-base-container&quot;,
        &quot;description&quot;: &quot;The Red Hat Enterprise Linux Base image is designed to be a fully supported foundation for your containerized applications. This base image provides your operations and application teams with the packages, language runtimes and tools necessary to run, maintain, and troubleshoot all of your applications. This image is maintained by Red Hat and updated regularly. It is designed and engineered to be the base layer for all of your containerized applications, middleware and utilites. When used as the source for all of your containers, only one copy will ever be downloaded and cached in your production environment. Use this image just like you would a regular Red Hat Enterprise Linux distribution. Tools like yum, gzip, and bash are provided by default. For further information on how this image was built look at the /root/anacanda-ks.cfg file.&quot;,
.
.
</pre>

Having verified that this is indeed the correct image, the following ''podman'' command will pull the image to the local system:

<pre>
# podman pull registry.redhat.io/rhel8-beta/rhel
Trying to pull registry.redhat.io/rhel8-beta/rhel...Getting image source signatures
Copying blob sha256:619051b1fc41546ce2c2d6911145f66472d72caf7a4aaf8ffcad782f27227e9c
66.48 MB / 66.48 MB [=====================================================] 16s
Copying blob sha256:386105ae8b6231e5c25160d9a40bec1da1fdb822455f6e3094bef2b6e877d865
1.33 KB / 1.33 KB [========================================================] 0s
Copying config sha256:a80dad1c19537b0961e485dfa0a43fbe3c0a71cec2cca32d3264e087e396a211
6.33 KB / 6.33 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
a80dad1c19537b0961e485dfa0a43fbe3c0a71cec2cca32d3264e087e396a211
</pre>

Verify that the image has been stored by asking ''podman'' to list all local images:

<pre>
# podman images
REPOSITORY                           TAG      IMAGE ID       CREATED        SIZE
registry.redhat.io/rhel8-beta/rhel   latest   a80dad1c1953   4 months ago   210MB
</pre>

Details about a local image may be obtained by running the ''podman inspect'' command:

<pre>
# podman inspect registry.redhat.io/rhel8-beta/rhel
</pre>

This command should output the same information as the ''skopeo'' command performed on the remote image earlier in this chapter.
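
When only a specific item of information is required, ''podman inspect'' can also be passed a Go template via the ''--format'' option. The following example, for instance, should output just the image digest (the ''Digest'' field name is taken from the JSON output shown above):

<pre>
# podman inspect --format '{{.Digest}}' registry.redhat.io/rhel8-beta/rhel
</pre>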

== Running the Image in a Container ==

The image pulled from the registry is a fully operational image that is ready to run in a container without modification. To run the image, use the ''podman run'' command. In this case the ''--rm'' option will be specified so that the container runs a single command and is then automatically removed when that command exits. Here the ''cat'' tool will be used to output the content of the ''/etc/passwd'' file located on the container root filesystem:

<pre>
# podman run --rm registry.redhat.io/rhel8-beta/rhel cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
systemd-coredump:x:999:997:systemd Core Dumper:/:/sbin/nologin
systemd-resolve:x:193:193:systemd Resolver:/:/sbin/nologin
</pre>

Compare the content of the ''/etc/passwd'' file within the container with the ''/etc/passwd'' file on the host system and note that it lacks all of the additional users that are present on the host, confirming that the ''cat'' command was executed within the container environment. Also note that the container started, ran the command and exited all within a matter of seconds. Compare this to the amount of time it takes to start a full operating system, perform a task and shut down a virtual machine, and you begin to appreciate the speed and efficiency of containers.
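
One quick way to make this comparison, for example, is to count the number of entries in each copy of the file. The exact counts will differ from system to system, but the first of the following commands will typically report far fewer entries than the second:

<pre>
# podman run --rm registry.redhat.io/rhel8-beta/rhel wc -l /etc/passwd
# wc -l /etc/passwd
</pre>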

To launch a container, keep it running and access the shell, the following command can be used:

<pre>
# podman run --name=mycontainer -it registry.redhat.io/rhel8-beta/rhel /bin/bash
bash-4.4#
</pre>

Note that an additional command-line option is used to assign the name “mycontainer” to the container. Though optional, this makes the container easier to recognize and reference as an alternative to using the automatically generated container ID.

While the container is running, run ''podman'' in a different terminal window to see the status of all containers on the system:

<pre>
# podman ps -a
CONTAINER ID   IMAGE                                       COMMAND     CREATED          STATUS              PORTS   NAMES         IS INFRA
2bc48881067d   registry.redhat.io/rhel8-beta/rhel:latest   /bin/bash   43 seconds ago   Up 42 seconds ago           mycontainer   false
</pre>

To execute a command in a running container from the host, simply use the ''podman exec'' command, referencing the name of the running container and the command to be executed. The following command, for example, starts up a second bash session in the container named ''mycontainer'':

<pre>
# podman exec -it mycontainer /bin/bash
bash-4.4#
</pre>

Note that though the above example referenced the container name, the same result can be achieved using the container ID as listed by the ''podman ps -a'' command:

<pre>
# podman exec -it 2bc48881067d /bin/bash
bash-4.4#
</pre>

Alternatively, the ''podman attach'' command can be used to attach to a running container and access its shell prompt:

<pre>
# podman attach mycontainer
bash-4.4#
</pre>

Once the container is up and running, any additional configuration changes can be made and packages installed just like any other RHEL 8 system.
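
For example, assuming the container has network access and the necessary package repositories are available to it, additional packages may be installed from the container shell using ''dnf'' in the usual way (the ''procps-ng'' package is used here purely for illustration):

<pre>
bash-4.4# dnf install procps-ng
</pre>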

== Managing a Container ==

Once launched, a container will continue to run until it is stopped via ''podman'', or the command that was launched when the container was run exits. Running the following command on the host, for example, will cause the container to exit:

<pre>
# podman stop mycontainer
</pre>

Alternatively, pressing the Ctrl-D keyboard sequence within the last remaining bash shell of the container would cause both the shell and container to exit. Once it has exited, the status of the container will change accordingly:

<pre>
# podman ps -a
CONTAINER ID   IMAGE                                       COMMAND     CREATED          STATUS                      PORTS   NAMES         IS INFRA
2bc48881067d   registry.redhat.io/rhel8-beta/rhel:latest   /bin/bash   19 minutes ago   Exited (0) 51 seconds ago           mycontainer   false
</pre>
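
Even though the container has exited, any output that it generated while it was running can still be reviewed from the host using the ''podman logs'' command, for example:

<pre>
# podman logs mycontainer
</pre>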

Although the container is no longer running, it still exists and contains all of the changes that were made to the configuration and file system. If you installed packages, made configuration changes or added files, these changes will persist within “mycontainer”. To verify this, simply restart the container as follows:

<pre>
# podman start mycontainer
</pre>

After starting the container, use the ''podman exec'' command once again to execute commands within the container as outlined previously. For example, to once again gain access to a shell prompt:

<pre>
# podman exec -it mycontainer /bin/bash
</pre>

A running container may also be paused and resumed using the ''podman pause'' and ''unpause'' commands as follows:

<pre>
# podman pause mycontainer
# podman unpause mycontainer
</pre>

== Saving a Container to an Image ==

Once the container guest system is configured to your requirements there is a good chance that you will want to create and run more than one container of this particular type. To do this, the container needs to be saved as an image to local storage so that it can be used as the basis for additional container instances. This is achieved using the ''podman commit'' command combined with the name or ID of the container and the name by which the image will be stored, for example:

<pre>
# podman commit mycontainer myrhel_image
</pre>

Once the image has been saved, check that it now appears in the list of images in the local repository:

<pre>
# podman images
REPOSITORY                           TAG      IMAGE ID       CREATED              SIZE
localhost/myrhel_image               latest   15b7788c7e6b   About a minute ago   246MB
registry.redhat.io/rhel8-beta/rhel   latest   a80dad1c1953   4 months ago         210MB
</pre>

The saved image can now be used to create additional containers identical to the original:

<pre>
# podman run --name=mycontainer2 -it localhost/myrhel_image /bin/bash
</pre>

== Removing an Image from Local Storage ==

To remove an image from local storage once it is no longer needed, simply run the ''podman rmi'' command, referencing either the image name or ID as output by the ''podman images'' command. For example, to remove the image named ''myrhel_image'' created in the previous section, run ''podman'' as follows:

<pre>
# podman rmi localhost/myrhel_image
</pre>

Note that before an image can be removed, any containers based on that image must first be removed.

== Removing Containers ==

Even when a container has exited or been stopped, it still exists and can be restarted at any time. If a container is no longer needed, it can be deleted using the ''podman rm'' command as follows after the container has been stopped:

<pre>
# podman rm mycontainer2
</pre>
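
Alternatively, the ''-a'' flag may be passed to ''podman rm'' to remove all containers in a single step (any containers that are still running will first need to be stopped):

<pre>
# podman rm -a
</pre>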

== Building a Container with Buildah ==

Buildah allows new containers to be built either from an existing container, from an image or entirely from scratch. Buildah also includes the ability to mount the file system of a container so that it can be accessed and modified from the host.

The following ''buildah'' command, for example, will build a container from the RHEL 8 Base image (if the image has not already been pulled from the registry, ''buildah'' will download it before creating the container):

<pre>
# buildah from registry.redhat.io/rhel8-beta/rhel
</pre>

The result of running this command will be a container named ''rhel-working-container'' that is ready to run:

<pre>
# buildah run rhel-working-container cat /etc/passwd
</pre>

== Building a Container from Scratch ==

Building a container from scratch essentially creates an empty container. Once created, packages may be installed to meet the requirements of the container. This approach is useful when creating a container that needs only a minimal set of packages installed.

The first step in building from scratch is to run the following command to build the empty container:

<pre>
# buildah from scratch
working-container
</pre>

After the build is complete, a new container will have been created named ''working-container'':

<pre>
# buildah containers
CONTAINER ID   BUILDER   IMAGE ID       IMAGE NAME                                  CONTAINER NAME
5d426700faac   *         a80dad1c1953   registry.redhat.io/rhel8-beta/rhel:latest   rhel-working-container
3ec997ea2a87   *                        scratch                                     working-container
</pre>

The empty container is now ready to have some packages installed. Unfortunately this cannot be performed from within the container because not even the ''bash'' or ''dnf'' tools exist at this point. Instead, the container filesystem needs to be mounted on the host system and the packages installed using ''dnf'' with the system root set to the mounted container filesystem. Begin this process by mounting the container’s filesystem as follows:

<pre>
# buildah mount working-container
/var/lib/containers/storage/overlay/20b46cf0e2994d1ecdc4487b89f93f6ccf41f72788da63866b6bf80984081d9a/merge
</pre>

If the file system was successfully mounted, ''buildah'' will output the mount point for the container file system. Now that we have access to the container filesystem, the ''dnf'' command can be used to install packages into the container using the ''--installroot'' option to point to the mounted container file system. The following command, for example, installs the bash, coreutils and dnf packages on the container filesystem (where ''&lt;container_fs_mount&gt;'' is the mount path output previously by the ''buildah mount'' command):

<pre>
# dnf install --installroot &lt;container_fs_mount&gt; bash coreutils dnf
</pre>
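
Note that when installing into an empty filesystem in this way, ''dnf'' may be unable to detect the target release version from the installroot. If this occurs, the release can be declared explicitly using the ''--releasever'' option, for example:

<pre>
# dnf install --releasever=8 --installroot &lt;container_fs_mount&gt; bash coreutils dnf
</pre>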

After the installation completes, unmount the scratch filesystem as follows:

<pre>
# buildah umount working-container
</pre>

With the packages installed and the filesystem unmounted, the container can be run and a bash command prompt accessed as follows:

<pre>
# buildah run working-container bash
bash-4.4#
</pre>
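
If the scratch container is to be used as the basis for further containers, it can at this point be saved to an image using the ''buildah commit'' command, in much the same way that ''podman commit'' was used earlier in the chapter (the image name ''myscratch_image'' used below is arbitrary):

<pre>
# buildah commit working-container myscratch_image
</pre>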

== Container Bridge Networking ==

As outlined in the previous chapter, container networking is implemented using the Container Networking Interface (CNI) bridged network stack. The following command shows the typical network configuration on a host system on which containers are running:

<pre>
# ip a
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:20:dc:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.33/24 brd 192.168.0.255 scope global dynamic noprefixroute enp0s3
       valid_lft 3453sec preferred_lft 3453sec
    inet6 2606:a000:4307:f000:aa6:6da1:f8a9:5f95/64 scope global dynamic noprefixroute
       valid_lft 3599sec preferred_lft 3599sec
    inet6 fe80::4275:e186:85e2:d81f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: cni0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7e:b6:04:22:4f:22 brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.1/16 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::7cb6:4ff:fe22:4f22/64 scope link
       valid_lft forever preferred_lft forever
12: veth2a07dc55@if3: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue master cni0 state UP group default
    link/ether 42:0d:69:13:89:af brd ff:ff:ff:ff:ff:ff link-netns cni-61ba825e-e596-b2ef-a59f-b0743025e448
    inet6 fe80::400d:69ff:fe13:89af/64 scope link
       valid_lft forever preferred_lft forever
</pre>

In the above example, the host has an interface named enp0s3 which is connected to the external network with an IP address of 192.168.0.33. In addition, a virtual bridge interface named cni0 has been created and assigned the IP address 10.88.0.1. Running the same ''ip'' command within a container running on the host might result in the following output:

<pre>
# ip a
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if12: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default
    link/ether 3e:52:22:4b:e0:d8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.88.0.28/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::3c52:22ff:fe4b:e0d8/64 scope link
       valid_lft forever preferred_lft forever
</pre>

In this case, the container has an IP address of 10.88.0.28. Running the ''ping'' command on the host will verify that the host and containers are indeed on the same subnet:

<pre>
# ping 10.88.0.28
PING 10.88.0.28 (10.88.0.28) 56(84) bytes of data.
64 bytes from 10.88.0.28: icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from 10.88.0.28: icmp_seq=2 ttl=64 time=0.039 ms
.
.
</pre>
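
The IP address assigned to a container can also be obtained from the host without opening a shell within the container by passing a format template to ''podman inspect''. The field path used below is assumed to match the container inspect output and may differ between ''podman'' versions; for the container in this example it should report the 10.88.0.28 address shown above:

<pre>
# podman inspect --format '{{.NetworkSettings.IPAddress}}' mycontainer
</pre>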

The CNI configuration settings can be found in the ''/etc/cni/net.d/87-podman-bridge.conflist'' file on the host system which, by default, will read as follows:

<pre>
{
    &quot;cniVersion&quot;: &quot;0.3.0&quot;,
    &quot;name&quot;: &quot;podman&quot;,
    &quot;plugins&quot;: [
        {
            &quot;type&quot;: &quot;bridge&quot;,
            &quot;bridge&quot;: &quot;cni0&quot;,
            &quot;isGateway&quot;: true,
            &quot;ipMasq&quot;: true,
            &quot;ipam&quot;: {
                &quot;type&quot;: &quot;host-local&quot;,
                &quot;subnet&quot;: &quot;10.88.0.0/16&quot;,
                &quot;routes&quot;: [
                    { &quot;dst&quot;: &quot;0.0.0.0/0&quot; }
                ]
            }
        },
        {
            &quot;type&quot;: &quot;portmap&quot;,
            &quot;capabilities&quot;: {
                &quot;portMappings&quot;: true
            }
        }
    ]
}

Changes can be made to this file to alter the subnet address range and, by changing the plugin type (set to ''bridge'' in this example), to implement different network configurations.
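
For example, to have container addresses allocated from a different private range, the ''subnet'' setting within the ''ipam'' section could be edited as follows (the 10.89.0.0/16 range shown here is purely illustrative):

<pre>
&quot;ipam&quot;: {
    &quot;type&quot;: &quot;host-local&quot;,
    &quot;subnet&quot;: &quot;10.89.0.0/16&quot;,
...
</pre>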

== Summary ==

This chapter has worked through the creation and management of Linux Containers on RHEL 8 using the ''podman'', ''skopeo'' and ''buildah'' tools, including the use of container images obtained from a registry and the creation of a new image built entirely from scratch.




<hr>
<table border="0" cellspacing="0" width="100%"><tr>
<td width="20%">[[An Introduction to Linux Containers on RHEL|Previous]]<td align="center">[[Red Hat Enterprise Linux Essentials|Table of Contents]]<td width="20%" align="right">[[Setting Up a RHEL Web Server|Next]]</td>
<tr>
<td width="20%">An Introduction to Linux Containers on RHEL 8<td align="center"><td width="20%" align="right">Setting Up a RHEL 8 Web Server</td>
</table>