DAPS in a Container

DAPS is openSUSE’s “DocBook Authoring and Publishing Suite”, used to build documentation for SUSE and openSUSE. Installing it pulls in a lot of dependencies, and for that reason alone it’s better to run it in a container. This is my image and how I use it.

docker run -v ~/myproject/:/home/user jsevans/daps:latest daps -d DC-project epub

Command Breakdown:

docker run – Run a command in a new container.

-v ~/myproject/:/home/user – Maps the local directory ~/myproject to /home/user in the container. /home/user is the working directory the daps command runs from in this image, so mapping it avoids any extra command-line options.

jsevans/daps:latest – This is the image that I’ve created. It is based on openSUSE Tumbleweed, but it is stable enough for this use. It is, however, a large image (~1.2 GB) due to the number of dependencies.

daps -d DC-project epub – The actual DAPS command for building an EPUB ebook. I use AsciiDoc as my markup language since I don’t really want to learn DocBook.
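
For context, DC-project is a DAPS “doc config” file that tells daps which document to build. A minimal DC file for an AsciiDoc project might look something like the following; the MAIN filename is an illustrative placeholder, not taken from my project:

# DC-project: minimal doc config pointing at the main AsciiDoc file
MAIN="MAIN-project.adoc"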

My Dockerfile:

FROM opensuse/tumbleweed
MAINTAINER Jason Evans <jsevans@opensuse.com>

# Refresh the repos and pull in DAPS with its many dependencies
RUN zypper refresh
RUN zypper --non-interactive in daps git

# Run as an unprivileged user; $HOME is where you mount your project
ENV HOME /home/user
RUN useradd --create-home --home-dir $HOME user \
 && chown -R user $HOME

WORKDIR $HOME
USER user

CMD [ "/usr/bin/daps" ]
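
To rebuild the image yourself from this Dockerfile, from the directory containing it:

docker build -t jsevans/daps:latest .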

Creating the Ultimate Container Playground: Salt in LXD

Introduction: Installing Saltstack

The great thing about being able to spin up several system containers running different Linux distros is that you can experiment with software like SaltStack without the hassle of creating multiple VMs, which can be especially daunting on a machine that is short on resources.

The following directions show how I installed Salt on several containers running at the same time while using less than 2 GB of RAM in total for testing. SaltStack handled all of their very different package managers and system configurations effortlessly. In each block below, the master’s IP (10.132.120.155 in my setup) is mapped to the hostname salt in /etc/hosts, since minions look for a master named salt by default.

Master and Minion on OpenSUSE

zypper in salt-master salt-minion
echo "10.132.120.155" >> /etc/hosts
systemctl enable salt-master
systemctl start salt-master
systemctl enable salt-minion
systemctl start salt-minion

Minion on Fedora

dnf install salt-minion
echo "10.132.120.155" >> /etc/hosts
systemctl enable salt-minion
systemctl start salt-minion

Minion on Ubuntu

apt update
apt install salt-minion
echo "10.132.120.155" >> /etc/hosts
systemctl enable salt-minion
systemctl restart salt-minion

Minion on Arch

pacman -S salt
echo "10.132.120.155" >> /etc/hosts
systemctl enable salt-minion
systemctl start salt-minion

Minion on CentOS

vim /etc/yum.repos.d/saltstack.repo (see https://docs.saltstack.com/en/latest/topics/installation/rhel.html)
echo "10.132.120.155" >> /etc/hosts
yum install salt-minion
systemctl enable salt-minion
systemctl start salt-minion

Set up Salt Master

opensuse:~ # salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
opensuse
ubuntu.lxd
arch
fedora
centos
Proceed? [n/Y] y
Key for minion opensuse accepted.
Key for minion ubuntu.lxd accepted.
Key for minion arch accepted.
Key for minion fedora accepted.
Key for minion centos accepted.
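
With all keys accepted, a quick connectivity check should report True for every minion. A hedged example; your minion names and output ordering will differ:

opensuse:~ # salt '*' test.ping
opensuse:
    True
ubuntu.lxd:
    True
arch:
    True
fedora:
    True
centos:
    True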

Testing Saltstack

opensuse:~ # salt '*' grains.get os         
ubuntu.lxd:
    Ubuntu
centos:
    CentOS
arch:
    Arch
fedora:
    Fedora
opensuse:
    SUSE
opensuse:~ # salt '*' grains.get saltversion
fedora:
    2017.7.3
ubuntu.lxd:
    2017.7.4
centos:
    2018.3.0
opensuse:
    2018.3.0
arch:
    2018.3.0

Install a package

opensuse:~ # salt '*' pkg.install mutt
arch:
    ----------
    mailcap:
        ----------
        new:
            2.1.48+14+g5811758-1
        old:
    mutt:
        ----------
        new:
            1.10.0-1
        old:
centos:
    ----------
    mailcap:
        ----------
        new:
            2.1.41-2.el7
        old:
    mutt:
        ----------
        new:
            5:1.5.21-27.el7
        old:
    tokyocabinet:
        ----------
        new:
            1.4.48-3.el7
        old:
    urlview:
        ----------
        new:
            0.9-15.20121210git6cfcad.el7
        old:
fedora:
    ----------
    mailcap:
        ----------
        new:
            2.1.48-2.fc27
        old:
    mutt:
        ----------
        new:
            5:1.9.2-1.fc27
        old:
    perl-Time-Local:
        ----------
        new:
            1:1.250-394.fc27
        old:
    tokyocabinet:
        ----------
        new:
            1.4.48-9.fc27
        old:
    urlview:
        ----------
        new:
            0.9-22.20131022git08767a.fc27
        old:
opensuse:
    ----------
    exim:
        ----------
        new:
            4.86.2-20.1
        old:
    libgc1:
        ----------
        new:
            7.2d-11.3
        old:
    libgmime-2_6-0:
        ----------
        new:
            2.6.20-6.3
        old:
    libgpgme11:
        ----------
        new:
            1.9.0-1.3
        old:
    libkyotocabinet16:
        ----------
        new:
            1.2.76-16.1
        old:
    liblua5_2:
        ----------
        new:
            5.2.4-6.1
        old:
    libmysqlclient18:
        ----------
        new:
            10.0.34-32.2
        old:
    libnotmuch4:
        ----------
        new:
            0.22.1-3.17
        old:
    libpq5:
        ----------
        new:
            9.6.8-15.1
        old:
    libspf2-2:
        ----------
        new:
            1.2.10-8.1
        old:
    libtalloc2:
        ----------
        new:
            2.1.10-2.3.1
        old:
    libxapian22:
        ----------
        new:
            1.2.21-5.3
        old:
    mutt:
        ----------
        new:
            1.8.2-1.7
        old:
    mutt-doc:
        ----------
        new:
            1.8.2-1.7
        old:
    mutt-lang:
        ----------
        new:
            1.8.2-1.7
        old:
    perl-Expect:
        ----------
        new:
            1.32-5.1
        old:
    perl-IO-Tty:
        ----------
        new:
            1.12-6.1
        old:
    python-curses:
        ----------
        new:
            2.7.13-27.3.1
        old:
    python-urwid:
        ----------
        new:
            1.3.0-6.2
        old:
    urlscan:
        ----------
        new:
            0.8.3-1.2
        old:
    urlview:
        ----------
        new:
            0.9-737.1
        old:
    w3m:
        ----------
        new:
            0.5.3.git20161120-163.3
        old:
ubuntu.lxd:
    ----------
    imap-client:
        ----------
        new:
            1
        old:
    libgpgme11:
        ----------
        new:
            1.10.0-1ubuntu1
        old:
    libtokyocabinet9:
        ----------
        new:
            1.4.48-11
        old:
    mutt:
        ----------
        new:
            1.9.4-3
        old:
opensuse:~ # 

Creating the Ultimate Container Playground: LXD on Kubic

Introduction

LXC (Linux Containers) are whole-system containers. They are meant to be able to do just about anything you can do with a VM, with a fraction of the system resources and a tiny startup time.

During Installation:

During installation, you can pretty much choose defaults for everything, with two exceptions: you will need to create two additional btrfs subvolumes, and if you gave your VM more than 30G of disk space, you will need to specify that manually, because the installer will only recognize 30G by default.

Create btrfs subvolumes for:
/snap
/media

After Installation

Add the snappy repo

sudo zypper addrepo --refresh http://download.opensuse.org/repositories/system:/snappy/openSUSE_Tumbleweed/ snappy

Create the last subvolume needed for snappy

sudo btrfs subvolume create /var/lib/snapd

Install snappy

sudo transactional-update pkg install snapd

reboot

Enable and start the snapd service

sudo systemctl enable snapd && sudo systemctl start snapd

Install the LXD snap

sudo snap install lxd

Setup

Initialize LXD

lxd init (choose defaults to make life easier the first time)
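
If you would rather skip the interactive questions entirely, recent LXD releases also support a non-interactive setup that applies the same defaults:

lxd init --auto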

Create your first LXC container. The first time you create a container from an image, LXD will download that image. After that, any new containers built from it will start very quickly.

lxc launch images:opensuse/42.3 opensuse

Enter into your first container

lxc exec opensuse bash
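
From the host, lxc list shows your containers and their state, which is a quick way to confirm the launch worked:

lxc list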

Container Confession

Hi, my name is Jason. I use containers inside other containers, and I’m unhappy that I can’t run yet more containers inside of those.

I’m not a big fan of Canonical’s snapd application containers, but they offer one application that I can’t get anywhere else for openSUSE short of building it all from source, and that is LXD. LXD is a hypervisor for Linux Containers, a.k.a. LXC. With LXD, I can create full system containers that have much of the same functionality as VMs without the virtualization overhead, and unlike Docker application containers, they provide a full environment to work in, not just enough to run one application.

My goal is to use LXD to create quick, small environments for playing with new tools in a safe sandbox. If I screw something up, I can create a new container in less than 10 seconds, or revert it to a previous snapshot even faster, instead of waiting as long as the same would take in VirtualBox or KVM.

One of the things that I would like to do is play with Docker containers inside my LXC container, which itself runs in a snapd container. Well, that just doesn’t work. Mostly, AppArmor is confused, and by default it is doing its job.

root@docker-test:~# docker run -it hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:97ce6fa4b6cdc0790cda65fe7290b74cfebd9fa0c9b8c38e979330d547d22ce1
Status: Downloaded newer image for hello-world:latest
docker: Error response from daemon: Could not check if docker-default AppArmor profile was loaded: open /sys/kernel/security/apparmor/profiles: permission denied.
root@docker-test:~#

So, that’s what I’m playing with today. If I get it to work, I’ll post it here.

Deploying WordPress with SUSE CaaS Platform

Introduction

While you may never have a reason to deploy WordPress on CaaSP, the following exercise will show you how persistent storage, Deployments, and NodePort networking work in Kubernetes. It is my belief that a decent theoretical understanding of Kubernetes is useful, but actual hands-on experience helps fill in the gaps that theory alone misses. It is assumed that you have already built your own cluster and have kubectl working. Example files are on GitHub and are free to download.

Storage

NFS

The persistent storage strategy in the example below uses NFS for the Persistent Volumes (PVs). An example /etc/exports is below.

/NFS/nfs1  *(rw,no_root_squash,sync,no_subtree_check)
/NFS/nfs2  *(rw,no_root_squash,sync,no_subtree_check)

It’s important that the CaaSP cluster be allowed to access the NFS shares and that no_root_squash be enabled, since the Persistent Volumes will write to the NFS shares as root.

Persistent Volume

NFS-volumes.yaml defines the Persistent Volumes where persistent data for the WordPress installation will reside; the PVs mount the NFS shares. The file is in YAML format, which means indentation matters: use spaces, never tabs. Edit this file to point to your two separate NFS shares, then deploy it; a sketch of its contents follows.
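
If you don’t have the example file handy, this is roughly what NFS-volumes.yaml contains. The PV names and sizes match what appears in the review section later; the NFS server address is a placeholder for your own:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.0.10   # your NFS server
    path: /NFS/nfs1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-18
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.0.10   # your NFS server
    path: /NFS/nfs2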

kubectl create -f NFS-volumes.yaml

Deployment

Secret

The MySQL database needs a root password in order to start. We create a secret in Kubernetes that the MySQL Deployment can pick up and apply to the new MySQL Pod.

kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD
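
The MySQL Deployment consumes this secret as an environment variable. In the upstream tutorial this guide is based on, the container spec includes roughly the following snippet (it sits under the container’s definition):

env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-pass
      key: password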

MySQL Deployment

The MySQL Deployment has three parts. They are the Service, the Persistent Volume Claim (PVC), and the Pod deployment.

kubectl create -f mysql-deployment.yaml

Service

The Service is the networking element of the deployment. It allows WordPress to connect to MySQL on port 3306.
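
The Service portion of the yaml file looks roughly like this; clusterIP: None makes it a headless service, so WordPress reaches the MySQL Pod directly via its DNS name (the selector labels follow the upstream tutorial):

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None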

Persistent Volume Claim

The PVC connects to the PV that we created previously. When the PVC is created successfully, a PV is bound to it, and persistent files can then be written to the NFS share backing that PV.
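
The PVC portion is short. The claim name and size here match the MySQL claim that shows up in the review section; the WordPress claim is analogous:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi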

Pod

A Pod will be created running a container based on the mysql:5.6 image pulled from hub.docker.com. In Kubernetes, containers always run inside Pods, the smallest deployable unit; here the Pod holds a single MySQL container.

WordPress Deployment

The WordPress Deployment is very similar to the MySQL one. However, we need to be able to reach WordPress, the application, from outside the cluster. In the service section of the wordpress deployment file, the service type is NodePort; this is what lets us access the installation externally, as the sketch below shows.
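
The Service portion looks roughly like the following. Leaving nodePort unset lets Kubernetes pick a port from its default range (30000-32767), which is where the 30941 seen in the review section comes from:

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend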

kubectl create -f wordpress-deployment.yaml

Review

The following section reviews all the parts of a WordPress deployment successfully installed using the steps above.

Review Persistent Volumes

$ kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON    AGE
local-pv-1    20Gi       RWO            Retain           Bound       default/wp-pv-claim                               2h
local-pv-18   20Gi       RWO            Retain           Bound       default/mysql-pv-claim                            2h

Review Secrets

$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
mysql-pass            Opaque                                1         2h

Review Persistent Volume Claims

$ kubectl get pvc
NAME             STATUS    VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound     local-pv-18   20Gi       RWO                           2h
wp-pv-claim      Bound     local-pv-1    20Gi       RWO                           2h

Review Services

$ kubectl get services
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
wordpress         NodePort    172.24.11.97     <none>        80:30941/TCP     2h
wordpress-mysql   ClusterIP   None             <none>        3306/TCP         2h

Review Pods

$ kubectl get pods
NAME                               READY     STATUS        RESTARTS   AGE
wordpress-db8f78568-n45nv          1/1       Running       0          2h
wordpress-mysql-7b4ffb6fb4-9r8tl   1/1       Running       0          2h

Accessing Your Deployment

In the section on the WordPress Deployment, I mentioned that our WordPress Service uses NodePort. In the service review above, we see the NodePort type for WordPress and that it maps ports 80:30941. This means WordPress itself listens on port 80, but the cluster exposes it on port 30941 on every node. Find the IP of any of your worker nodes and combine it with the NodePort to reach your new WordPress deployment, for example http://192.168.0.1:30941.

This guide is an adaptation of Kubernetes.io’s Deploying WordPress and MySQL with Persistent Volumes, which was produced under a Creative Commons Attribution 4.0 license.

On writing dockerfiles

I came across an email yesterday, before I went home, on our internal Docker mailing list. The author was looking for a Tomcat container built on SLES as the base image. I didn’t remember coming across anything like that, so I checked Docker Hub. There were several there, but most of them, including the official one from Apache, were built on Debian or Ubuntu. I found one that uses a binary package in a tarball released by Apache. I started with a plain container shell:

docker run -it sles12sp2 /bin/bash

Then I went through the Dockerfile line by line, making sure each step worked. It didn’t work right away: SUSE products tend to put things into their own paths rather than the ones Ubuntu/Debian use, so I fixed the Dockerfile accordingly. It only took 20 minutes or so to test, build, and get it running.
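
The gist of the approach, with an illustrative base image, package names, and Tomcat version rather than the published Dockerfile verbatim, looks like this:

FROM opensuse/tumbleweed
# A headless JRE from the openSUSE repos; package names differ from the Debian-based originals
RUN zypper --non-interactive in java-1_8_0-openjdk-headless tar gzip

# Unpack the Apache binary tarball (downloaded from tomcat.apache.org into the
# build context) instead of installing the distro package, which sidesteps the
# systemd dependency described below
ENV CATALINA_HOME /opt/tomcat
ADD apache-tomcat-8.5.x.tar.gz /opt/
RUN mv /opt/apache-tomcat-8.5.x $CATALINA_HOME

EXPOSE 8080
# Containers want a foreground process, not an init system
CMD ["/opt/tomcat/bin/catalina.sh", "run"]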

I published the final version here, using openSUSE instead of SLES. It’s also on Docker Hub as jsevans/tomcat-opensuse. This is how I use open source software: I take what others who are smarter than I am have made, build on it to make something better or at least different, and then give back to the community. Maybe my little hack will help someone else too.

One more thing.

Why didn’t I just add “zypper in tomcat” to my Dockerfile? I tried that at first. However, as on many Linux distros that moved to systemd, services like Tomcat will not start without a real instance of systemd already running. Some distros, like Ubuntu, include one in their images; others, like SLES/openSUSE and CentOS/Fedora, don’t, so you have to find alternative ways to install some applications.

That leads me to ask: will we see a day soon when applications are created to be “container ready”? Such applications could be dropped into a container as-is, without any dependencies on system processes like systemd.