Deploying WordPress with SUSE CaaS Platform

Introduction

While you may never have a reason to deploy WordPress on CaaSP, the following exercise will show you how persistent storage, deployments, and NodePort networking work in Kubernetes. It is my belief that a decent understanding of Kubernetes theory is useful, but hands-on experience will fill in the gaps that theory alone misses. It is assumed that you have already built your own cluster and have kubectl working. Example files are on GitHub and are free to download.

Storage

NFS

The persistent storage strategy in the example below uses NFS to back the Persistent Volumes (PVs). An example /etc/exports is below.

/NFS/nfs1  *(rw,no_root_squash,sync,no_subtree_check)
/NFS/nfs2   *(rw,no_root_squash,sync,no_subtree_check)

It’s important that the CaaSP cluster be allowed to access the NFS shares and that no_root_squash be enabled, as the Persistent Volumes will be writing to the NFS shares as root.
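After editing /etc/exports on the NFS server, a quick way to publish and verify the exports is sketched below (run the first command on the NFS server; "nfs-server" in the second command is a placeholder for your server's hostname or IP):

# re-export everything defined in /etc/exports
sudo exportfs -ra

# confirm what the server is exporting (run from a cluster node)
showmount -e nfs-server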

Persistent Volume

NFS-volumes.yaml defines the Persistent Volumes (PVs) where persistent data for the WordPress installation will reside. Each PV mounts one of the NFS shares. The file is in YAML format, which means indentation matters: use spaces, never tabs. Edit this file to point to your two NFS shares and then you can deploy it.
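If you are writing the file from scratch, a minimal sketch might look like the following. The server address and PV names are placeholders, and the paths match the example exports above; adjust sizes and paths to your environment:

cat > NFS-volumes.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.0.100
    path: /NFS/nfs1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.0.100
    path: /NFS/nfs2
EOF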

kubectl create -f NFS-volumes.yaml

Deployment

Secret

The MySQL database needs a root password in order to be started. We create a secret in Kubernetes that the MySQL deployment can pick up and apply to the new MySQL Pod.

kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD

MySQL Deployment

The MySQL Deployment has three parts. They are the Service, the Persistent Volume Claim (PVC), and the Pod deployment.
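A condensed sketch of what mysql-deployment.yaml might contain is below. It follows the upstream Kubernetes tutorial that this guide adapts, so treat the labels, API version, and image tag as a starting point rather than the exact file:

cat > mysql-deployment.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2   # newer clusters use apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
EOF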

kubectl create -f mysql-deployment.yaml

Service

The Service is the networking element of the deployment. It allows WordPress to connect to MySQL on port 3306.

Persistent Volume Claim

The PVC claims one of the PVs that we created previously. Once the PVC is created successfully, a PV is bound to it and persistent data can be written to the NFS share behind that PV.

Pod

A Pod will be created running a container built from the mysql:5.6 image pulled from hub.docker.com. In the Kubernetes world, one or more containers grouped together are called a Pod.

WordPress Deployment

The WordPress Deployment is very similar to the MySQL Deployment. However, we need to be able to reach the WordPress application from outside the cluster. In the Service section of the WordPress deployment file, the service type is NodePort; this is what makes the installation reachable from outside.
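As with MySQL, here is a condensed sketch of what wordpress-deployment.yaml might contain, again following the upstream tutorial this guide adapts. The image tag, labels, and API version are assumptions, not the exact file; note the NodePort service type at the top:

cat > wordpress-deployment.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2   # newer clusters use apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
EOF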

kubectl create -f wordpress-deployment.yaml

Review

The following section will review all of the parts of a successfully installed WordPress deployment using the steps above.

Review Persistent Volumes

$ kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON    AGE
local-pv-1    20Gi       RWO            Retain           Bound       default/wp-pv-claim                               2h
local-pv-18   20Gi       RWO            Retain           Bound       default/mysql-pv-claim                            2h

Review Secrets

$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
mysql-pass            Opaque                                1         2h

Review Persistent Volume Claims

$ kubectl get pvc
NAME             STATUS    VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound     local-pv-18   20Gi       RWO                           2h
wp-pv-claim      Bound     local-pv-1    20Gi       RWO                           2h

Review Services

$ kubectl get services
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
wordpress         NodePort    172.24.11.97     <none>        80:30941/TCP     2h
wordpress-mysql   ClusterIP   None             <none>        3306/TCP         2h

Review Pods

$ kubectl get pods
NAME                               READY     STATUS        RESTARTS   AGE
wordpress-db8f78568-n45nv          1/1       Running       0          2h
wordpress-mysql-7b4ffb6fb4-9r8tl   1/1       Running       0          2h

Accessing Your Deployment

In the section on the WordPress deployment, I mentioned that our WordPress service uses NodePort. In the service review above, we see the NodePort type for the wordpress service and the port mapping 80:30941. This means that WordPress expects traffic on port 80, but the cluster is actually listening on port 30941 of every node. Find the IP of any of your worker nodes and combine it with the NodePort to access your new WordPress deployment. Example: http://192.168.0.1:30941.
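If you prefer to look these values up from the command line, something like the following works (a sketch; the IP and port in the curl example are placeholders):

# find the port Kubernetes assigned to the wordpress Service
kubectl get service wordpress -o jsonpath='{.spec.ports[0].nodePort}'

# list the nodes and their IP addresses
kubectl get nodes -o wide

# then test from any machine that can reach a worker node
curl -I http://192.168.0.1:30941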

This guide is an adaptation of Kubernetes.io’s Deploying WordPress and MySQL with Persistent Volumes, which was produced under a Creative Commons Attribution 4.0 license.

Stupid Script

I just cobbled this script together to make starting up VirtualBox VMs a little easier when working remotely.

for a in `vboxmanage list vms | sed 's/^"\(.*\)".*/\1/'`; \
do echo $a && vboxmanage showvminfo $a | grep -c "running (since" ;done

This will list all of your VMs, each followed by a 1 if it is powered on or a 0 if it isn’t. If a VM name contains a space, the loop above won’t work (a space-safe variant is sketched further down). In that case, the following will at least give you the list:

vboxmanage list vms

You can then run the following for each VM to get all of the information and grep out the “since” yourself:

vboxmanage showvminfo "My VM"
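Alternatively, here is a sketch of a space-safe variant of the original loop that handles VM names containing spaces:

vboxmanage list vms | sed 's/^"\(.*\)".*/\1/' | while IFS= read -r vm; do
  # --machinereadable prints VMState="running" for powered-on VMs
  running=$(vboxmanage showvminfo "$vm" --machinereadable | grep -c '^VMState="running"')
  echo "$vm $running"
done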

 

What are these very old posts?

Recently I imported posts from my old Ham Radio/Tech blog: ki4gmx.wordpress.com. They are ancient, but they are important to me to keep. I am slowly going through some of the posts and updating links. If you see two links in a row with one of them struck through, that one is the old link. The new one, which probably points to the original on archive.org, is next to it.

What is a CVE and How Can It Benefit Me?

Like a lot of the things that I write here, this is a question that came up in a ticket that I worked on recently. A customer had received a message like this:

Samba is a freely available file- and printer-sharing application maintained and developed by the Samba Development Team. Samba allows users to share files and printers between operating systems on UNIX and Windows platforms. Samba is prone to a security-bypass vulnerability because it fails to properly enforce SMB signing when certain configuration options is enabled. Successfully exploiting this issue may allow attackers to bypass security restrictions and perform unauthorized actions by conducting a man-in-the-middle attack. This may lead to other attacks. The following versions are vulnerable: Samba 3.0.25 through 4.4.15 Samba 4.5.x versions prior to 4.5.14 Samba 4.6.x versions prior to 4.6.8.

This doesn’t actually tell us a lot. I could ping one of the Samba developers and ask them if they are aware of this vulnerability, if we’ve ever patched it, and if not, what its status is. That could mean a lot of time waiting for a reply, and it takes time out of the developer’s day to answer a fairly straightforward customer service question. However, there is an easier way.

When a software vulnerability is discovered, it is assigned a CVE (Common Vulnerabilities and Exposures) number for that specific issue. In this case, I found the CVE number that best matched the description I was given and was able to show the customer that we had patched it, and which patch it was in.

One famous example was the “Heartbleed” vulnerability from a few years ago, which is CVE-2014-0160. SUSE maintains a list of all CVEs that we review and patch here: https://www.suse.com/security/cve/. As you can see at https://www.suse.com/security/cve/CVE-2014-0160/, Heartbleed was patched in all versions of SLE 11 and 12 as well as OpenSUSE 12, 13, Leap, and Tumbleweed.
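On an SLE or OpenSUSE system you can also check a CVE directly from the command line; a minimal sketch, using Heartbleed as the example:

# list any patches on this system that address a given CVE
zypper list-patches --cve=CVE-2014-0160

# or check whether the fix is mentioned in the installed package's changelog
rpm -q --changelog openssl | grep CVE-2014-0160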

For those concerned about their system’s security, CVEs are a great way to make sure that newly found vulnerabilities have been patched in their OS of choice.

More information:

About Patching: What is a Patch in SLE and OpenSUSE?

A while back I wrote a post on why you should patch your servers. I think it surprised some people. I got at least one comment on Twitter saying, “I’m surprised you get so many tickets on this topic since security is so important in enterprise server environments.” And yet, we do. At any given time, we have multiple tickets asking for RCA (Root Cause Analysis) for a server crash or hang when the server has not been patched in months, years, or ever. Sometimes the server is never registered to receive patches, so it never gets patched beyond the base version that we originally shipped.

This post isn’t to complain. It’s to help alleviate the problem. The first step is to discuss what patches are and what they do. Using a SUSE Customer Center (SCC) account, you can go to https://scc.suse.com/patches to view detailed information on all of our patches. On a registered system, I can get a list of applicable patches with this command:

jsevans@linux-rtf9:~> sudo zypper patches
Refreshing service 'Containers_Module_12_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_12_SP2_x86_64'.
Refreshing service 'SUSE_Package_Hub_12_SP2_x86_64'.
Loading repository data...
Reading installed packages...
Repository | Name | Category | Severity | Interactive | Status | Summary
--------------------------------+-----------------------------------------+-------------+-----------+-------------+------------+----------------------------------------------------------------------------------
SLES12-SP2-Updates | SUSE-SLE-SERVER-12-SP2-2017-990 | security | important | --- | needed | Security update for glibc
SLES12-SP2-Updates | SUSE-SLE-SERVER-12-SP2-2017-994 | security | critical | reboot | needed | Security update for the Linux Kernel
SLES12-SP2-Updates | SUSE-SLE-SERVER-12-SP2-2017-998 | security | important | --- | not needed | Security update for openvp

As you can see, two of these patches still need to be applied to this server. Since patch “SUSE-SLE-SERVER-12-SP2-2017-994” is listed as a critical update, we’ll review what makes it so important:

jsevans@linux-rtf9:~> zypper patch-info SUSE-SLE-SERVER-12-SP2-2017-994
Loading repository data...
Reading installed packages...



Information for patch SUSE-SLE-SERVER-12-SP2-2017-994:
------------------------------------------------------
Repository : SLES12-SP2-Updates
Name : SUSE-SLE-SERVER-12-SP2-2017-994
Version : 1
Arch : noarch
Vendor : maint-coord@suse.de
Status : applied
Category : security
Severity : critical
Created On : Mon 19 Jun 2017 05:28:39 PM CEST
Interactive : reboot
Summary : Security update for the Linux Kernel
Description :

The SUSE Linux Enterprise 12 SP2 kernel was updated to receive various security and bugfixes.



The following security bugs were fixed:

- CVE-2017-1000364: The default stack guard page was too small and could be "jumped over" by userland programs using
 more than one page of stack in functions and so lead to memory corruption. This update extends the stack guard page
 to 1 MB (for 4k pages) and 16 MB (for 64k pages) to reduce this attack vector. This is not a kernel bugfix, but a
 hardening measure against this kind of userland attack.(bsc#1039348)

The following non-security bugs were fixed:

- There was a load failure in the sha-mb encryption implementation (bsc#1037384).
Provides : patch:SUSE-SLE-SERVER-12-SP2-2017-994 = 1
Conflicts : [10]
 kernel-default.nosrc < 4.4.59-92.20.2
 kernel-default.x86_64 < 4.4.59-92.20.2
 kernel-default-base.x86_64 < 4.4.59-92.20.2
 kernel-default-devel.x86_64 < 4.4.59-92.20.2
 kernel-devel.noarch < 4.4.59-92.20.2
 kernel-macros.noarch < 4.4.59-92.20.2
 kernel-source.noarch < 4.4.59-92.20.2
 kernel-source.src < 4.4.59-92.20.2
 kernel-syms.src < 4.4.59-92.20.2
 kernel-syms.x86_64 < 4.4.59-92.20.2

In other words, this patch was written to avoid a possible security issue from a rogue application.

For a quick and easy way to review which patches are needed on your system, simply run:

zypper patches | grep needed | grep -v "not "

To view the complete details of all of your needed patches, you can run:

for i in `zypper lp | grep -i needed | awk '{ print $3 }'`; do zypper patch-info $i; done

If you haven’t patched in a while, this can be a lot of information. However, if you need to justify why you should patch, this is a great way to summarize it. Another option is to visit https://www.suse.com/support/update/, which is a web-based repository of update information for specific packages with much of the same content.
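When you are ready to act on that information, a minimal sketch for applying just the security patches (preview first, then apply) might look like this:

# preview which security patches would be applied, without changing anything
sudo zypper patch --category security --dry-run

# apply the security patches; patches marked "Interactive: reboot" require a reboot afterwards
sudo zypper patch --category security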

In my next post, I’ll discuss ways to intelligently apply patches to minimize downtime. In the meantime, here are some more resources.

On writing dockerfiles

Yesterday, before I went home, I came across an email on our internal Docker mailing list. The author was looking for a Tomcat container built using SLES as a base image. I didn’t remember coming across anything like that, so I checked Docker Hub. There were several there, but most of them, including the official one from Apache, were built on Debian or Ubuntu. I found one that uses the binary tarball released by Apache. I created a plain container shell:

docker run -it sles12sp2 /bin/bash

Then I went through the Dockerfile line by line, making sure each step worked. It didn’t work right away. SUSE products tend to put things in their own paths rather than the ones Ubuntu/Debian uses, so I fixed the Dockerfile accordingly. It only took 20 minutes or so to test, build, and get it running.

I published the final version here using OpenSUSE instead of SLES. It’s also on Docker Hub as jsevans/tomcat-opensuse. This is how I use open source software. I take what others who are smarter than I am have made, build on it to make something better or at least different, and then give back to the community. Maybe my little hack will help someone else also.
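If you want to try it, a minimal sketch for pulling and running the image, assuming it exposes Tomcat on its default port 8080:

# pull the image from Docker Hub and run it, publishing Tomcat's default port
docker pull jsevans/tomcat-opensuse
docker run -d --name tomcat -p 8080:8080 jsevans/tomcat-opensuse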

One more thing.

Why didn’t I just add “zypper in tomcat” to my Dockerfile? I tried that at first. However, on many Linux distros that moved to systemd, services like Tomcat will not start without a real instance of systemd already running. Some distros, like Ubuntu, build that into their images. Others, like SLES/OpenSUSE and CentOS/Fedora, don’t, so you have to find alternative ways to install some applications.

That leads me to ask: will we soon see a day when applications are created to be “container ready”? Such applications could be included in a container as-is, without needing to worry about dependencies on system processes like systemd.

How can I get Malware?

I received an email this morning. I am actually expecting a package that has been held up for some time, so at first it seemed potentially legit. But then I realized that I don’t use my work email for personal things like that, that it came from someone who didn’t have a UPS email address, and that it carried a .zip attachment. I don’t know for sure that the attachment contains a virus or any other kind of malware, but I won’t take the chance on company hardware.

In the past few months, ransomware such as WannaCry has been a big topic in IT news. Ransomware is a program that encrypts (locks) your documents and files so that you can’t use them until you pay someone a ransom. Like a virus, it often moves from machine to machine on a network. How do these things start? They usually start with emails like this one. Someone opens an attachment that they think is legitimate, then finds that they can’t use their data anymore, and if they don’t have backups, they might be out a lot of money. If it happens on a corporate network, the company could be out millions.

How can you be safe?

  • Don’t open attachments that you aren’t expecting.
  • Be smart about email. If you don’t know who the email is from, never open an attachment.
  • Report suspicious email like this to your Information Security team or Email administrator so they can be blocked in the future.

Malware attacks happen, but they don’t have to happen to you. A bit of savvy about how these things work can save a lot of headaches.

Why Should I Patch My Server?

Wait, what?

Why should I patch my server?

Because otherwise you won’t get bug fixes.

But my server works just fine, if it’s not broken don’t fix it, right?

What about security fixes?

My server isn’t accessible from the internet. I’m not worried about it getting hacked.

But it’s still possible, right? Do you really want to take that chance?

Yeah, patching requires time and money. It could even require downtime. It’s more important for us to keep the server going than to worry about that stuff. We’re safe.

Over months, memory was slowly drained by a rogue application that never released it. A month before, some users were experiencing slowness, but nothing too bad. The week before, a ticket was opened with the helpdesk. It wasn’t really a major issue; it was probably their laptops, not the server. And then it happens: the ticket goes from P3 to P1. The server is down and nobody knows why. Nobody can log in. The ticket is escalated to L2 and then to L3 support. When someone opens a remote terminal to the server, they see OOM-killer alerts and then nothing. There is no response on the console or over ssh. The server is rebooted. The applications are running again and nobody really knows what happened. They open a ticket with the OS vendor so the vendor can tell them why their server crashed.

The logs show nothing. journalctl shows business as usual and then a reboot. There are no OOM-killer alerts, no memory errors, and no kernel core dump. There’s nothing in the application logs. The OS vendor reports that the root cause is inconclusive, except that the server was never patched. In the years since the server was set up, hundreds of patches were created and sent out. The issue could have come from any number of places. To paraphrase Arthur Conan Doyle, in order to get to the truth, you must eliminate the impossible. The easiest way to do that is to make sure that systems are patched against known issues, so that when a new issue comes along it is easier to diagnose and repair.

If the company had applied all of these patches, could this still have happened? Yes, of course. Bug fixes can only fix issues that have already been found and reported. Another part of good system hygiene is simply keeping an eye on the server. Of course, you don’t log into a few hundred servers and watch them around the clock; you set up a monitoring system like Nagios or one of a dozen others. If CPU usage, memory or swap usage, or network lag goes up, you have a chance to do something about it before the system crashes. That is when you want to start your investigation, not after you’ve experienced downtime. A monitoring system isn’t terribly costly, but it’s not completely free either. It takes time to set up and it takes resources to host.

To answer the original question: you patch your server to save money in the long run, protecting against system and application crashes and against security breaches. If you don’t, you’re doing yourself a disservice, and the lost revenue could ultimately cost significantly more than the time spent patching.

SUSE Documentation

Did you know that SUSE provides free documentation for all of our products to the general public, in multiple languages and in multiple formats? Not only that, it is released under the GNU Free Documentation License or under a Creative Commons license. As a big fan of the work that the Creative Commons folks do, this makes me proud to work for a company that actually cares about giving back to the community. Not only is our documentation freely available, so are our knowledgebase articles. Unlike other companies, we don’t give just a few lines of an article and then prompt the user to buy our products in order to fix a problem. Also, now that SLE* and OpenSUSE share a common codebase, a problem solved in one product can easily be fixed in the other.

There are currently 20 manuals and white papers for SLES 12 SP2 and six for OpenSUSE Leap 42.2. For the Linux newbie, spending $50+ at the local bookstore on a paper book about one product that will be out of date in two years is a huge investment. It makes sense for anyone wanting to get into Linux and open source to start with a distribution that makes it easy.

Our Free Documentation for SLE:

Installation and Administration

Deployment Guide
Administration Guide
Virtualization Guide
Storage Administration Guide
System Analysis and Tuning Guide
Security Guide

Additional Information

AutoYaST
Hardening Guide
Xen to KVM Migration Guide
Docker Quick Start
systemd in SUSE® Linux Enterprise 12 – White Paper
Virtualization Technologies
Networking with Wicked in SUSE® Linux Enterprise 12 – White Paper
Virtualization Best Practice
Subscription Management Tool (SMT)
Introduction to SUSE Linux Enterprise Server for the Raspberry Pi
Data Replication across Geo Clusters via DRBD
SUSE Linux Enterprise Server-Support for Intel Server Platforms

How Low Can You Go?

Ever since my days on dial-up internet and telnet MUDs, I’ve been interested in low-bandwidth utilities. After all, not every user gets to live in an area that provides unlimited high-speed broadband. Some, like those who depend on cellular or satellite internet, have to choose between work and watching a few YouTube videos.

This post will introduce some tools that will help you make the most out of a limited or high-latency Internet connection.

Mosh

https://mosh.org/

zypper in mosh

(SLES, Leap, Tumbleweed)

What does it do?

Mosh is a replacement for SSH. It uses the normal SSH protocol to initiate the connection to the remote server, but after connecting, Mosh’s true magic begins. Rather than continuing over TCP port 22, Mosh switches to UDP, while remaining encrypted. Because TCP insists on reliable, in-order delivery, it tends to be quite slow and unresponsive when bandwidth is low and latency is high. UDP, on the other hand, isn’t worried about getting every byte through. If a packet or two gets dropped, the session stays up and keeps working. Recently, I took a train from Prague to Nuremberg using the train’s complimentary wifi. While traveling through rural areas, the connection often slowed to a crawl, and yet my session to my home computer never dropped.

How do I use it?

Install mosh on the computer that you are using and on the computer that you want to connect to, then connect like you would with ssh.

mosh jsevans@myserver

Really, it’s that easy. There’s no extra daemon to start or configure. Mosh uses SSH to connect to the other server, starts the mosh server on the receiving end, and the two then communicate directly.

Alpine

https://www.washington.edu/alpine/

zypper in alpine

(Leap, Tumbleweed)

What does it do?

Alpine is a text-based email client. Why would anyone use an antique like that? Easy: if email is important and your internet is limited, then Alpine might be your best option. That said, there is a learning curve, but it’s not as steep as it seems at first.

How do I use it?

Below is a sample of what you will see at the main menu.

 ALPINE 2.20   MAIN MENU      Folder: INBOX      No Messages


    ?     HELP               -  Get help using Alpine

    C     COMPOSE MESSAGE    -  Compose and send a message

    I     MESSAGE INDEX      -  View messages in current folder

    L     FOLDER LIST        -  Select a folder to view

    A     ADDRESS BOOK       -  Update address book

    S     SETUP              -  Configure Alpine Options

    Q     QUIT               -  Leave the Alpine program


               For Copyright information press "?"
             [Folder "INBOX" opened with 0 messages]
? Help                              P PrevCmd              R RelNotes
O OTHER CMDS       > [ListFldrs]    N NextCmd              K KBLockH

Help, Compose, Setup, and Quit are self-explanatory. Message Index opens your Inbox and Folder List lists all of your email folders. The hardest part is setting it up for the first time. The University of Virginia has a fantastic tutorial for first-time installation using Gmail.

Screen

https://www.gnu.org/software/screen/

What does it do?

Many system administrators know about Screen, but it’s one of those tools that seems a little too mysterious for many new users. The main use of Screen is to create persistent shells. That means that if you start a new Screen session using the screen command, you can close that window and return to exactly where you left off.

How do I use it?

Try it for yourself. Run screen, then yast to get to console-mode YaST, then close the window. If you had been using yast on a remote server without Screen and the connection dropped, you might have lost all of your work. Open a new terminal window, type screen -x, and you’ll see yast exactly how you left it. Screen has many more uses than that, and the following tutorial will help you get the most out of it.
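For reference, a few of the most common Screen operations look like this (a sketch; the session name is arbitrary):

screen -S work        # start a new session named "work"
                      # ... do your work, then press Ctrl-a d to detach ...
screen -ls            # list running sessions
screen -r work        # reattach to a detached session
screen -x work        # attach even if the session is still marked attached elsewhere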

Using Screen in conjunction with Mosh can make even dial-up speeds bearable at the command line.

And now for something completely different.

There’s enough material out there to write a book or two on different text-based applications that can make life easier when bandwidth is limited, but I wanted to show something that could actually make system administration much easier for the admin who needs to work in the field. Did you know you can install a VM with KVM or Xen using only the console?

How do I do it?

There are basically three steps. First, install the virt-install application on your client machine.

sudo zypper in virt-install

Second, log into the remote machine and create a disk image to be used as storage for the new VM:

qemu-img create -f qcow2 ./workspace/VM/opensuse42.2.qcow2 60G

Finally, the following command uses the ssh protocol to create the new VM, and you will see everything in your console. Sadly, it doesn’t work with Mosh, but it does work quite well with Screen. Didn’t know that SUSE products have a text-only installation mode? It’s there and it works beautifully.

virt-install \
 --connect qemu+ssh://jsevans@10.0.1.35/system \
 --name leap42 \
 --ram 2048 --disk path=/workspace/VM/opensuse42.2.qcow2 \
 --vcpus 1 \
 --os-type linux --os-variant generic \
 --network network=default \
 --graphics none \
 --console pty,target_type=serial \
 --location 'http://download.opensuse.org/distribution/leap/42.2/repo/oss/' \
 --extra-args 'console=ttyS0,115200n8 serial'

Rather than giving one long command, it’s sometimes easier to break them down like this. You can still copy and paste the text here to test it for yourself. Just remember to replace my information with your own. On the second line of the command, I use the connect option to connect to the remote server with /system at the end of the command. Most of these options probably seem normal if you’re used to creating VM’s in virt-manager in KVM or even in VMWare for that matter. However, the location and extra-args options need some explanation. This installation technique only works with network installs so basically you need to install from a remote HTTP or FTP server and supply the URL or you’ll need to mount an ISO and serve the files via HTTP or FTP. You can see the url in the location option. The extra-args option tells KVM to emulate a serial connection similar to plugging in a serial cable to the back of a server and connecting directly. There is no virtual video card so there will be not be a remote VNC to view a GUI. There is only the serial connection. One difference in the final installation is that the kernel line will include directions for sending all text to the serial output. This makes booting your new VM easier to troubleshoot when booting. This technique isn’t limited to just SUSE products. Raymii.org has a fantastic guide on how to do remote installations with several other distributions.