Since my last post was so popular, here are a couple of replacement wallpapers that I made myself from schematics for a classic Heathkit HW101 transceiver.
First of all, the entire wallpaper is here:
On the right-hand side, there are these cool line-art CAD drawings with Geeko in the middle. The problem is that these aren’t just random lines, nor are they circuit boards or anything like that. These are architectural blueprints. There are five full bathrooms and a small piece of a sixth. There are also a couple of conference rooms.
Why do I care?
I think it’s hilarious. It’s a strange decision for a wallpaper and a bizarre easter egg to find.
I was incredibly excited to get this message earlier this week, and now I have to finish my presentation.
Hi, my name is Jason. I use containers in other containers, and I’m unhappy that I can’t run even more containers inside of those.
I’m not a big fan of Canonical’s snapd application containers, but there is one application there that I can’t get anywhere else for openSUSE, short of building it all from source, and that is LXD. LXD is a hypervisor for Linux Containers, a.k.a. LXC. With LXD, I can create full system containers that have much of the same functionality as VMs without the virtualization overhead. Unlike Docker application containers, LXD provides a full environment to work in, not just enough to run one application.
My goal is to use LXD to create quick, small environments for playing with new tools in a safe sandbox. If I screw something up, I can create a new container in less than 10 seconds, or revert it to a previous snapshot even quicker, rather than waiting the time it takes to do the same in VirtualBox or KVM.
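That create/snapshot/revert cycle is only a few commands. A rough sketch (the image alias and container name are placeholders; pick any image from the images: remote):

```
# Launch a throwaway container, snapshot it while clean, then restore
# after breaking something. Names below are placeholders.
lxc launch images:opensuse/tumbleweed sandbox
lxc snapshot sandbox clean
# ...experiment, break things...
lxc restore sandbox clean
```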
One of the things that I would like to do is play with Docker containers inside my LXC container, which is itself running in a snapd container. Well, that just doesn’t work. Mostly, AppArmor is confused and, by default, it is doing its job.
root@docker-test:~# docker run -it hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:97ce6fa4b6cdc0790cda65fe7290b74cfebd9fa0c9b8c38e979330d547d22ce1
Status: Downloaded newer image for hello-world:latest
docker: Error response from daemon: Could not check if docker-default AppArmor profile was loaded: open /sys/kernel/security/apparmor/profiles: permission denied.
root@docker-test:~#
So, that’s what I’m playing with today. If I get it to work, I’ll post it here.
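For anyone playing along: the first knob I plan to try is LXD’s security.nesting option, which relaxes confinement enough to allow containers inside containers. It is untested in this snap-packaged setup, so treat this as a sketch:

```
# Allow nested containers inside the LXD container named docker-test,
# then restart it. Whether this is sufficient under snapd's confinement
# is exactly what I'm still testing.
lxc config set docker-test security.nesting true
lxc restart docker-test
```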
10 hours of jetlag and rainy afternoon naps don’t mix. It’s 2 minutes to 2AM here in Provo, UT and I can’t sleep so I’m blogging.
I want to SSH into a machine that doesn’t have any external IP. In my situation at home, I get a 192.168… address from my ISP because of a shared connection. In other cases, I have VMs with NATted IPs that also have no direct way in.
I could pay for a VPN service and VPN into these machines, but instead I’m using a free way of doing it: I use Tor.
Here’s how it works: the Tor service on the remote machine reaches out to the Tor network and listens as a hidden service on port 22 (or whatever port I choose for SSH) for incoming requests. I use “torsocks ssh zzzzzzzzz.onion” from my laptop and I am in. This bypasses the external internet and gives me a pretty secure route from my laptop to my home machine, entirely via Tor.
Here’s how I set it up on openSUSE.
On your remote or inaccessible server:
$ sudo zypper in tor
This installs the tor service and the torsocks proxy app.
$ sudo vim /etc/tor/torrc
Uncomment the following lines:
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 22 127.0.0.1:22
$ sudo systemctl start tor
The service is now started and you should have a new .onion address:
$ cat /var/lib/tor/hidden_service/hostname
On your local machine/laptop/etc:
$ sudo zypper in tor
$ sudo systemctl start tor
$ torsocks ssh xxxxxxxxx.onion
This is a cool trick. Of course, you can use it on any server/VM/etc. even if they do have accessible IPs. In those cases, I suggest that you close the firewall on port 22 and make SSH accessible only via Tor. There is no need to have extra ports open to the internet.
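One way to do that (a sketch of one approach, not the only one): since the hidden service forwards to 127.0.0.1:22, you can also bind sshd to loopback only, so nothing is listening on the external interface at all:

```
# /etc/ssh/sshd_config -- bind SSH to loopback only; it then remains
# reachable solely through the Tor hidden service forwarding to 127.0.0.1:22
ListenAddress 127.0.0.1
```

Restart sshd afterwards with sudo systemctl restart sshd.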
Using Tor is a great way to add security to your network communication. In addition to the SSH encryption, the packets have additional encryption due to the nature of how Tor works.
Anyway, I hope this helps people out if you’re like me and have to make do with an ISP that makes using the web just a little harder.
One last thing: Tor is laggier than a straight connection. You’re not doing anything wrong; it’s just a side-effect of how this all works.
While you may never have a reason to deploy WordPress on CaaSP, the following exercise will show you how persistent storage, deployments, and NodePort networking work in Kubernetes. It is my belief that a decent understanding of the theory around Kubernetes is useful, but actual hands-on experience helps fill in the gaps that theory alone misses. It is assumed that you have already built your own cluster and have kubectl working. Example files are on GitHub and are free to download.
The persistent storage strategy that we will be using in the example below uses NFS as the Persistent Volume (PV). An example /etc/exports is below.
/NFS/nfs1 *(rw,no_root_squash,sync,no_subtree_check)
/NFS/nfs2 *(rw,no_root_squash,sync,no_subtree_check)
It’s important that the CaaSP cluster be allowed to access the NFS share and that no_root_squash be enabled as the Persistent Volume will be writing to the NFS share as root.
NFS-volumes.yaml defines where persistent data for the WordPress installation will reside; the PVs mount the NFS shares. The file is in YAML format, which means spacing is significant; never use tabs. Edit this file to point to your two separate NFS shares, and then you can deploy it.
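As a rough sketch (server IP and names are placeholders; the real file from GitHub may differ in details), each PersistentVolume in NFS-volumes.yaml looks something like this:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-1               # placeholder name
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.0.10       # placeholder: your NFS server
    path: /NFS/nfs1            # matches the /etc/exports entry above
```

Repeat with a second PersistentVolume pointing at /NFS/nfs2.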
kubectl create -f NFS-volumes.yaml
The MySQL database needs a root password in order to be started. We create a secret in Kubernetes that the MySQL deployment can pick up and apply to the new MySQL Pod.
kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD
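Inside mysql-deployment.yaml, the container then consumes this secret as an environment variable, roughly like this (following the upstream tutorial’s naming):

```yaml
env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-pass         # the secret created above
      key: password
```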
The MySQL Deployment has three parts. They are the Service, the Persistent Volume Claim (PVC), and the Pod deployment.
kubectl create -f mysql-deployment.yaml
The Service is the networking element of the deployment. It allows WordPress to connect to MySQL on port 3306.
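Sketched out, the Service section looks roughly like this headless ClusterIP service (names follow the upstream tutorial this guide adapts):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None              # headless: resolved by Pod DNS, no virtual IP
```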
Persistent Volume Claim
The PVC connects to the PV that we created previously. When the PVC is created successfully, the PV is bound to it, and persistent files can then be written to the NFS share backing the PV.
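The PVC section, roughly (the claim name matches what shows up later in kubectl get pvc):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```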
A Pod will be created from the mysql:5.6 image pulled from hub.docker.com. A quick terminology note: in the Kubernetes world, containers run inside Pods; a Pod is the smallest deployable unit and can hold one or more containers.
The WordPress Deployment is very similar to the MySQL deployment. However, we need to be able to access WordPress, the application, from the outside. In the service section of the wordpress deployment file, we see that the service type is NodePort. This is important to note, because this is what lets us access the installation from outside the cluster.
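Roughly, the WordPress Service section looks like this (again following the upstream tutorial’s naming):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  type: NodePort               # exposes the service on a high port on every node
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
```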
kubectl create -f wordpress-deployment.yaml
The following section will review all of the parts of a successfully installed WordPress deployment using the steps above.
Review Persistent Volumes
$ kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
local-pv-1    20Gi       RWO            Retain           Bound    default/wp-pv-claim                             2h
local-pv-18   20Gi       RWO            Retain           Bound    default/mysql-pv-claim                          2h
Review Secrets

$ kubectl get secrets
NAME         TYPE     DATA   AGE
mysql-pass   Opaque   1      2h
Review Persistent Volume Claims
$ kubectl get pvc
NAME             STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    local-pv-18   20Gi       RWO                           2h
wp-pv-claim      Bound    local-pv-1    20Gi       RWO                           2h
Review Services

$ kubectl get services
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
wordpress         NodePort    172.24.11.97   <none>        80:30941/TCP   2h
wordpress-mysql   ClusterIP   None           <none>        3306/TCP       2h
Review Pods

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-db8f78568-n45nv          1/1     Running   0          2h
wordpress-mysql-7b4ffb6fb4-9r8tl   1/1     Running   0          2h
Accessing Your Deployment
In the section on the WordPress deployment, I mentioned that our WordPress service uses NodePort. In the section where we reviewed the services, we see the NodePort type for WordPress and that it is listed as 80:30941. This means that WordPress itself listens on port 80, but the cluster exposes it on port 30941 on every node. Find the IP of any of your worker nodes and combine it with your NodePort port to access your new WordPress deployment. Example: http://192.168.0.1:30941.
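To find a worker node’s IP, one option is:

```
# Shows each node with its internal (and any external) IP in the wide columns
kubectl get nodes -o wide
```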
This guide is an adaptation of Kubernetes.io’s Deploying WordPress and MySQL with Persistent Volumes, which was produced under a Creative Commons Attribution 4.0 license.
I just cobbled this script together to make starting up VirtualBox VMs a little easier when remote.
for a in `vboxmanage list vms | sed 's/^"\(.*\)".*/\1/'`; do
  echo $a && vboxmanage showvminfo $a | grep -c "running (since"
done
This will give you a list of all of your VMs, each followed by a 1 if it is powered on or a 0 if it isn’t. If a VM name contains a space, this won’t work. In that case, the following will give you just the list:
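For names with spaces, a while-read loop avoids the word-splitting problem in the for loop. This sketch demonstrates the parsing on canned 'vboxmanage list vms'-style output so it can be tried without VirtualBox installed; swap the printf for the real command when using it:

```shell
# Space-safe variant: feed names into a while-read loop instead of relying
# on word-splitting. The printf stands in for 'vboxmanage list vms'.
printf '%s\n' '"My VM" {1111}' '"test-box" {2222}' |
  sed 's/^"\(.*\)".*/\1/' |
  while IFS= read -r name; do
    echo "$name"
    # vboxmanage showvminfo "$name" | grep -c "running (since"
  done
```

This prints the two names with their spaces intact; uncomment the showvminfo line to get the running state per VM.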
vboxmanage list vms
You can then run the following on a specific VM to get all of the information and grep out the “since” line:
vboxmanage showvminfo "My VM"