Kubernetes is a popular open-source container orchestration platform that is widely used in production environments. However, it can also be used at home to manage your personal workloads. By using Kubernetes at home, you can take advantage of the same benefits that businesses do, such as automated scaling, load balancing, and service discovery. In this post, we’ll discuss the advantages of using Kubernetes at home, the differences from production usage, and how to install and run Kubernetes on your own virtual machines.
Advantages of Using Kubernetes at Home
- Scalability: Kubernetes can automatically scale up or down your workloads based on resource usage. This means you don’t need to manually adjust your resources as your workloads change.
- Fault tolerance: Kubernetes provides high availability by automatically restarting failed containers or rescheduling them on other nodes.
- Service discovery and load balancing: Kubernetes can automatically discover and route traffic to your services, providing load balancing and high availability.
- Portability: Kubernetes provides a consistent platform for deploying your workloads across different environments, whether it’s your laptop, local data center, or cloud provider.
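The autoscaling point above can be made concrete with a minimal HorizontalPodAutoscaler sketch. This assumes you already have a Deployment named my-app and the metrics-server add-on installed (both are assumptions, not something set up earlier in this post):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app    # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 3    # keep the ceiling low on home hardware
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale up when average CPU passes 80%
```

With this in place, Kubernetes adds or removes replicas automatically as CPU usage changes, which is exactly the behavior described above.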
Differences from Production Usage
While using Kubernetes at home is similar to using it in production, there are some differences to keep in mind:
- Resource constraints: At home, you may have limited resources such as CPU, memory, and storage. You need to keep this in mind when deploying your workloads.
- Security: At home, you may have less strict security requirements compared to production environments. However, you still need to ensure that your Kubernetes cluster is secure and protected.
- Backup and recovery: At home, you may not have the same backup and recovery options as you do in production. You need to ensure that you have a backup plan in place.
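As a minimal sketch of such a backup plan for a kubeadm-based cluster, you can periodically snapshot etcd on the control-plane node. The paths below are the kubeadm defaults; adjust them if your setup differs:

```shell
# Save a dated etcd snapshot (run on the control-plane node)
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```

Copy the snapshot off the node (for example to a NAS) so a failed VM does not take your only backup with it.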
Running a Kubernetes Cluster on Three Virtual Machines
To run Kubernetes at home, you need at least three virtual machines (VMs) with Ubuntu 22.04 installed. You can use any virtualization technology such as VirtualBox, Proxmox (my favorite) or VMware to create your VMs. Here are the steps to create a Kubernetes cluster:
- Install a container runtime on all three VMs. Recent Kubernetes releases no longer support Docker Engine directly, so containerd is the simplest choice: sudo apt-get update && sudo apt-get install -y containerd
- Install kubeadm, kubectl, and kubelet on all three VMs: sudo apt-get update && sudo apt-get install -y apt-transport-https && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list && sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
- Initialize the Kubernetes cluster on the first VM: sudo kubeadm init --pod-network-cidr=10.244.0.0/16
- Join the other two VMs to the cluster using the token generated by the previous command: sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
- Install a network plugin such as Flannel to enable networking between the pods (the project now lives under the flannel-io organization): kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
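If you lose the join command that kubeadm init prints, you do not need to re-initialize anything; kubeadm can regenerate it. This is standard kubeadm behavior:

```shell
# Run on the control-plane node; prints a fresh 'kubeadm join ...' command
sudo kubeadm token create --print-join-command

# After the workers have joined, verify all three nodes are registered
kubectl get nodes
```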
Deploying with Rancher Scripts
Rancher is a popular open-source Kubernetes management platform that provides a user-friendly interface to manage your clusters. You can use Rancher to deploy your applications with Helm charts, which are packages that define your Kubernetes deployments. Here are the steps to deploy your applications with Rancher:
- Install Rancher on the Kubernetes cluster by following the installation instructions on their website.
- Once Rancher is installed, log in to the Rancher UI using your credentials.
- Navigate to the Cluster Explorer tab and select your Kubernetes cluster.
- Click on the Catalogs tab and select Helm Catalog.
- Browse the catalog for the Helm chart you want to deploy, such as Jellyfin for media streaming or Home Assistant for home automation.
- Click on the Deploy button for the Helm chart you want to deploy and enter the required values for your deployment.
- Once you have entered the required values, click on the Launch button to deploy your application.
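For reference, the Rancher installation in step 1 usually boils down to a Helm install along these lines. The hostname rancher.example.com is a placeholder, and Rancher requires cert-manager to be installed in the cluster first, so treat this as a sketch and check Rancher's documentation for the prerequisites of your version:

```shell
# Add Rancher's chart repository and install it into its own namespace
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com   # placeholder hostname -- use your own
```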
Deploying MetalLB
Since you are at home and not using a cloud provider, you need a form of load balancing to expose your Kubernetes cluster's services to the world.
MetalLB is a load balancer implementation that can be used in a Kubernetes cluster to expose services externally. To add MetalLB to your cluster, you can follow these steps:
- Install MetalLB:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml
- Create a Kubernetes manifest file that tells MetalLB which addresses it may hand out. Note that MetalLB v0.13 and later are configured with custom resources rather than the older ConfigMap format. You can use the following example manifest file as a starting point:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - <ip-range-for-your-metal-lb-node>
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
Replace <ip-range-for-your-metal-lb-node> with a range of free IP addresses on your local network. This range must not overlap with the addresses used by the VMs in your cluster or with your router's DHCP pool.
- Apply the MetalLB manifest file using the kubectl apply command:
kubectl apply -f <path-to-manifest-file>
- Verify that MetalLB is running by checking the status of the MetalLB pods:
kubectl get pods -n metallb-system
You should see output similar to the following:
NAME READY STATUS RESTARTS AGE
controller-6f4b894d5d-n22sp 1/1 Running 0 17h
speaker-c2drw 1/1 Running 0 17h
- Now that MetalLB is installed and running, you can use it to expose services externally. You can do this by creating a Kubernetes service of type LoadBalancer, just as you would with any other load balancer implementation. For example, to expose a service named my-service on port 80, you can create the following service manifest file:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 8080
  type: LoadBalancer
Replace my-app and 8080 with the appropriate values for your service.
- Apply the service manifest file using the kubectl apply command:
kubectl apply -f <path-to-manifest-file>
- Verify that the service is exposed by checking the external IP address assigned by MetalLB:
kubectl get services my-service
You should see output similar to the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service LoadBalancer 10.102.159.123 192.168.1.100 80:31885/TCP 17h
In this example, the external IP address assigned by MetalLB is 192.168.1.100.
That’s it! MetalLB is now running as pods inside your Kubernetes cluster, and you can use it to expose services externally.
Examples of Helm Charts for Home Usage
Here are some examples of Helm charts you can use for your home usage:
- Jellyfin for media streaming: Jellyfin is an open-source media streaming server that can be deployed using a Helm chart. With Jellyfin, you can stream your music, movies, and TV shows to any device in your home.
- VPN container: You can deploy a VPN container using a Helm chart, such as OpenVPN or WireGuard. This allows you to access your home network securely from anywhere in the world.
- Home automation using Home Assistant chart: Home Assistant is an open-source home automation platform that can be deployed using a Helm chart. With Home Assistant, you can automate your home devices and create custom automations.
- Minecraft server: You can deploy a Minecraft server using a Helm chart, such as the one provided by Helm Charts.
- Test & use different Linux distributions: find your favorite Linux distribution and build a customized workspace for your development or learning environment.
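Outside of Rancher, deploying any of these charts is a short Helm operation. The repository URL below is a placeholder, since each chart lives in its own repository; substitute the one for the chart you actually use:

```shell
# Placeholder repo URL -- substitute the chart's actual repository
helm repo add jellyfin <chart-repo-url>
helm repo update
# Install into a dedicated namespace, creating it if needed
helm install jellyfin jellyfin/jellyfin --namespace media --create-namespace
```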
Exposing Services to The World
Now that you have deployed several services, configure your home router to forward external ports to your LoadBalancer IP addresses. Be sure to forward the ports your services actually listen on. Alternatively, if you are using NGINX Ingress, you need to register a domain and configure a subdomain for each service.
Register A Domain
Choose any domain provider and purchase a domain. This is needed once you use NGINX Ingress, since its default routing method is based on domains and subdomains. NGINX Ingress listens on ports 80 and 443 and forwards each subdomain to its configured service. Here is a quick example:
- First, make sure you have an Nginx ingress controller deployed in your cluster. You can deploy the controller using the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml
Note that the URL of the YAML file may vary depending on the version of the ingress controller you want to use.
- Create a Kubernetes service for your application. For example, you can create a service with the following YAML file:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 8080
This creates a service named my-app that selects pods with the label app: my-app and exposes port 80.
- Create an Ingress resource that defines the routing rules for your application. For example, you can create an ingress resource with the following YAML file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              name: http
This creates an ingress resource named my-app that routes traffic to the my-app service when the host is my-app.example.com. The nginx.ingress.kubernetes.io/rewrite-target: / annotation removes the prefix from the URL, so requests to my-app.example.com/foo will be routed to the my-app service at the / path.
- Create a DNS record that points the subdomain my-app.example.com to the IP address of the MetalLB load balancer. You can find the IP address by running the following command:
kubectl get services -n ingress-nginx ingress-nginx-controller
This will display the external IP address of the ingress controller service. Note that it may take some time for DNS changes to propagate.
That’s it! Requests to my-app.example.com will now be routed to the my-app service in your Kubernetes cluster.
Note: for security reasons, you may want to add IP allowlisting to your ingress configuration, since some services should be exposed to your local network only.
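The NGINX ingress controller supports this kind of source-IP restriction with a per-ingress annotation. As a sketch, adding the following to an Ingress resource's metadata restricts it to a local subnet (192.168.1.0/24 is an example range; use your own LAN range):

```yaml
metadata:
  annotations:
    # Only accept requests from this source range; all other IPs get HTTP 403
    nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.1.0/24
```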
Conclusion
Using Kubernetes at home provides many advantages, such as scalability, fault tolerance, and service discovery. With Rancher, you can easily deploy your applications using Helm charts, allowing you to automate your home workloads. By following the steps outlined in this post, you can install and run Kubernetes on your own virtual machines, and deploy your applications with ease.