# A simple Kubernetes load balancer
Configures nginx to forward connections to your node IPs.
Services should be declared as NodePort, which means they
open a port on every node. When a request lands on any node,
it is forwarded to the correct pod via the network mesh Kubernetes
is using. In theory, there is a one-hop penalty.
But let's be honest: you're running a single LB, probably on a
GCE free tier N1 VM. That extra hop doesn't matter.
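
As a quick illustration (the deployment name `my-app` and the ports here are placeholders, not part of skubelb), a Service can be declared as NodePort with `kubectl`:

```sh
# Expose an existing deployment as a NodePort service;
# Kubernetes assigns a port (default range 30000-32767) on every node.
kubectl expose deployment my-app --type=NodePort --port=80 --target-port=8080

# Check which node port was assigned.
kubectl get service my-app
```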
## Config
Configure nginx to do what you want and test it, using any node IP for your testing.
Its config directory will become the `--template-dir` argument to the LB.

Move that directory (e.g., `/etc/nginx`) somewhere new
(e.g., `/etc/nginx-template/`), then replace the old path with a symlink
to the new directory (e.g., `ln -s /etc/nginx-template /etc/nginx`).
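
Putting that together, the setup might look like this (using the example paths above; `nginx -t` is just a sanity check):

```sh
# Validate the nginx config you just wrote.
nginx -t

# Move the live config aside so it becomes the template...
mv /etc/nginx /etc/nginx-template

# ...and point the old path at it with a symlink.
ln -s /etc/nginx-template /etc/nginx

# nginx should still load the same config through the symlink.
nginx -t
```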
Make a workspace directory for this tool; it will write configs to this folder
before updating the symlink you created above. It needs to be persistent so that
the service starts cleanly after a server reboot (e.g., `mkdir /var/skubelb/`).
Make sure the user running the tool has read access to the template folder, and read-write
access to the workspace folder and the config symlink.
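
To sanity-check those permissions (assuming the tool runs as the `skubelb` user created in the system service section below):

```sh
# Read access to the template directory.
sudo -u skubelb ls /etc/nginx-template >/dev/null && echo "template: readable"

# Read-write access to the workspace directory.
sudo -u skubelb sh -c 'touch /var/skubelb/.write-test && rm /var/skubelb/.write-test' && echo "workspace: writable"
```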
Run the server with a command like:
```sh
skubelb --needle some_node_ip \
  --workspace-dir /var/skubelb \
  --config-symlink /etc/nginx \
  --template-dir /etc/nginx-template \
  --listen 0.0.0.0:8888
```
Replace `some_node_ip` with the node IP you used during the initial setup.
Next, configure the Kubernetes nodes to POST to `http://loadbalancer:8888/register` when
they start, and to DELETE `http://loadbalancer:8888/register` when they shut down.
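
For a quick manual test you can hit the endpoint with curl, mirroring what the DaemonSet below does:

```sh
# Register this node with the load balancer.
curl -X POST http://loadbalancer:8888/register

# Deregister it again.
curl -X DELETE http://loadbalancer:8888/register
```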
### Running as a system service
Set up a user to run the service: create it
with `useradd -M skubelb` and prevent logins with `usermod -L skubelb`.
Make a workspace dir, `mkdir /var/skubelb/`, and give the
daemon user access to it with `chown skubelb:skubelb /var/skubelb/`.
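
As one copy-pasteable block:

```sh
# Create a service user with no home directory and no login.
useradd -M skubelb
usermod -L skubelb

# Workspace directory, owned by the daemon user.
mkdir -p /var/skubelb/
chown skubelb:skubelb /var/skubelb/
```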
Add the systemd config to `/etc/systemd/system/skubelb.service`:
```ini
[Unit]
Description=Simple Kubernetes Load Balancer
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=skubelb
ExecStart=/usr/local/bin/skubelb --needle some_node_ip \
--workspace-dir /var/skubelb \
--config-symlink /etc/nginx \
--template-dir /etc/nginx-template \
--listen 0.0.0.0:8888 \
--reload-cmd '/usr/bin/sudo systemctl reload nginx'
[Install]
WantedBy=multi-user.target
```
Make sure you update `--needle some_node_ip` with something
like `--needle 123.44.55.123`, the IP of the node you tested with.
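
Then reload systemd and start the service:

```sh
systemctl daemon-reload
systemctl enable --now skubelb
systemctl status skubelb
```

Note that for the `--reload-cmd` above to work as the unprivileged `skubelb` user, you will likely need a sudoers rule allowing that user to run `systemctl reload nginx` without a password.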
### Sample Kubernetes configuration
Deploy this [daemon set](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
to your cluster, replacing `lb_address` with the address of your load balancer.
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: skubelb
  namespace: skubelb
  labels:
    k8s-app: skubelb
spec:
  selector:
    matchLabels:
      name: skubelb
  template:
    metadata:
      labels:
        name: skubelb
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: skubelb
        image: alpine/curl:latest
        command: ['sh', '-c', 'echo "Wait for heat death of universe" && sleep 999999d']
        lifecycle:
          postStart:
            exec:
              command: ['curl', '-X', 'POST', 'lb_address:8888/register']
          preStop:
            exec:
              command: ['curl', '-X', 'DELETE', 'lb_address:8888/register']
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 10m
            memory: 100Mi
      terminationGracePeriodSeconds: 30
```
NOTE: you will need to add a firewall rule to allow these registration requests through. It is very important that the rule has a source filter; registrations should only be allowed from the Kubernetes cluster. Nginx will forward traffic to any host that registers, so an open endpoint could easily become a MitM vulnerability.
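
For example, on GCE (matching the free-tier setup mentioned above; the rule name, network tag, and source range below are placeholders for your own values):

```sh
# Allow registration traffic to the LB on port 8888,
# but only from the cluster's address range.
gcloud compute firewall-rules create skubelb-register \
  --allow tcp:8888 \
  --source-ranges 10.128.0.0/20 \
  --target-tags skubelb-lb
```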