By Gabriel Cuba
Why do we need to configure Kubernetes nodes?
Most Kubernetes users rarely think about node configuration, since most applications run fine without any special setup on the nodes.
However, this is not the case for Telco use cases. Sometimes, it’s necessary to tune the operating system with specific kernel parameters, or perhaps add kernel modules or drivers while the cluster is already running. This becomes even more critical when using bare metal servers as Kubernetes nodes, where re-provisioning is time-consuming and operationally expensive.
Current tools
Some tools, like Ansible, can automate node configuration, but when nodes are rebuilt or scaled, the configuration often has to be reapplied manually.
Certain operators, such as the Machine Config Operator or kube-node-init, address this problem. However, they either do not provide all the configuration types that Telco use cases require, are unmaintained, or are tied to a specific Kubernetes distribution.
An operator for node configurations
We propose node-config-operator, an all-purpose open-source operator built with the Operator Framework. It defines a Custom Resource (CR) to declare the desired configuration and ensures that the nodes comply with the declared settings.
The node-config-operator offers the following features:
- Easy addition and removal of configuration, with the CR as the only interface.
- Node segmentation using node selectors.
- Full Go (Golang) implementation.
- Simple to extend with new configuration types.
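For instance, node segmentation might look like the sketch below. The `nodeSelector` field name, its placement, and the label used are assumptions for illustration; check the CRD shipped in the repository for the actual schema.

```yaml
apiVersion: configuration.whitestack.com/v1beta1
kind: NodeConfig
metadata:
  name: nodeconfig-workers
spec:
  # Hypothetical field: restrict this NodeConfig to nodes carrying this label.
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  kernelParameters:
    parameters:
      - name: fs.file-max
        value: "54321"
    state: present
```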
You can install the operator with Helm by following the instructions in its repository.
In the next section, we’ll explore a few use cases where the operator can be used to configure Kubernetes nodes in an automated and declarative way.
Kernel parameters
Kernel parameters are configured using sysctl commands. For this example, suppose you need to set the following kernel parameter on the Kubernetes nodes:

```shell
sysctl -w fs.file-max=54321
```
This configuration can be applied with the following CR:
```yaml
apiVersion: configuration.whitestack.com/v1beta1
kind: NodeConfig
metadata:
  name: nodeconfig-sample
spec:
  kernelParameters:
    parameters:
      - name: fs.file-max
        value: "54321"
    state: present
```
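Removal is declared the same way as addition. Assuming `state: absent` is the counterpart of `state: present` (verify the accepted values against the CRD in the repository), the same parameter could be removed like this:

```yaml
apiVersion: configuration.whitestack.com/v1beta1
kind: NodeConfig
metadata:
  name: nodeconfig-sample
spec:
  kernelParameters:
    parameters:
      - name: fs.file-max
        value: "54321"
    state: absent  # assumed counterpart of "present"
```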
/etc/hosts entries
Hostname resolution can be configured locally in /etc/hosts. For this example, you need to add the following entries:

```
test.example.com 10.0.0.1
test2.example.com 10.0.0.2
test3.example.com test4.example.com 10.0.0.3
```
This configuration can be applied with the following CR:
```yaml
apiVersion: configuration.whitestack.com/v1beta1
kind: NodeConfig
metadata:
  name: nodeconfig-sample
spec:
  hosts:
    hosts:
      - hostname: "test.example.com"
        ip: "10.0.0.1"
      - hostname: "test2.example.com"
        ip: "10.0.0.2"
      - hostname: "test3.example.com test4.example.com"
        ip: "10.0.0.3"
    state: present
```
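Once the operator reconciles this CR, each matching node should end up with entries along these lines in /etc/hosts, following the hosts(5) format of an IP address followed by one or more hostnames (the third entry maps two hostnames to a single IP):

```
10.0.0.1 test.example.com
10.0.0.2 test2.example.com
10.0.0.3 test3.example.com test4.example.com
```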
Systemd overrides
Systemd overrides are written to /etc/systemd/system/<unit-name>.d. For example, if you want to change the `ExecStart` command of a running service, you can use the following CR:
```yaml
apiVersion: configuration.whitestack.com/v1beta1
kind: NodeConfig
metadata:
  name: nodeconfig-sample
spec:
  systemdOverrides:
    overrides:
      - name: getty@tty2.service
        file: |-
          [Service]
          ExecStart=
          ExecStart=sleep 2000
    state: present
```
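For the CR above, the operator should place a drop-in file with the following content under /etc/systemd/system/getty@tty2.service.d/ (the exact filename is chosen by the operator). Note the empty `ExecStart=` line: in systemd drop-ins, it clears the ExecStart commands inherited from the original unit, so the next line fully replaces them instead of appending another command:

```
[Service]
ExecStart=
ExecStart=sleep 2000
```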
Conclusion
As you can see, configuring nodes post-deployment (Day 2 operations) is easily done with the node-config-operator. There's no need for manual commands or Ansible playbooks: just apply a Custom Resource (CR), and within seconds the new configuration is rolled out to the nodes.