Creating Kubernetes Operators with Kubebuilder


Overview

Learn how to create Kubernetes Operators using Kubebuilder, a powerful framework for building custom controllers and APIs.


What is Kubebuilder?

Kubebuilder is an SDK for developing Kubernetes Operators: a framework that makes it easy to build CRDs and controllers in Go.

It simplifies the development of custom controllers and APIs by providing scaffolding, code generation, and integration with Kubernetes tools.

A Kubernetes Operator is a custom controller that manages custom resources (defined by CRDs) and automates application deployment, scaling, and management.

Why Use Kubebuilder?

- It scaffolds the project layout, CRD types, and controller boilerplate for you.
- The generated Makefile covers code generation, manifest generation, testing, multi-platform image builds, and deployment.
- It builds on controller-runtime and integrates with standard Kubernetes tooling.


Getting Started with Kubebuilder

Installation

# Install Kubebuilder
brew install kubebuilder

# Initialize project
kubebuilder init --domain example.com --repo github.com/username/project

# Create API
kubebuilder create api --group apps --version v1 --kind MyApp
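
The create api command scaffolds a Go type for the new kind under api/v1/. Below is a trimmed sketch of roughly what the generated myapp_types.go looks like (the Foo field is just the scaffold's placeholder; the list type and init() registration are omitted here):

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MyAppSpec defines the desired state of MyApp.
// Fields added here become the .spec of the custom resource.
type MyAppSpec struct {
	// Foo is the placeholder field generated by the scaffold; replace it with
	// real fields and run `make generate` / `make manifests` afterwards.
	Foo string `json:"foo,omitempty"`
}

// MyAppStatus defines the observed state of MyApp.
type MyAppStatus struct {
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// MyApp is the Schema for the myapps API.
type MyApp struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MyAppSpec   `json:"spec,omitempty"`
	Status MyAppStatus `json:"status,omitempty"`
}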

Project Structure

.
├── api
│   └── v1
├── bin
├── cmd
├── config
│   ├── crd
│   ├── default
│   ├── manager
│   ├── network-policy
│   ├── prometheus
│   ├── rbac
│   └── samples
├── hack
├── internal
│   └── controller
└── test
    ├── e2e
    └── utils


Development Workflow

Step      | Command            | Description
----------|--------------------|------------------------------------
Generate  | make generate      | Generate code for custom resources
Manifests | make manifests     | Generate CRD manifests
Test      | make test          | Run unit tests
Build     | make docker-buildx | Build multi-platform images
Deploy    | make deploy        | Deploy to cluster



Creating an Operator with Kubebuilder

Below is a walkthrough of the Kubernetes Operator I created, k8s-namespace-sync.

Prerequisites

# Install kubebuilder
brew install kubebuilder

# Create git repo and clone
git clone git@github.com-somaz94:somaz94/k8s-namespace-sync.git

# Initialize the Kubebuilder project
kubebuilder init --domain nsync.dev --repo github.com/somaz94/k8s-namespace-sync

# Create api
kubebuilder create api --group sync --version v1 --kind NamespaceSync

File Structure Explanation

The CRD schema is defined in api/v1/namespacesync_types.go.

The entrypoint, main.go, lives under cmd/.

The controller is defined in internal/controller/namespacesync_controller.go. In addition to this file, I split the various functions into the files shown below.

ls
backup					filters.go				namespacesync_controller.go		namespacesync_controller_target_test.go	resources.go				sync.go
events.go				metrics.go				namespacesync_controller_filter_test.go	namespacesync_controller_test.go	suite_test.go				utils.go
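
At the heart of namespacesync_controller.go is the Reconcile method that controller-runtime calls whenever a watched object changes. The snippet below is not the actual implementation, just a minimal sketch of the general shape of such a reconciler (the SourceNamespace field is taken from the printer columns shown later; everything else is illustrative):

package controller

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"

	syncv1 "github.com/somaz94/k8s-namespace-sync/api/v1"
)

// NamespaceSyncReconciler reconciles a NamespaceSync object.
type NamespaceSyncReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

func (r *NamespaceSyncReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	// Fetch the NamespaceSync object that triggered this reconcile.
	var nsSync syncv1.NamespaceSync
	if err := r.Get(ctx, req.NamespacedName, &nsSync); err != nil {
		// The object may have been deleted; ignore not-found errors.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	logger.Info("reconciling", "sourceNamespace", nsSync.Spec.SourceNamespace)

	// ... copy Secrets/ConfigMaps from the source namespace to the target
	// namespaces and update .status.conditions (omitted in this sketch).

	return ctrl.Result{}, nil
}

// SetupWithManager registers the reconciler with the manager and declares
// which resource kinds it watches.
func (r *NamespaceSyncReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&syncv1.NamespaceSync{}).
		Complete(r)
}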

You can also put the test code in the same directory and run it with the make test command. If you are curious about the details, check the Makefile.
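
The real tests (suite_test.go and the *_test.go files above) run against envtest via the make test target. As a rough, self-contained illustration of the style of assertion, here is a hypothetical test using controller-runtime's fake client; it is not taken from the repository and assumes the reconciler sketch above:

package controller

import (
	"context"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"

	syncv1 "github.com/somaz94/k8s-namespace-sync/api/v1"
)

func TestReconcileDoesNotError(t *testing.T) {
	// Register the custom API types so the fake client can serve them.
	scheme := runtime.NewScheme()
	if err := syncv1.AddToScheme(scheme); err != nil {
		t.Fatalf("failed to add scheme: %v", err)
	}

	// Seed the fake client with a NamespaceSync object.
	nsSync := &syncv1.NamespaceSync{ObjectMeta: metav1.ObjectMeta{Name: "test-sync"}}
	c := fake.NewClientBuilder().WithScheme(scheme).WithObjects(nsSync).Build()

	r := &NamespaceSyncReconciler{Client: c, Scheme: scheme}
	_, err := r.Reconcile(context.Background(), ctrl.Request{
		NamespacedName: types.NamespacedName{Name: "test-sync"},
	})
	if err != nil {
		t.Fatalf("unexpected reconcile error: %v", err)
	}
}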

Once all the features are implemented, create a Docker Hub account so you can push the image after it is built.

Log in from the terminal.

docker login -u somaz940
# Enter token


Kubebuilder Markers

Kubebuilder markers define the schema and presentation of the Kubernetes Custom Resource Definition (CRD).

If you check the namespacesync_types.go file, the markers are defined as follows.

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Source",type="string",JSONPath=".spec.sourceNamespace"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].status"
// +kubebuilder:printcolumn:name="Message",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].message"
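
In the types file these markers sit directly above the Go type they describe. Here is a condensed sketch of how they attach to the NamespaceSync type (only the sourceNamespace spec field and the conditions status field referenced by the printer columns are shown; any other fields are omitted):

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NamespaceSyncSpec defines the desired state of NamespaceSync.
type NamespaceSyncSpec struct {
	// SourceNamespace backs the "Source" printer column via .spec.sourceNamespace.
	SourceNamespace string `json:"sourceNamespace"`
}

// NamespaceSyncStatus holds the observed state, including the conditions
// read by the "Status" and "Message" printer columns.
type NamespaceSyncStatus struct {
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Source",type="string",JSONPath=".spec.sourceNamespace"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].status"
// +kubebuilder:printcolumn:name="Message",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].message"

// NamespaceSync is the Schema for the namespacesyncs API.
type NamespaceSync struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   NamespaceSyncSpec   `json:"spec,omitempty"`
	Status NamespaceSyncStatus `json:"status,omitempty"`
}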

Running make manifests regenerates the CRD manifests from these markers, and make generate regenerates the deepcopy code for the API types.


If you check the Makefile, the test target is defined as follows; it runs all the prerequisite targets before executing the tests.

.PHONY: test
test: manifests generate fmt vet envtest ## Run tests.
	KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)" go test $$(go list ./... | grep -v /e2e) -coverprofile cover.out


You can also set the image push path (IMG) and the Kubernetes version used by envtest in the Makefile.

# Image URL to use all building/pushing image targets
IMG ?= somaz940/k8s-namespace-sync:v0.1.6
# ENVTEST_K8S_VERSION refers to the version of kubebuilder assets to be downloaded by envtest binary.
ENVTEST_K8S_VERSION = 1.31.0


If the tests pass, make docker-buildx builds images for multiple platforms at once.

I modified the Makefile so that it can push a version-specific tag, the latest tag, or both.

PLATFORMS ?= linux/arm64,linux/amd64,linux/s390x,linux/ppc64le
.PHONY: docker-buildx-tag
docker-buildx-tag: ## Build and push docker image for the manager for cross-platform support with specific version
	# copy existing Dockerfile and insert --platform=${BUILDPLATFORM} into Dockerfile.cross
	sed -e '1 s/\(^FROM\)/FROM --platform=\$$\{BUILDPLATFORM\}/; t' -e ' 1,// s//FROM --platform=\$$\{BUILDPLATFORM\}/' Dockerfile > Dockerfile.cross
	- $(CONTAINER_TOOL) buildx create --name k8s-namespace-sync-builder
	$(CONTAINER_TOOL) buildx use k8s-namespace-sync-builder
	# Build and push version-specific tag
	- $(CONTAINER_TOOL) buildx build --push --platform=$(PLATFORMS) \
		--tag ${IMG} \
		-f Dockerfile.cross .
	- $(CONTAINER_TOOL) buildx rm k8s-namespace-sync-builder
	rm Dockerfile.cross

.PHONY: docker-buildx-latest
docker-buildx-latest: ## Build and push docker image for the manager for cross-platform support with latest tag
	# copy existing Dockerfile and insert --platform=${BUILDPLATFORM} into Dockerfile.cross
	sed -e '1 s/\(^FROM\)/FROM --platform=\$$\{BUILDPLATFORM\}/; t' -e ' 1,// s//FROM --platform=\$$\{BUILDPLATFORM\}/' Dockerfile > Dockerfile.cross
	- $(CONTAINER_TOOL) buildx create --name k8s-namespace-sync-builder
	$(CONTAINER_TOOL) buildx use k8s-namespace-sync-builder
	# Build and push latest tag
	- $(CONTAINER_TOOL) buildx build --push --platform=$(PLATFORMS) \
		--tag $(shell echo ${IMG} | cut -f1 -d:):latest \
		-f Dockerfile.cross .
	- $(CONTAINER_TOOL) buildx rm k8s-namespace-sync-builder
	rm Dockerfile.cross

.PHONY: docker-buildx
docker-buildx: ## Build and push both version-specific and latest tags
docker-buildx: docker-buildx-tag docker-buildx-latest


If you want to deploy to the cluster, you can use the make deploy command.

make deploy IMG=somaz940/k8s-namespace-sync:v0.1.6


Check the APIService and CRD.

k get apiservices.apiregistration.k8s.io |grep nsync
v1.sync.nsync.dev                      Local                        True        25s

k get crd |grep nsync
namespacesyncs.sync.nsync.dev                2024-12-16T09:41:00Z


I created various sample custom resources (plus test ConfigMaps and Secrets) under config/samples.

ls config/samples/
kustomization.yaml			sync_v1_namespacesync_exclude.yaml	sync_v1_namespacesync_target.yaml	test-configmap2.yaml			test-secret2.yaml
sync_v1_namespacesync.yaml		sync_v1_namespacesync_filter.yaml	test-configmap.yaml			test-secret.yaml


The startup log of the controller is as follows.

2024-12-16T09:41:01.541Z        info    setup   creating controller     {"controller": "NamespaceSync", "scheme": "pkg/runtime/scheme.go:100"}
2024-12-16T09:41:01.541Z        info    namespacesync-controller        Setting up controller manager
2024-12-16T09:41:01.545Z        info    setup   controller created successfully {"controller": "NamespaceSync"}
2024-12-16T09:41:01.545Z        info    setup   starting manager
2024-12-16T09:41:01.545Z        info    controller-runtime.metrics      Starting metrics server
2024-12-16T09:41:01.545Z        info    setup   disabling http/2
2024-12-16T09:41:01.545Z        info    starting server {"name": "health probe", "addr": "[::]:8081"}
I1216 09:41:01.645802       1 leaderelection.go:254] attempting to acquire leader lease k8s-namespace-sync-system/47c2149c.nsync.dev...
I1216 09:41:01.659829       1 leaderelection.go:268] successfully acquired lease k8s-namespace-sync-system/47c2149c.nsync.dev
2024-12-16T09:41:01.659Z        debug   manager.events  k8s-namespace-sync-controller-manager-5874b756c8-h9ng6_2a3d0c37-b1f0-46c3-bcff-e28ccc2750e9 became leader       {"type": "Normal", "object": {"kind":"Lease","namespace":"k8s-namespace-sync-system","name":"47c2149c.nsync.dev","uid":"385be981-f090-4b4e-a341-7888c2858e4b","apiVersion":"coordination.k8s.io/v1","resourceVersion":"12611207"}, "reason": "LeaderElection"}
2024-12-16T09:41:01.660Z        info    manager Starting EventSource    {"controller": "namespacesync", "controllerGroup": "sync.nsync.dev", "controllerKind": "NamespaceSync", "source": "kind source: *v1.NamespaceSync"}
2024-12-16T09:41:01.660Z        info    manager Starting EventSource    {"controller": "namespacesync", "controllerGroup": "sync.nsync.dev", "controllerKind": "NamespaceSync", "source": "kind source: *v1.Namespace"}
2024-12-16T09:41:01.660Z        info    manager Starting EventSource    {"controller": "namespacesync", "controllerGroup": "sync.nsync.dev", "controllerKind": "NamespaceSync", "source": "kind source: *v1.Secret"}
2024-12-16T09:41:01.660Z        info    manager Starting EventSource    {"controller": "namespacesync", "controllerGroup": "sync.nsync.dev", "controllerKind": "NamespaceSync", "source": "kind source: *v1.ConfigMap"}



Best Practices


1. Project Structure:
   - Organize code logically
   - Separate concerns
   - Use consistent naming

2. Testing:
   - Write comprehensive tests
   - Use test fixtures
   - Test edge cases

3. Deployment:
   - Use multi-platform builds
   - Implement proper RBAC
   - Monitor performance

4. Documentation:
   - Document APIs clearly
   - Include usage examples
   - Maintain changelog


