At times, you need to bring resources deployed by another tool under the management of a Helm chart. By default, Helm doesn’t offer a straightforward way to do this without manually overwriting the existing resources, a well-known limitation that was raised in Helm’s GitHub issues some time ago.


While manually patching the pre-Helm resources of a single service is hardly a problem, the picture changes dramatically when you are dealing with dozens of services. In this article, I will demonstrate how to overcome this limitation without redeploying the services, and therefore without any forced downtime.

Requirements

  • Two Kubernetes clusters: a target one and a temporary one. You can use your existing clusters or deploy a testing environment.
  • A Linux host with kubectl, helm, and minikube installed.

Problem

Kubernetes administrators often run into trouble when trying to integrate resources that were not deployed with Helm into a new or existing Helm chart. Attempting to redeploy such resources with Helm fails with the error “Error: rendered manifests contain a resource that already exists…“. This happens because Helm refuses to overwrite objects it does not own.


One potential solution is to delete these resources and redeploy them with Helm. However, that approach means unavoidable production downtime and introduces additional risk. Instead, it is feasible to “deceive” Helm into recognizing the existing resources as if they had been deployed by Helm.

Solution

Helm keeps track of what it owns in a release state Secret (named sh.helm.release.v1.&lt;release&gt;.v&lt;revision&gt;). To make Helm treat existing resources as if it had deployed them, the workaround is to deploy the same code to a temporary cluster, copy Helm’s state Secret to the target cluster, and then let Helm sync the resources.
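For reference, the value Helm stores in that Secret is a base64-encoded, gzip-compressed JSON record of the release. Once such a Secret exists (for example, after step 6 below), you can peek inside it with something like the following sketch, which assumes base64 and gunzip are available on your host:

# Kubernetes base64-encodes Secret data, and Helm additionally base64-encodes
# and gzips the release record itself, hence the double decode before gunzip.
kubectl -n default get secret sh.helm.release.v1.test.v1 \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip | head -c 300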


Since we don’t know what exactly is deployed in your target cluster, here is some example code that you can deploy there to test the solution:

export KUBECONFIG=~/.kube/target
echo \
'apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-deployment
  template:
    metadata:
      labels:
        app: test-deployment
    spec:
      containers:
      - name: busybox
        image: busybox:1.35.0
        command: ["/bin/sh"]
        args: ["-c", "sleep 999999"]
        resources:
          requests:
            memory: "100Mi"
            cpu: "10m"
          limits:
            memory: "100Mi"
            cpu: "10m"' \
> test.yaml

kubectl apply -f test.yaml
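A quick check to confirm the example Deployment is actually up before moving on; the output should show one ready replica:

kubectl -n default get deployment test-deployment
kubectl -n default get pods -l app=test-deployment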

Implementation

1. Create a test Helm chart and add our code to it:

mkdir -p test/templates

echo \
'apiVersion: v1
name: test
version: 0.0.1
description: Test code
' \
> test/Chart.yaml

cp -a test.yaml test/templates/
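Before touching any cluster, it is worth confirming that the chart renders cleanly; both commands below are standard Helm calls:

helm lint ./test/
helm template test ./test/ | head -n 20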

2. Try to deploy the Helm chart to the target cluster – you should see the “resource already exists” error:

helm upgrade --install test ./test/ -n default
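The refusal comes from Helm’s ownership check: recent Helm 3 releases look for the app.kubernetes.io/managed-by label and the meta.helm.sh/release-name / meta.helm.sh/release-namespace annotations on the live object. Our Deployment, created with kubectl, has none of them, which you can confirm like this:

# The Deployment was created with kubectl, so it carries no Helm ownership metadata.
kubectl -n default get deployment test-deployment \
  -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'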

3. Get the target cluster version:

kubectl version | grep ^Server | grep --color GitVersion:
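If you prefer to script this, the same value can be captured into a variable and reused in the next step; this sketch assumes jq is installed:

# Grab the server's gitVersion, e.g. v1.25.3.
K8S_VERSION=$(kubectl version -o json | jq -r '.serverVersion.gitVersion')
echo "${K8S_VERSION}"

You can then pass it to Minikube as --kubernetes-version=${K8S_VERSION}.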

4. Start a temporary Kubernetes cluster with Minikube, using the same or a similar version:

minikube start -p aged --kubernetes-version=v1.25.3
export KUBECONFIG=~/.kube/config
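A quick sanity check that kubectl now points at the temporary cluster rather than the target one:

kubectl config current-context   # should print the profile name, "aged"
kubectl get nodes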

5. Deploy the Helm chart to the temporary cluster:

helm upgrade --install test ./test/ -n default
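At this point the temporary cluster holds both the release and its state Secret, which you can confirm before exporting it:

helm list -n default
kubectl -n default get secrets | grep sh.helm.release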

6. Save the chart state to a file:

kubectl -n default get secret sh.helm.release.v1.test.v1 -o yaml > test_state.yaml
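Note that the exported YAML also carries metadata specific to the temporary cluster (creationTimestamp, resourceVersion, uid). The apply in the next step may work as-is, but if the API server complains about these fields, strip them first and apply the cleaned file instead; a sketch assuming the yq v4 binary is available:

# Remove cluster-specific metadata copied from the temporary cluster.
yq eval 'del(.metadata.resourceVersion) | del(.metadata.uid) | del(.metadata.creationTimestamp)' \
  test_state.yaml > test_state_clean.yaml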

7. Deploy the state to the target cluster:

export KUBECONFIG=~/.kube/target
kubectl apply -n default -f test_state.yaml
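With the state Secret in place, Helm on the target cluster already believes the release exists; you can verify this before running the final upgrade:

helm list -n default
helm history test -n default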

8. Deploy the chart to the target cluster – you should see no errors:

helm upgrade --install test ./test/ -n default
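Optionally, you can also verify the release from the Helm side; the status should be deployed and the stored manifest should match the chart:

helm status test -n default
helm get manifest test -n default | head -n 20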

9. Finally, check the result: inspect the Helm metadata and diff the live resources against the manifest – you should see no differences:


kubectl get -f test.yaml -o yaml | grep helm
kubectl diff -f test.yaml


Now you can add any code you need to the test Helm chart and migrate any existing resources in the same way.
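For example, to pull an existing, non-Helm resource into the chart, export its live manifest into the templates directory and repeat the steps above; the resource name below is just a placeholder, and the cleanup again assumes yq v4:

# Hypothetical example: adopt an existing ConfigMap named "legacy-config".
kubectl -n default get configmap legacy-config -o yaml \
  | yq eval 'del(.metadata.resourceVersion) | del(.metadata.uid) | del(.metadata.creationTimestamp)' \
  > test/templates/legacy-config.yaml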

Conclusion

In this article, I have demonstrated how to take ownership of existing resources that were deployed without Helm, all without stopping production services – an easy and convenient workaround for Kubernetes administrators who have run into the same problem.