After Building Hundreds of Helm Charts, These Are the Rules I Live By

Jan 01, 2020

After deploying hundreds of applications to Kubernetes with Helm, I’ve learned that a well-crafted Helm chart is the difference between a smooth, reliable deployment and a weekend spent debugging. I’ve developed a set of best practices that have become my personal playbook for creating production-ready charts. These are the rules I live by.

My Core Charting Principles

Start With a Solid Chart Structure

I learned this lesson the hard way after inheriting a chart where someone had crammed multiple resources into single template files. It was a nightmare to maintain. Now, the first thing I do when creating a new chart is set up a clean structure that I know future-me will appreciate.

I keep one Kubernetes resource per template file. deployment.yaml contains only the Deployment. service.yaml contains only the Service. This might seem obvious, but I’ve seen charts where people combine related resources just to reduce file count. Don’t do that. The extra files are worth it for the clarity.

The _helpers.tpl file is where I put all my reusable template logic. This includes things like generating consistent labels, constructing resource names, and building image tags. If I’m writing the same template logic more than once, it goes in _helpers.tpl.

For values files, I maintain a base values.yaml with sensible defaults and then create environment-specific overlays like values-prod.yaml and values-staging.yaml. This lets me keep environment differences explicit rather than buried in conditionals throughout the templates.
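
For reference, here’s the layout I end up with. A sketch, where my-app and the exact set of templates are placeholders:

my-app/
  Chart.yaml               # chart metadata: name, version, appVersion
  values.yaml              # documented defaults, safe for development
  values-staging.yaml      # staging overrides
  values-prod.yaml         # production overrides
  templates/
    _helpers.tpl           # shared template logic
    deployment.yaml        # only the Deployment
    service.yaml           # only the Service
    configmap.yaml         # only the ConfigMap
    tests/
      test-connection.yaml # helm test hook (more on tests below)

At deploy time, an overlay gets layered on top of the defaults with something like helm upgrade --install my-app ./my-app -f my-app/values-prod.yaml.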

Treat Your values.yaml Like a Public API

Here’s something that took me too long to realize: the values.yaml file isn’t just configuration; it’s the public interface to your chart. Anyone using your chart will interact with it primarily through the values file. If it’s poorly organized or undocumented, people will struggle.

I organize values hierarchically by resource type. There’s an image section with repository, tag, and pullPolicy. A serviceAccount section with create, annotations, and name. A resources section for CPU and memory limits. This structure mirrors how I think about Kubernetes resources, which makes it intuitive.

Every value gets a comment explaining what it does and what the valid options are. Yes, this takes time. Yes, it’s worth it. I’ve spent hours debugging chart deployments only to discover someone misunderstood what a value was supposed to control. Good comments prevent that.

I also provide sensible defaults for everything. The chart should work out of the box for development environments. Production-specific settings like resource limits or security contexts can be overridden, but they should have reasonable starting values.
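
Put together, the top of one of my values.yaml files looks something like this. A trimmed sketch: the repository is a placeholder, and the empty tag assumes the deployment template falls back to .Chart.AppVersion:

# Container image to deploy.
image:
  repository: nginx          # placeholder; any OCI image reference
  tag: ""                    # empty means "use the chart's appVersion"
  pullPolicy: IfNotPresent   # Always, IfNotPresent, or Never

# ServiceAccount for the pods.
serviceAccount:
  create: true      # set to false to reuse an existing account
  annotations: {}   # e.g. cloud IAM role bindings
  name: ""          # empty means "derive from the release name"

# CPU/memory requests and limits. Left empty so the chart runs
# anywhere out of the box; override these for production.
resources: {}
  # limits:
  #   cpu: 500m
  #   memory: 256Mi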

Keep Templates DRY With Helpers

Early in my Helm journey, I wrote the same label definitions in every template. Then the label format would change and I’d have to update 15 files. That’s when I learned to love _helpers.tpl.

Now I define named templates for anything I use more than once. Standard labels, resource names, selector labels, image strings: all of it goes in helpers. Here’s a simple example I use in almost every chart:

{{/* Standard labels that I use everywhere */}}
{{- define "my-app.labels" -}}
helm.sh/chart: {{ include "my-app.chart" . }}
app.kubernetes.io/name: {{ include "my-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

This helper gets used in every resource, so when I need to add a new standard label, I change it in one place and it propagates everywhere. The time investment in setting up good helpers pays off within weeks.
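
For completeness, the my-app.name and my-app.chart helpers referenced above follow the helm create scaffold’s conventions. A sketch of how I define them in _helpers.tpl:

{{/* Chart name, truncated to the 63-character label limit */}}
{{- define "my-app.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end }}

{{/* Chart name and version, used in the helm.sh/chart label */}}
{{- define "my-app.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end }}

Each resource then pulls the labels in with a single include:

metadata:
  labels:
    {{- include "my-app.labels" . | nindent 4 }}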

Build Security In From Day One

I used to treat security as something to add later, after getting the chart working. That was stupid. Security should be the default, not an upgrade path.

Now every chart I create starts with secure defaults. In values.yaml, I set a securityContext and podSecurityContext that run containers as non-root, drop all Linux capabilities, and use a read-only root filesystem. If an application needs special privileges, the person deploying it can explicitly grant them. But the default is locked down.

# Security defaults that go in every values.yaml I write
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 10001

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: true

This configuration has caught real vulnerabilities in container images I was using. The read-only root filesystem in particular is great because it forces you to explicitly declare any writable volumes your app needs, which makes you think about what file access is actually necessary.
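
To keep those defaults overridable without touching the templates, the Deployment splices them in with toYaml. A sketch of the relevant part of deployment.yaml:

  template:
    spec:
      # Pod-level context from values.yaml
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          # Container-level context from values.yaml
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}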

The Checksum Annotation Trick

Here’s a gotcha that bit me several times before I learned the solution: if you update a ConfigMap or Secret but don’t change anything in your Deployment spec, Kubernetes won’t roll your pods. Your new configuration just sits there, unused, until something else triggers a rollout.

The fix is dead simple but not obvious. Add a checksum annotation to your Deployment’s pod template that contains a hash of the rendered ConfigMap or Secret. Now when the config changes, the checksum changes, which changes the Deployment spec, which triggers a rollout.

# This goes in your Deployment template's pod annotations
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

This one line has saved me from so many “why isn’t my config updating” debugging sessions.
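
In context, the annotation sits on the pod template’s metadata, and you can add one per config file. This sketch assumes the ConfigMap and Secret live in templates/configmap.yaml and templates/secret.yaml:

  template:
    metadata:
      annotations:
        # Re-rendered on every upgrade; changes whenever the config does
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}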

Actually Test Your Charts

I’ll be honest, I didn’t write Helm tests for my first dozen charts. Then I deployed a chart to production that passed helm lint but had a typo in the Service selector. The pods were running, but no traffic could reach them. That was embarrassing.

Now I write helm test resources for every chart. These are just Kubernetes Job or Pod resources with the helm.sh/hook: test annotation. They run after deployment and verify basic functionality. My standard test is a simple pod that curls the Service endpoint and checks for a 200 response.
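
A minimal version of that test looks like the sketch below. It assumes a my-app.fullname helper and a Service listening on port 80:

# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: {{ include "my-app.fullname" . }}-test-connection
  annotations:
    # Marks this pod as a test; it only runs when you call helm test
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:8.8.0
      # -f makes curl exit non-zero on HTTP errors, failing the test
      args: ['-f', 'http://{{ include "my-app.fullname" . }}:80']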

More importantly, my CI/CD pipeline runs both helm lint and helm test on every commit. Lint catches syntax errors and structural problems. Test catches runtime issues like bad selectors or misconfigured services. Together they catch maybe 80% of the bugs that used to make it to production.
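
The pipeline steps themselves are only a few commands against a throwaway namespace. The release and namespace names here are placeholders:

helm lint ./my-app
helm upgrade --install my-app ./my-app --namespace ci-test --create-namespace --wait
helm test my-app --namespace ci-test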

What I’ve Learned

Building Helm charts well is less about knowing every template function and more about having a consistent approach. The principles I follow now are simple:

Structure your charts for maintainability, not brevity. One resource per file. Consistent naming. Clear organization. Future you will thank present you.

Treat your values file as the public API of your chart. Document it well. Provide good defaults. Organize it logically. This is what users interact with.

Security should be the default state, not something people opt into. Run as non-root. Drop capabilities. Use read-only filesystems. Make people explicitly ask for elevated privileges rather than having to remember to lock things down.

Use checksum annotations on Deployments that depend on ConfigMaps or Secrets. This is not optional if you want reliable config updates.

Test your charts in CI/CD. Both lint and functional tests. Catching bugs in CI is so much better than catching them in production.

Keep your templates DRY with helpers. Write template logic once, use it everywhere.

These practices have saved me countless hours of debugging and made my charts much more reliable. They’re not complicated rules, just consistent discipline applied over time.
