
Building Your Own Helm Chart

In this documentation we will go over the Helm configuration we require in order to have a smooth, seamless experience on our platform.

Each hyperlink redirects to the official documentation, which provides more depth to help you better understand the changes we are about to make.

Initializing Your First Chart

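If you haven't scaffolded a chart yet, running `helm create <chart-name>` from your terminal generates a starter chart containing the following files and directories:
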
.helmignore
Chart.yaml
charts
templates
values.yaml
  • Let's modify the file `Chart.yaml`

    • You should read each comment and understand what each one does
  • We want to change the `name`, `description`, `version` and `appVersion`, e.g.:

    apiVersion: v2
    name: hello_thales
    description: An Example Helm Chart to help Thales Devs

    # A chart can be either an 'application' or a 'library' chart.
    #
    # Application charts are a collection of templates that can be packaged into versioned archives
    # to be deployed.
    #
    # Library charts provide useful utilities or functions for the chart developer. They're included as
    # a dependency of application charts to inject those utilities and functions into the rendering
    # pipeline. Library charts do not define any templates and therefore cannot be deployed.
    type: application

    # This is the chart version. This version number should be incremented each time you make changes
    # to the chart and its templates, including the app version.
    # Versions are expected to follow Semantic Versioning (https://semver.org/)
    version: 0.1.0

    # This is the version number of the application being deployed. This version number should be
    # incremented each time you make changes to the application. Versions are not expected to
    # follow Semantic Versioning. They should reflect the version the application is using.
    # It is recommended to use it with quotes.
    appVersion: "1.0.0"
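  • After editing, `helm lint .` run from the chart's root directory is a quick way to catch syntax or formatting mistakes before going further.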

Editing Your Templates

The following changes are to be put in place to ensure a seamless experience on our platform.

Liveness and Readiness Probes

  • Ensure readiness probe is configured

  • Ensure liveness probe is configured

  • [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)

    • These configurations help Kubernetes know whether your application has started, whether it is still alive, and whether it is ready to accept traffic.

    • The configuration will be found in templates/deployment.yaml

      - Under `spec.template.spec.containers`
      - You should see this block of code:
      ```yaml
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
      livenessProbe:
        httpGet:
          path: /
          port: http
      readinessProbe:
        httpGet:
          path: /
          port: http
      ```
      * We will change this to leverage the `values.yaml` file and add more options to control both probes:

      ```yaml
      ports:
        - name: http
          containerPort: {{ .Values.application.port }}
          protocol: TCP
      {{- if .Values.livenessProbe.enabled }}
      livenessProbe:
        httpGet:
          path: /
          port: {{ .Values.application.port }}
        initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
        periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
        timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        successThreshold: {{ .Values.livenessProbe.successThreshold }}
        failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
      {{- end }}
      {{- if .Values.readinessProbe.enabled }}
      readinessProbe:
        httpGet:
          path: /
          port: {{ .Values.application.port }}
        initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
        periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
        timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        successThreshold: {{ .Values.readinessProbe.successThreshold }}
        failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
      {{- end }}
      ```
      * Now let's configure the [probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) in the `values.yaml`
      * To make use of the `httpGet` probe we need to define the container port in `values.yaml`, e.g. by adding an `application` block after the existing `image` block:
      ```yaml
      image:
        repository: nginx
        pullPolicy: IfNotPresent
        # Overrides the image tag whose default is the chart appVersion.
        tag: ""
      application:
        port: 80
      ```
      * Between the `ingress:` and `resources:` sections, add the following; these numbers can be adjusted to fit your application's needs.
      ```yaml
      livenessProbe:
        enabled: true
        initialDelaySeconds: 30
        periodSeconds: 10
        timeoutSeconds: 5
        failureThreshold: 6
        successThreshold: 1
      readinessProbe:
        enabled: true
        initialDelaySeconds: 5
        periodSeconds: 10
        timeoutSeconds: 5
        failureThreshold: 6
        successThreshold: 1
      ```

      Note: Readiness probes run on the container during its whole lifecycle.

    Caution: Liveness probes do not wait for readiness probes to succeed. If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startupProbe.
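
    Tip: you can inspect the rendered manifests at any point with `helm template .` run from the chart directory; for example, `helm template . --set livenessProbe.enabled=false` lets you confirm the liveness block is omitted when the probe is disabled.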

Pod Disruption Budget

  • [Disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/)

  • [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/)

  • As an application owner, you can create a PodDisruptionBudget (PDB) for each application. A PDB limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would like to ensure that the number of replicas running is never brought below the number needed for a quorum. A web front end might want to ensure that the number of replicas serving load never falls below a certain percentage of the total.

    Note: you need to take your affinity and anti-affinity rules, the number of nodes, and the replica count into account to build a proper PDB; a misconfigured PDB can result in a deadlock that prevents us from properly upgrading the cluster.

  • The first thing to do if you want to use a PDB is to run more than 1 replica of your application.

    • In `values.yaml`, you should find the line `replicaCount:`; depending on your needs, bump this to a number higher than 1.
    # Default values for template.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.

    replicaCount: 2

    image:
      repository: nginx
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: "1.20.1"
  • Then we need to create a new file in the `templates/` directory, call it `pdb.yaml`, and add these lines:

    {{- if .Values.podDisruptionBudget.enabled }}
    ---
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: {{ template "deployment.fullname" . }}-pdb
      namespace: {{ .Release.Namespace }}
      labels:
        app.kubernetes.io/name: {{ include "deployment.name" . }}
    spec:
      {{- if .Values.podDisruptionBudget.minAvailable }}
      minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
      {{- end }}
      {{- if .Values.podDisruptionBudget.maxUnavailable }}
      maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
      {{- end }}
      selector:
        matchLabels:
          app.kubernetes.io/name: {{ include "deployment.name" . }}
    {{- end }}
    • In `values.yaml`, add these lines and specify the disruption budget your application needs, following the documentation's recommendations. In our case we will allow every disruption except for 1 pod, hence setting `minAvailable` to 1.
      • You can specify only one of maxUnavailable and minAvailable in a single PodDisruptionBudget. maxUnavailable can only be used to control the eviction of pods that have an associated controller managing them. In the examples below, "desired replicas" is the scale of the controller managing the pods being selected by the PodDisruptionBudget.
    podDisruptionBudget:
      enabled: true
      minAvailable: 1
      # maxUnavailable:
    • With a `replicaCount` of 2, this is what the status of your PDB should look like:
    status:
      currentHealthy: 2
      desiredHealthy: 1
      disruptionsAllowed: 1
      expectedPods: 2
      observedGeneration: 1
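    • You can inspect this after deployment with `kubectl get pdb -n <your-namespace>` or `kubectl describe pdb <pdb-name> -n <your-namespace>` (placeholder names).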

Whitelist and Network Security Groups

  • We don't recommend using the nginx `whitelist-source-range` annotation, as it doesn't block other IPs from reaching the cluster; it only blocks access at the NGINX ingress level.
    • e.g.
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.201.248.4,10.201.248.5,10.201.198.128/25
  • We recommend opening a ticket with our team so we can add these addresses to the Network Security Groups instead. This way only allowed IPs are able to access the cluster.

Assign Pods to Nodes using Node Affinity

  • [Node Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)

  • Pod and node affinity are great tools to ensure a proper distribution of your apps. In this scenario we ask the scheduler to spread the deployment's pods across different nodes as much as possible.

  • In values.yaml you should see the following section.

    affinity: {}
  • Change it to:

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: hello_thales # This would be the name of your application; in this example we use hello_thales.
  • This tells the scheduler, via `podAntiAffinity` with the topology key `kubernetes.io/hostname` (a label every node carries with a unique value), to avoid scheduling a pod with the label `app.kubernetes.io/name: hello_thales` on a node that already runs another pod with that label.

  • This encourages pods to be scheduled on different nodes instead of all being stacked on the same one, lowering the risk of the application going down if a node fails.
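  • To confirm the spread after deployment, `kubectl get pods -o wide` run in your namespace shows which node each pod was scheduled on.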

CPU & Memory Requests and Limits
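
  • The chart generated by `helm create` ships with an empty `resources: {}` block in `values.yaml`. A minimal sketch of what you could set there; the numbers below are placeholders to be tuned to your application's real consumption:

    resources:
      requests:
        cpu: 100m      # placeholder values, adjust to your application
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi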

Image Tags and Digests

  • Ensure image tag is set to Fixed - not Latest or Blank
  • Ensure images are selected using a digest
    • We recommend that you avoid using the `:latest` tag, or leaving the tag blank, when deploying containers in production, as it is harder to track which version of the image is running and more difficult to roll back properly.
    • Pulling by digest allows you to “pin” an image to an exact build and guarantee that the image you're using is always the same. Digests also prevent race conditions: if a new image is pushed while a deploy is in progress, different nodes may pull the image at different times, so some nodes end up with the new image and some with the old one. Note that Kubernetes does not resolve tags to digests for you, so the digest has to be specified explicitly if you need this guarantee.
    image:
      repository: nginx
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: "1.20.1"

Managing Multiple Environment Variables with Secrets

  • If you need to manage multiple environment variables for your container during deployment, with their values coming from secrets, here's a simple way to do it instead of adding tons of lines to your deployment template.
  • At the top of your `deployment.yaml` add:
    {{- $deploymentName := include "deployment.fullname" . -}}
  • Then under `spec.template.spec.containers` add these lines:
    env:
      {{- range $key, $value := .Values.extraEnv }}
      - name: {{ $key }}
        valueFrom:
          secretKeyRef:
            name: {{ $deploymentName }}
            key: {{ $value }}
      {{- end }}
  • In `values.yaml`, add an `extraEnv:` section and, under it, each key/value pair you need:
    extraEnv:
      key1: value1
      key2: value2
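  • For illustration, with the `extraEnv` values above and a release whose fullname resolves to `hello-thales` (an assumed name), the rendered container spec would contain:
    env:
      - name: key1
        valueFrom:
          secretKeyRef:
            name: hello-thales  # assumed deployment fullname
            key: value1
      - name: key2
        valueFrom:
          secretKeyRef:
            name: hello-thales
            key: value2
  • The referenced Secret, named after the deployment fullname, must already exist in the namespace with the keys `value1` and `value2`.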

Security, Filesystem & Users
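
  • The chart generated by `helm create` also exposes empty `podSecurityContext: {}` and `securityContext: {}` blocks in `values.yaml`. A minimal hardening sketch, assuming your image can run as a non-root user with a read-only filesystem (the IDs and flags below are example values to adapt to your application):

    podSecurityContext:
      runAsNonRoot: true
      runAsUser: 1000   # example UID, adjust to your image
      fsGroup: 1000

    securityContext:
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL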

Enabling or Disabling the Deployment of a Helm Resource

  • In a scenario where you would apply a ConfigMap in your dev environment but not in prod, and you would like to use the same Helm chart for both, you can easily do so by adding a condition to the resource. Here's a quick example.
    • At the top of your resource file, add the following line, and add a matching `{{- end }}` at the bottom of the file (see the complete sketch after this list)
    {{- if .Values.ConfigMap.enabled -}}
    • In your `values.yaml` file, add these lines to control the deployment of the ConfigMap:
    ConfigMap:
      enabled: <bool>
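    • Putting it together, a minimal sketch of such a conditional resource (the file name `templates/configmap.yaml`, the resource name and the sample data are assumptions); note the closing `{{- end }}` at the bottom:
    {{- if .Values.ConfigMap.enabled -}}
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ include "deployment.fullname" . }}-config  # hypothetical name
      namespace: {{ .Release.Namespace }}
    data:
      # sample entry, replace with your own configuration
      LOG_LEVEL: "debug"
    {{- end }}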

Other Use Cases