
Private Application Exposure with Nginx

Context

When a team begins development, the application may not yet be sufficiently secured, so exposing it to the internet during the development phase is not good practice.

K8SaaS provides options for private ingress, allowing you to expose your application internally. This ensures that your containers are exposed using an ingress, but with a private IP (part of the AKS VNet).

To access your internal containers, you should:

  • Use a VPN (such as Zscaler, provided by the TrustNest platform).
  • Use a VNet peered to the AKS VNet.

Use Case

  • Proof of Concept (PoC) with low-security requirements.
  • Hybrid architecture (on-premise and cloud).
  • Applications that don’t support MFA in a containerized environment.

Architecture

Kubernetes Landing Zone offers several ways to expose an application:

  • A default Nginx ingress controller (in orange).
  • An internal Nginx ingress controller (in light green).

(Architecture diagram: default and internal Nginx ingress controllers)

Default Ingress

The default Nginx ingress controller uses an Azure Load Balancer to expose your application to the internet. Below is the network flow of an end-user accessing the application:

(Network flow diagram: end user accessing the application through the default ingress)

Internal Ingress

For development or low-security applications, you can use the internal ingress controller to expose your application internally.

In this configuration, you must have a correctly set up VPN to access the shared services. Depending on your context:

  • If you're part of the Thales Digital Factory: (network flow diagram)

  • If you're part of the Thales Corporate Network: (network flow diagram)

Updated Recommendations

If Your Cluster Has the Corporate Addon

If your cluster includes the corporate addon, there is already a private IP address and peering to Zscaler in place. No additional setup is required for private access.

If Your Cluster Does Not Have the Corporate Addon (C2)

Use case 1: Debug Locally

Use kubectl port-forward to access your internal application during debugging sessions.

kubectl port-forward svc/<service-name> <local-port>:<service-port>
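For example, assuming a hypothetical service named my-app that exposes port 80, you could forward it to local port 8080 and test it from your workstation:

kubectl port-forward svc/my-app 8080:80   # keeps running in the foreground
curl http://localhost:8080                # run from a second terminal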

Deploy Pomerium and set up Multi-Factor Authentication (MFA) with Azure AD. Pomerium acts as a secure gateway, ensuring that access is only granted to authenticated users.

See the Pomerium MFA documentation here.
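As an illustration only, a minimal Pomerium configuration for Azure AD might look like the sketch below. The hostnames, tenant ID, client ID/secret, backend service, and allowed email domain are placeholders, the exact keys depend on your Pomerium version, and MFA itself is enforced on the Azure AD side (for example through Conditional Access); refer to the documentation above for the authoritative setup.

# config.yaml - illustrative sketch, not a validated configuration
authenticate_service_url: https://authenticate.internal.project.eu2.k8saas.thalesdigital.io
idp_provider: azure
idp_provider_url: https://login.microsoftonline.com/<tenant-id>/v2.0
idp_client_id: <azure-ad-application-id>
idp_client_secret: <azure-ad-client-secret>
routes:
  - from: https://hello-world.internal.project.eu2.k8saas.thalesdigital.io
    to: http://hello-world-service.default.svc.cluster.local:80
    policy:
      - allow:
          and:
            - domain:
                is: <your-email-domain>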

Use case 4: Restrict Public Access on C3 Data

To handle any C3-CA data, deploy a cluster with the corporate addon to take advantage of pre-configured private IP addresses and Zscaler peering.

See the Corporate addon documentation here.

Requirements: Your cluster's IP range must use the 172.X.X.X subnet.

warning

If the cluster is in a legacy IP range (not 172.X), please request a new one (specifying a migration of your workload): Postit link

tip

You can check your cluster's IP range with the following command:

kubectl get nodes -o wide

Example output:

NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-agentpoolX   Ready    <none>   25d   v1.29.9   172.X.X.X     <none>        Ubuntu 22.04.5 LTS   5.15.0-1073-azure   containerd://1.X
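If you only need the node internal IPs, a jsonpath query (a convenience one-liner, not a platform requirement) prints them directly:

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'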

First, create a subdomain of your cluster's domain.

Example: if your cluster domain is *.project.eu2.k8saas.thalesdigital.io, you can create a private ingress subdomain such as *.internal.project.eu2.k8saas.thalesdigital.io.

Next, open a support request for Zscaler Private Access on this domain so that it can be peered with the VWAN. Use the following link: Zscaler Private Access Request
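Once the peering is in place and you are connected through the VPN, you can check that a host on the internal subdomain resolves to a private IP. The hostname below is a hypothetical example:

nslookup hello-world.internal.project.eu2.k8saas.thalesdigital.io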

HOWTO

Configure Your Application to Use the Right Nginx Ingress Controller

The two controllers are deployed with the following specific classes:

  • nginx for the default controller.
  • nginx-internal for the internal controller.
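You can confirm which ingress classes are available on your cluster before picking one:

kubectl get ingressclass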

Example:

Using the default controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - hello-world-ingress.demo.kaas.thalesdigital.io
      secretName: tls-hello-world

To switch to the internal ingress, update the ingressClassName field and redeploy:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx-internal
  tls:
    - hosts:
        - hello-world-ingress.internal.demo.kaas.thalesdigital.io
      secretName: tls-internal-hello-world
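After redeploying, you can verify that the ingress is now handled by the internal controller and exposes a private address from the AKS VNet (using the ingress name from the example above):

kubectl get ingress hello-world-ingress                                               # the ADDRESS column should show a private IP
kubectl get ingress hello-world-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'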

Next Steps

  • Explore the all-in-one publication service: here