SSO Authentication with OAuth2 Proxy Sidecar Containers in Kubernetes

SSO in Kubernetes workloads using sidecar containers

TJ. Podobnik, @dorkamotorka
Level Up Coding

--

Safeguarding workloads in Kubernetes environments often involves putting authentication in front of them. However, certain applications lack native support for this, such as the ability to enable it via a Helm Chart or similar configuration files. Many services do ship with basic authentication (username and password) by default, which may suffice for development. In a production setting with multiple applications, however, access management quickly becomes challenging if it isn't handled properly.


In this post, we will demonstrate how to centralize application access management in Kubernetes through Single Sign-On (SSO). Additionally, we’ll showcase the implementation on the Uptime Kuma Monitoring Service using Kustomize. This will involve deploying an OAuth2 sidecar container acting as a proxy, configuring it with the KeyCloak Identity Provider (IdP), and redirecting users to the login page connected to KeyCloak.

Let’s start by going over the fundamental terms and concepts we’re using for our solution.

Single Sign-on (SSO)

SSO's strength lies in centralizing access control configuration in a single location. Imagine running multiple applications within your enterprise, each requiring distinct access permissions. Some services might be accessible only to specific teams, while others may be reserved for administrators or managers. Creating service-specific access controls might not be problematic initially, but it becomes a challenge when users transition between teams or leave the company, something that happens frequently. Consider the scenario where an employee resigns and all of their access permissions need to be revoked swiftly to prevent any potential malicious actions. Ideally this takes only a click or two, and that level of efficiency is almost always only possible with a single source of access control management.

There are various Identity Providers available for Single Sign-On (SSO), enabling centralized access control. The image below illustrates a few of them:

Though not depicted in the image, KeyCloak is another option. It's an open-source Identity and Access Management solution that I've deployed frequently in production, but less so for development. While it may initially seem like the obvious pick, choosing KeyCloak is not always clear-cut. It offers community support, a proven track record, and regular security patches, and it covers diverse use cases. However, its extensive customization options add complexity, which is why I tend to avoid it in scenarios that don't require enterprise scale. If you're choosing between SSO providers, also consider GitHub or GitLab. Ultimately, the choice depends on your specific toolset and the level of granularity you need for managing permissions.

OAuth2 Proxy (as a Sidecar Container)

To redirect users to a login page for SSO authentication, we need to deploy a proxy in front of the application. For this purpose we can use oauth2-proxy. Since we are managing Kubernetes workloads, it makes sense to deploy it as a sidecar container, because it is tightly coupled with a specific application. There are scenarios where running it as a single, centralized deployment might seem appealing, but given its small container image size (~12.8 MB), I recommend the decentralized pattern. This choice is mainly motivated by the reliability and scalability issues that can arise with central management.
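To make the pattern concrete, here is a minimal, illustrative Pod sketch (the application name, image, and ports are placeholders, not part of the demo below): the Service would point at the proxy's port, and the proxy forwards authenticated requests to the application over localhost.

# Illustrative sketch only: an app running alongside an oauth2-proxy sidecar
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-sso        # hypothetical name
spec:
  containers:
    - name: my-app             # the workload that lacks native SSO support
      image: my-app:latest     # placeholder image
      ports:
        - containerPort: 8080  # the app itself listens on 8080
    - name: oauth2-proxy       # the sidecar that handles authentication
      image: quay.io/oauth2-proxy/oauth2-proxy:v7.1.3
      args:
        - --provider=oidc
        - --http-address=0.0.0.0:4180        # port the Service should target
        - --upstream=http://127.0.0.1:8080   # forward authenticated traffic to the app
      ports:
        - containerPort: 4180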

For those of you with a particular interest in Kubernetes sidecar containers, I have another post that delves deeply into how you can effectively leverage them for API Caching.

Kustomize

We haven't yet come to the demo part, but our plan is to deploy the Uptime Kuma monitoring service using its Helm Chart. However, this Helm Chart doesn't inherently support an OAuth2 proxy sidecar for SSO authentication. To address this limitation, we need to patch the chart with some additional configuration. I believe the most effective approach is to use Kustomize. It allows us to declaratively define the changes we want to apply while keeping the original Helm Chart intact. Using Kustomize avoids the need to maintain our own fork of the Helm Chart, ensuring that we can seamlessly incorporate the latest updates to the chart.

💡 Note: Kustomize is a declarative configuration management tool for Kubernetes that allows users to customize and manage Kubernetes manifests without modifying the original YAML files.

This isn't a dedicated Kustomize blog post, so we won't delve into details. However, it's crucial to note that in our specific scenario, the Kustomize configuration is outlined in the kustomization.yaml file, which offers features such as applying patches, decrypting and applying Kubernetes Secrets, deploying Helm Charts, and more.

Uptime Kuma Demo

For demonstration purposes, we’ll explore the widely used self-hosted monitoring tool, Uptime Kuma. I’ve employed it in various projects due to its straightforward setup. However, for production deployment, the basic auth it offers out-of-the-box may not be suitable. Therefore, we’ll illustrate how to patch its Helm Chart to incorporate the SSO feature.

At this stage, I also want to emphasize that we won’t be covering the configuration and deployment of KeyCloak in detail. This process would require an entire separate post, and their documentation already provides a comprehensive guide.

The primary emphasis will be on the following kustomization.yaml file. This serves as the main deployment file that can be applied manually or through GitOps tools like ArgoCD or FluxCD. It essentially bundles all the resources, the puzzle pieces so to speak, together. Further details on these resources will be discussed in this post, but for reference, please check out the comments:

# Decrypt Kubernetes Secret objects encrypted using SOPS
generators:
  - ./decrypt-secrets.kustomize.yaml

# Apply the following resources in order
resources:
  - ./namespace.yaml

# Deploy the resources referenced in this file to the uptime namespace
namespace: uptime

# Deploy the uptime-kuma Helm Chart
helmCharts:
  - name: uptime-kuma
    repo: https://dirsigler.github.io/uptime-kuma-helm
    releaseName: uptime-kuma
    version: 2.15.0
    valuesFile: values.yaml

# Patch the uptime-kuma Kubernetes manifests from the Helm Chart
patches:
  - path: ./patches/sidecar-oauth-proxy-deployment.yaml
    target:
      kind: Deployment
  - path: ./patches/sidecar-oauth-proxy-service.yaml
    target:
      kind: Service

Essentially, the generators block executes the decryption of the encrypted Kubernetes Secret objects:

  • ./uptime-oauth-client.secret.yaml: Client secret configured in KeyCloak
  • ./uptime-oauth-cookie.secret.yaml: Cookie secret value generated using python -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())'

A Kubernetes Secret object for both scenarios would be as follows:

---
# uptime-oauth-client.secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: oauth-uptime-client-secret
  namespace: uptime
stringData:
  OAUTH_UPTIME_CLIENT_SECRET: <client-secret>

---
# uptime-oauth-cookie.secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: oauth-uptime-cookie-secret
  namespace: uptime
stringData:
  OAUTH_UPTIME_COOKIE_SECRET: <cookie-secret>

As noted in the code comments above, we are utilizing SOPS for secrets encryption. If you need a refresher on this, please refer to one of my previous posts.
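For completeness, the encryption side is typically driven by a .sops.yaml file in the repository. The following is only a minimal sketch, assuming age keys; the path regex and the key placeholder are illustrative and not taken from the demo setup:

# .sops.yaml (illustrative sketch, adjust to your own keys and file layout)
creation_rules:
  - path_regex: .*\.secret\.yaml$
    encrypted_regex: ^(data|stringData)$   # only encrypt the actual secret values
    age: <your-age-public-key>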

Alternatively, you could use sealed-secrets by Bitnami. Fortunately, Kustomize integrates well with SOPS through the ksops extension, as utilized in decrypt-secrets.kustomize.yaml:

apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: ksops-secret-decoder
  annotations:
    config.kubernetes.io/function: |
      exec:
        path: ksops
files:
  - ./uptime-oauth-client.secret.yaml
  - ./uptime-oauth-cookie.secret.yaml

💡 Note: ksops, or Kustomize-SOPS, is a kustomize KRM exec plugin for SOPS encrypted resources. It can be used to decrypt any Kubernetes resource, but is most commonly used to decrypt encrypted Kubernetes Secrets and ConfigMaps.
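Once everything is wired together, you can also render the final manifests locally before handing them to ArgoCD or FluxCD. Because this setup relies on a Helm Chart and on an exec KRM plugin (ksops), kustomize needs the corresponding features enabled, for example kustomize build --enable-helm --enable-alpha-plugins --enable-exec. Treat this as a sketch rather than a guaranteed invocation, since flag requirements can vary between kustomize versions.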

Kustomize Patches

We'll skip discussing the resources, namespace, and helmCharts blocks as they are relatively intuitive. However, a more crucial aspect is patching the Uptime Kuma Helm Chart to include the OAuth2 proxy sidecar container. Initially, we are faced with the following scenario:
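In YAML terms, a simplified sketch of what the chart renders before patching looks roughly like this (field values are illustrative and may differ between chart versions): the Service sends traffic on port 3001 straight to the Uptime Kuma container.

# Simplified, illustrative view of the unpatched resources rendered by the Helm Chart
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma
spec:
  ports:
    - name: http
      port: 3001
      targetPort: 3001            # traffic goes directly to the Uptime Kuma container
  selector:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/name: uptime-kuma
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
spec:
  template:
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1   # Uptime Kuma listens on port 3001 by default
          ports:
            - containerPort: 3001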

In summary, the Helm Chart deploys a Kubernetes Service and a Deployment that route traffic directly to the Uptime Kuma Pod. However, our objective is to deploy the OAuth2 proxy sidecar container. The Service should redirect traffic to this sidecar container, and after successful authentication, the sidecar container should then forward the authenticated requests to the Uptime Kuma container. This configuration is achieved through the patches referenced in the kustomization.yaml:

  • sidecar-oauth-proxy-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma-patch # not important
spec:
  # Instruct the Service to route traffic to the OAuth2 proxy sidecar container
  $patch: replace
  ports:
    - name: http
      port: 3001
      protocol: TCP
      targetPort: 3000
  selector:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/name: uptime-kuma
  • sidecar-oauth-proxy-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma-patch # not important
spec:
  template:
    spec:
      # Add OAuth2 proxy sidecar container configuration
      containers:
        - name: oauth2-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.1.3
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
          args:
            - --provider=oidc
            # Where to redirect authenticated requests
            - --upstream=http://127.0.0.1:3001
            # Listen on port 3000
            - --http-address=0.0.0.0:3000
            # Provide KeyCloak realm URL
            - --oidc-issuer-url=https://<keycloak-url>/auth/realms/<realm-name>
            - --pass-user-headers=true
            - --email-domain=*
            - --skip-provider-button=true
            - --scope=openid
          # Read Client and Cookie secret from Kubernetes Secrets
          env:
            - name: OAUTH2_PROXY_CLIENT_ID
              value: <client-ID-configured-in-keycloak>
            - name: OAUTH2_PROXY_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oauth-uptime-client-secret
                  key: OAUTH_UPTIME_CLIENT_SECRET
            - name: OAUTH2_PROXY_COOKIE_SECRET
              valueFrom:
                secretKeyRef:
                  name: oauth-uptime-cookie-secret
                  key: OAUTH_UPTIME_COOKIE_SECRET

The code comments should provide sufficient self-explanation, and the end result is as follows:

It’s worth noting that no modifications are needed in the application container, as we are simply proxying the traffic using the sidecar container. Now, whenever external users attempt to access our application, they will encounter the Keycloak login page first.

Conclusion

In conclusion, this post has illustrated a practical approach to centralizing application access management in Kubernetes through Single Sign-On (SSO), focusing on implementing SSO with the Uptime Kuma Monitoring Service using Kustomize. The necessity for robust access controls in a production environment, especially for applications lacking native support, was emphasized. By seamlessly integrating KeyCloak as an Identity Provider and deploying oauth2-proxy as a sidecar container, we’ve demonstrated a reliable solution for enhancing security.

To stay current with the latest cloud technologies, make sure to subscribe to my weekly newsletter, Cloud Chirp. 🚀
