🚀 Executive Summary

TL;DR: Manual kubeconfig management often leads to insecure, overly permissive, and unauditable access in Kubernetes, especially without a full Identity Provider. This article outlines three Kubernetes-native solutions, culminating in an automated custom controller, to provide secure, temporary, and auditable user access.

🎯 Key Takeaways

  • Distributing long-lived, highly privileged kubeconfig files (e.g., `admin.conf`) is a significant security risk, effectively granting root access to your Kubernetes infrastructure.
  • Kubernetes offers native mechanisms for secure user access without an external IdP, including dedicated ServiceAccounts with specific RBAC and temporary tokens generated via the TokenRequest API (the default path since Kubernetes 1.24), or client certificates signed by the cluster’s CA using the CSR API.
  • The most scalable and secure approach involves building a custom Kubernetes controller (operator) that automates the lifecycle of user access requests, dynamically creating RBAC resources, generating short-lived kubeconfigs, and storing them in Secrets, integrating seamlessly with GitOps workflows.

A Kubernetes-native way to manage kubeconfigs and RBAC (no IdP)

Tired of juggling kubeconfig files for your team? Explore three Kubernetes-native methods for managing user access and RBAC without the overhead of a full Identity Provider (IdP).

Taming the Kubeconfig Hydra: A Kubernetes-Native Guide to RBAC

I still remember the Slack message at 2 AM. “URGENT: prod-api-gateway is down”. My heart sank. After a frantic 30 minutes, we traced it back to a newly deployed, misconfigured Horizontal Pod Autoscaler. The culprit? A junior engineer, let’s call him Alex, who was just trying to view logs in the production namespace. He’d been given a copy of the “ops” kubeconfig file, which unfortunately had permissions to, well, do pretty much anything. It wasn’t his fault; it was mine. We had treated kubeconfigs like shared passwords on a sticky note, and we finally paid the price. That night, I swore off manual kubeconfig management for good.

So, What’s the Real Problem?

Let’s be honest, the problem isn’t Kubernetes. The problem is our legacy mindset. We’re used to SSH keys or config files that we can just copy and paste. A kubeconfig file feels the same, but it’s fundamentally different. It’s not just a server address; it’s a portable identity with a specific set of permissions baked in, often via a long-lived token. When you email `admin.conf` to a new developer, you’re not just giving them a key; you’re giving them a clone of an identity. If that identity is `cluster-admin`, you’ve just handed over the root password to your entire infrastructure.

The core challenge is this: how do we grant individual, temporary, and auditable access to our clusters without the complexity and cost of a full-blown Identity Provider like Okta or Active Directory, especially for smaller teams or internal tooling?

The Fixes: From Battlefield Patch to Strategic Overhaul

We’re engineers, so let’s solve this. I’m going to walk you through three approaches, starting with the quick-and-dirty fix you might use during an incident, and ending with the robust, automated solution you should be aiming for.

Solution 1: The On-Call Fix (Manual ServiceAccount per User)

It’s 3 AM, and your lead developer needs read-only access to the `prod-logging` namespace right now. You don’t have time for a beautiful, elegant solution. You need something that works in five minutes. This is it. (The example below uses a generic dev-namespace; swap in the namespace and names you actually need.)

The idea is simple: We create a dedicated ServiceAccount for the user, bind it to a Role with the exact permissions they need, and then manually generate a kubeconfig from that ServiceAccount’s token.

Step 1: Create the RBAC and ServiceAccount YAML.


# dev-access.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-namespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ana-dev-sa
  namespace: dev-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader-role
  namespace: dev-namespace
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ana-read-pods-binding
  namespace: dev-namespace
subjects:
- kind: ServiceAccount
  name: ana-dev-sa
  namespace: dev-namespace
roleRef:
  kind: Role
  name: pod-reader-role
  apiGroup: rbac.authorization.k8s.io

Step 2: Apply it and script the kubeconfig generation.

After running kubectl apply -f dev-access.yaml, you can use a script to pull the token and cluster details to build the file.


#!/bin/bash
# A quick and dirty script to generate a kubeconfig for a ServiceAccount.
# NOTE: it reads the auto-generated token Secret, so it only works on clusters
# older than v1.24 (see the warning below for the TokenRequest alternative).
set -euo pipefail

SERVICE_ACCOUNT_NAME=ana-dev-sa
NAMESPACE=dev-namespace
CLUSTER_NAME=$(kubectl config view --minify -o jsonpath='{.clusters[0].name}')
SERVER_URL=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
SECRET_NAME=$(kubectl get sa "$SERVICE_ACCOUNT_NAME" -n "$NAMESPACE" -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET_NAME" -n "$NAMESPACE" -o jsonpath='{.data.token}' | base64 --decode)
CA_CERT_DATA=$(kubectl get secret "$SECRET_NAME" -n "$NAMESPACE" -o jsonpath='{.data.ca\.crt}')

cat << EOF > ana-dev.kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: $CLUSTER_NAME
  cluster:
    server: $SERVER_URL
    certificate-authority-data: $CA_CERT_DATA
contexts:
- name: ${SERVICE_ACCOUNT_NAME}@${CLUSTER_NAME}
  context:
    cluster: $CLUSTER_NAME
    user: $SERVICE_ACCOUNT_NAME
    namespace: $NAMESPACE
current-context: ${SERVICE_ACCOUNT_NAME}@${CLUSTER_NAME}
users:
- name: $SERVICE_ACCOUNT_NAME
  user:
    token: $TOKEN
EOF

echo "Kubeconfig created: ana-dev.kubeconfig"

Verdict: It’s fast and effective in a pinch. But it’s not scalable, the tokens are often long-lived, and it’s a manual process prone to error. Use it, but feel a little guilty and plan to replace it.

A Word of Warning: Be aware that as of Kubernetes 1.24, ServiceAccount tokens are no longer auto-generated in Secrets. You’ll need to use the TokenRequest API to generate a temporary token (kubectl create token my-sa), which is actually much more secure! My script above is for older clusters; on newer ones, adapt it along the lines of the sketch below.
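
For 1.24+ clusters, a minimal sketch of the same idea (same ServiceAccount and RBAC as above) looks like this; the --duration flag caps the token lifetime, subject to the API server’s configured maximum:


# Kubernetes 1.24+: mint a short-lived token instead of reading an auto-generated Secret
TOKEN=$(kubectl create token ana-dev-sa -n dev-namespace --duration=8h)

# The cluster CA is published in every namespace via the kube-root-ca.crt ConfigMap
CA_CERT_DATA=$(kubectl get configmap kube-root-ca.crt -n dev-namespace \
  -o jsonpath='{.data.ca\.crt}' | base64 | tr -d '\n')

# Plug $TOKEN and $CA_CERT_DATA into the same kubeconfig template as before.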

Solution 2: The “Do It Right” Method (Client Certificates & CSR API)

Okay, the fire is out. Now let’s build a process that doesn’t rely on copy-pasting non-expiring tokens. The Kubernetes-native way to handle individual users is with client certificates. The cluster’s Certificate Authority (CA) can sign a Certificate Signing Request (CSR) for a user, granting them a unique, verifiable identity.

Here’s the workflow:

  1. The User: Generates a private key and a CSR on their own machine. This is critical—the private key never leaves their possession.
    openssl genrsa -out ana.key 2048
    openssl req -new -key ana.key -out ana.csr -subj "/CN=ana/O=developers"
  2. The Admin (You): The user sends you the ana.csr file. You create a CSR object in Kubernetes from it. Note that the $(...) in the request field below is shell syntax: either paste in the output of cat ana.csr | base64 | tr -d '\n' yourself, or apply the manifest through a heredoc (cat <<EOF | kubectl apply -f - ... EOF) so the substitution expands.
    # csr.yaml
    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: ana-csr
    spec:
      request: $(cat ana.csr | base64 | tr -d '\n')
      signerName: kubernetes.io/kube-apiserver-client
      usages:
      - client auth
      expirationSeconds: 86400 # 24 hours

  3. You apply this, inspect it with kubectl get csr, and approve it: kubectl certificate approve ana-csr. Remember that the certificate only authenticates Ana; she still needs a RoleBinding whose subject is kind: User, name: ana (or the developers group from the cert’s O field) before she can actually do anything.
  4. The User Again: You send them back the signed certificate. They can now construct their own kubeconfig using their private key and the new cert, as sketched below.
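
The hand-off back to Ana is then just a couple of commands. A minimal sketch, with file names and the server URL as placeholders you’d fill in:


# Admin: extract the signed certificate from the approved CSR object
kubectl get csr ana-csr -o jsonpath='{.status.certificate}' | base64 --decode > ana.crt

# User: assemble a kubeconfig from the private key she kept and the cert she got back
kubectl config set-cluster prod-cluster --server=https://<api-server:6443> \
  --certificate-authority=<cluster-ca.crt> --embed-certs=true --kubeconfig=ana.kubeconfig
kubectl config set-credentials ana --client-key=ana.key --client-certificate=ana.crt \
  --embed-certs=true --kubeconfig=ana.kubeconfig
kubectl config set-context ana@prod-cluster --cluster=prod-cluster --user=ana \
  --kubeconfig=ana.kubeconfig
kubectl config use-context ana@prod-cluster --kubeconfig=ana.kubeconfig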

Verdict: This is vastly more secure. Access is tied to an individual’s cryptographic identity, can be set to expire, and is auditable. The downside? It’s still a multi-step, manual process that requires coordination. Better, but not perfect.

Solution 3: The Automation-First Approach (The Custom Controller)

This is where we put on our architect hats. The previous two methods are still manual toil. True DevOps is about automating that toil away. The ultimate solution is to build a small operator to manage these requests for us, using a Custom Resource Definition (CRD).

Imagine this workflow:

A developer, Ana, needs access. She creates a simple YAML file:


# ana-access-request.yaml
apiVersion: techresolve.com/v1alpha1
kind: KubeconfigRequest
metadata:
  name: ana-request-prod-logging
  namespace: user-access
spec:
  user: "ana"
  cluster: "prod-cluster-1"
  role: "pod-reader-role"
  duration: "8h"

She commits this to a Git repository. ArgoCD or Flux (you are using GitOps, right?) picks it up and applies it to the cluster.
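
Before any of this works, the cluster has to know what a KubeconfigRequest is. Kubebuilder will scaffold a fuller version for you, but a minimal hand-written CRD for the resource above might look something like this (the group and field names are simply the assumptions carried over from the example):


apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: kubeconfigrequests.techresolve.com
spec:
  group: techresolve.com
  scope: Namespaced
  names:
    kind: KubeconfigRequest
    singular: kubeconfigrequest
    plural: kubeconfigrequests
  versions:
  - name: v1alpha1
    served: true
    storage: true
    subresources:
      status: {}
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              user:     { type: string }
              cluster:  { type: string }
              role:     { type: string }
              duration: { type: string }
          status:
            type: object
            properties:
              phase:      { type: string }
              secretName: { type: string }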

A custom controller you’ve written is watching for KubeconfigRequest objects. When it sees this new one, it:

  1. Validates that “ana” is an allowed user and “pod-reader-role” is a permissible role.
  2. Dynamically creates a temporary ServiceAccount and RoleBinding (like in Solution 1).
  3. Uses the TokenRequest API to generate a short-lived token.
  4. Constructs a complete kubeconfig file.
  5. Stores that kubeconfig file in a Kubernetes Secret that only Ana can access.
  6. Updates the status of the KubeconfigRequest object to “Ready” and points to the secret.

Verdict: This is the dream. It’s fully automated, auditable via Git history, self-service for developers, and enforces security policy through code. It requires development effort to build the controller (using Kubebuilder or Operator-SDK), but it solves the problem permanently and at scale.

| Method | Pros | Cons |
| --- | --- | --- |
| 1. Manual ServiceAccount | Fast, simple, no new tools needed. | Insecure (long-lived tokens), manual, error-prone, doesn’t scale. |
| 2. CSR API | Very secure (private keys are not shared), native, auditable identity. | Still manual, requires back-and-forth communication. |
| 3. Custom Controller | Fully automated, GitOps-friendly, self-service, highly secure and scalable. | Requires significant upfront development effort. |

Conclusion

Stop emailing kubeconfigs. Just stop. You have the tools within Kubernetes itself to build a secure, auditable, and user-friendly access management system. Start with the manual ServiceAccount method if you must, but have a clear path toward using the CSR API or, ideally, building an automated operator. Your 2 AM self will thank you.

Darian Vance

Lead Cloud Architect & DevOps Strategist

With over 12 years in system architecture and automation, Darian specializes in simplifying complex cloud infrastructures. An advocate for open-source solutions, he founded TechResolve to provide engineers with actionable, battle-tested troubleshooting guides and robust software alternatives.


🤖 Frequently Asked Questions

❓ How can I manage Kubernetes user access and kubeconfigs securely without an Identity Provider?

You can use Kubernetes-native methods such as creating dedicated ServiceAccounts with specific RoleBindings and generating temporary tokens via the TokenRequest API, or by issuing client certificates signed by the cluster’s CA using the CSR API. For automated, self-service management, a custom controller watching KubeconfigRequest CRDs is the ideal solution.

❓ How do these Kubernetes-native methods compare to using a full Identity Provider (IdP) for access management?

Kubernetes-native methods provide secure, auditable, and temporary access directly within the cluster’s RBAC framework, avoiding the overhead, cost, and complexity of integrating an external IdP like Okta or Active Directory. While an IdP offers centralized identity management across many systems, these solutions are tailored for Kubernetes-specific access control, making them efficient for smaller teams or internal tooling where an IdP might be excessive.

❓ What is a common implementation pitfall when generating kubeconfigs for ServiceAccounts, especially with newer Kubernetes versions?

A common pitfall is relying on ServiceAccount tokens auto-generated in Secrets for long-lived access, which is insecure. As of Kubernetes 1.24, these tokens are no longer auto-generated in Secrets. The correct and more secure approach is to use the TokenRequest API (`kubectl create token my-sa`) to generate temporary, time-bound tokens, significantly limiting their exposure and validity.
