If you want full control over your Kubernetes monitoring setup, deploying kube-state-metrics with plain YAML is the most direct way to do it. Helm charts are convenient, but they hide details. YAML gives you clarity. You decide:
- what gets deployed
- how it behaves
- what it exposes
This guide shows you exactly how to deploy Kube-State-Metrics using YAML without unnecessary complexity. You will understand not just how to set it up, but why each part exists. If your goal is a clean, predictable monitoring pipeline, this is where it starts.
What Kube-State-Metrics YAML Actually Does
Before writing any YAML, it helps to understand what you are deploying. Kube-State-Metrics does one simple job:
- It reads the Kubernetes API
- Converts object states into metrics
- Exposes them for Prometheus
That is it. It does not:
- monitor resource usage
- trigger alerts
- store data
If you are unclear on how this fits into your stack, it is worth understanding the full data flow. A deeper breakdown is covered in the kube-state-metrics architecture article, and it connects directly to what you are about to deploy here.
When You Should Use YAML Instead of Helm
YAML is not always the fastest option, but it is the most controlled. Use YAML when:
- you want full visibility into configuration
- you need to customize resources precisely
- you are working in production environments
- you want predictable, version-controlled deployments
Avoid YAML if:
- you just need a quick setup
- you are not managing configuration at scale
In short, Helm is fast and YAML is precise.
Kube-State-Metrics YAML Deployment (Step-by-Step)
This is a clean, minimal deployment. No unnecessary extras.
Step 1: Create a Namespace
Keeping monitoring components isolated is a good practice.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kube-state-metrics
```
Apply it:
```bash
kubectl apply -f namespace.yaml
```
Step 2: Service Account and RBAC
Kube-State-Metrics needs read access to cluster resources.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: kube-state-metrics
```
Cluster role:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "services"]
    verbs: ["list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["list", "watch"]
```
Binding:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
  - kind: ServiceAccount
    name: kube-state-metrics
    namespace: kube-state-metrics
```
Why this matters:
Without correct permissions, metrics simply will not exist. Most “it is not working” issues start here.
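You can sanity-check the permissions before going further. This has to run against a live cluster, so treat it as a manual check rather than something to script blindly:

```shell
# Ask the API server whether the ServiceAccount may list pods cluster-wide.
kubectl auth can-i list pods \
  --as=system:serviceaccount:kube-state-metrics:kube-state-metrics
# A correct setup answers "yes"; "no" means the ClusterRoleBinding is wrong.
```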
Step 3: Deployment
Now the core component.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-state-metrics
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
        - name: kube-state-metrics
          # Prefer pinning a specific release tag in production instead of :latest
          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:latest
          ports:
            - containerPort: 8080
```
Keep it minimal. You can scale later. Start clean.
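If you want the pod to recover from hangs automatically, you can add probes to the container spec. The exact probe paths differ between kube-state-metrics releases, so the upstream manifests for your version are the authority; a common shape, assuming a `/healthz` endpoint on the metrics port, looks like:

```yaml
# Assumption: /healthz is served on the metrics port (8080) in your release;
# verify against the upstream manifest for the exact version you deploy.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
```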
Step 4: Service
Expose metrics to Prometheus.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: kube-state-metrics
spec:
  selector:
    app: kube-state-metrics
  ports:
    - port: 8080
      targetPort: 8080
```
Step 5: Apply Everything
```bash
kubectl apply -f .
```
Verify:
```bash
kubectl get pods -n kube-state-metrics
```
If the pod is running, your deployment is active.
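A running pod does not yet prove the metrics endpoint works. A quick manual check from your workstation (requires a live cluster):

```shell
# Forward the service to localhost, then fetch a sample of the metrics.
kubectl port-forward -n kube-state-metrics svc/kube-state-metrics 8080:8080 &
curl -s http://localhost:8080/metrics | head
# You should see plain-text metric lines such as kube_pod_status_phase{...} 1
```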
How Prometheus Connects to It
Deployment alone is not enough. Prometheus must scrape it. You need a scrape config like:
```yaml
- job_name: 'kube-state-metrics'
  static_configs:
    - targets: ['kube-state-metrics.kube-state-metrics.svc.cluster.local:8080']
```
This step is often missed. No scraping = no data.
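What Prometheus actually receives from that endpoint is plain text in the Prometheus exposition format. As a rough illustration of what those lines look like and how they can be read, here is a small Python sketch; the sample lines mimic the shape of `kube_pod_status_phase` output, with simplified label values:

```python
import re

# Sample lines shaped like kube-state-metrics output for kube_pod_status_phase.
# (Illustrative values; a real scrape returns thousands of such lines.)
SAMPLE = """\
kube_pod_status_phase{namespace="default",pod="api-1",phase="Running"} 1
kube_pod_status_phase{namespace="default",pod="api-1",phase="Failed"} 0
kube_pod_status_phase{namespace="default",pod="job-7",phase="Failed"} 1
"""

LINE_RE = re.compile(r'^(?P<name>\w+)\{(?P<labels>[^}]*)\}\s+(?P<value>\S+)$')

def parse_exposition(text):
    """Parse simple Prometheus text-format lines into (name, labels, value)."""
    samples = []
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue
        labels = {}
        for pair in m.group("labels").split(","):
            key, _, raw = pair.partition("=")
            labels[key] = raw.strip('"')
        samples.append((m.group("name"), labels, float(m.group("value"))))
    return samples

# Pods currently reported in the Failed phase.
failed = [
    labels["pod"]
    for name, labels, value in parse_exposition(SAMPLE)
    if name == "kube_pod_status_phase"
    and labels["phase"] == "Failed"
    and value == 1
]
print(failed)  # ['job-7']
```

In practice you would never parse this by hand, since Prometheus does it for you; the point is that each Kubernetes object state becomes a labeled sample with value 0 or 1.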
Practical Configuration Tips
A working setup is easy. A useful setup requires intention.
1. Start With Core Resources
Do not enable everything immediately. Focus on:
- pods
- deployments
- nodes
These give you early signals without noise. They cover most real-world issues like failed pods, scaling problems, and node health. Starting here helps you understand your cluster behavior before adding complexity.
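This scoping can be expressed directly in the Deployment. kube-state-metrics ships a `--resources` flag that restricts which collectors run; a container spec limited to the three resources above might look like:

```yaml
containers:
  - name: kube-state-metrics
    image: registry.k8s.io/kube-state-metrics/kube-state-metrics:latest
    args:
      # Only the pod, deployment, and node collectors are enabled.
      - --resources=pods,deployments,nodes
    ports:
      - containerPort: 8080
```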
2. Control Label Usage
Labels can explode your data size. Too many labels:
- slow down queries
- increase storage
- make dashboards messy
Every extra label increases metric cardinality, which directly impacts performance. If you add labels without a clear purpose, you end up with data that is hard to query and even harder to interpret.
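Rather than exposing every Kubernetes label, v2-series kube-state-metrics lets you opt labels in explicitly with `--metric-labels-allowlist`. The label names below are examples, not requirements:

```yaml
args:
  # Only the `app` and `team` pod labels are exported as metric labels;
  # everything else is dropped at the source, keeping cardinality bounded.
  - --metric-labels-allowlist=pods=[app,team]
```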
3. Keep One Replica Initially
Kube-State-Metrics is lightweight. Running multiple replicas without need:
- adds complexity
- increases duplicate metrics risk
In most setups, a single instance is enough to collect state data reliably. Scaling too early can introduce duplicate metrics and confusion in Prometheus queries, especially if deduplication is not configured.
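If you do eventually outgrow one instance, note that kube-state-metrics supports horizontal sharding rather than plain replication: each replica serves a disjoint subset of objects, so Prometheus sees each object once. A sketch of the relevant flags (shard indices are zero-based):

```yaml
args:
  # Metrics are partitioned across instances, not duplicated.
  - --shard=0          # index of this particular instance
  - --total-shards=2   # total number of instances in the shard group
```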
4. Avoid “Set and Forget”
Even simple setups need review. Check:
- Are metrics being used?
- Are dashboards clean?
- Are queries efficient?
A static monitoring setup becomes outdated quickly as your cluster evolves. Regularly reviewing your metrics and dashboards ensures you are tracking what actually matters, not just what was configured initially.
Common Mistakes to Avoid
These are the issues that quietly break setups.
1. Treating It Like a Monitoring Tool
It does not monitor anything. It provides raw state data. You still need:
- Prometheus
- Alerting rules
- Dashboards
Kube-State-Metrics does not generate insights on its own. If you expect alerts or analysis directly from it, your setup will always feel incomplete.
2. Overloading Permissions
Giving full cluster access feels easier. But it creates:
- security risks
- unnecessary data exposure
Broad permissions often lead to collecting more data than needed. This not only increases risk but also makes your monitoring system heavier and harder to manage.
3. Collecting Too Many Metrics
More data does not mean better insight. It usually means:
- slower queries
- noisy dashboards
- harder troubleshooting
When everything is collected, nothing stands out. A focused metric set makes it easier to detect real issues and build meaningful alerts.
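kube-state-metrics can also drop individual metric families at the source. Assuming a v2-series build, the `--metric-denylist` flag accepts comma-separated regular expressions; the pattern below is illustrative:

```yaml
args:
  # Drop annotation metrics, which tend to be high-cardinality
  # and are rarely queried in practice.
  - --metric-denylist=kube_.+_annotations
```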
4. Skipping Service Configuration
Without a service:
- Prometheus cannot reach the endpoint
- metrics stay invisible
Even if your deployment is running perfectly, missing this step breaks the entire pipeline. Always verify that the service is correctly exposing the metrics endpoint.
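A quick way to confirm the service is actually wired to the pod (run against your cluster):

```shell
# A healthy service lists at least one pod IP under ENDPOINTS;
# an empty list means the selector does not match the pod labels.
kubectl get endpoints -n kube-state-metrics kube-state-metrics
```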
5. Ignoring Data Flow
If you do not understand where data goes:
- scraping fails
- dashboards break
- alerts become unreliable
Monitoring is not just about collecting metrics; it is about how data moves through your system. When you understand the flow from API to visualization, troubleshooting becomes faster and far more predictable.
When You Should Customize Your YAML
The default setup works, but real environments need tuning. Customize when:
- you run multi-team clusters
- you need scoped monitoring
- you want fine-grained control over metrics
Examples:
- limit resources with flags
- control label exposure
- adjust container resources
But avoid over-engineering. If your setup is simple, keep it simple.
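One low-risk customization worth making early is bounding the container's resources. The figures below are starting-point guesses, not benchmarks; tune them to your cluster size:

```yaml
resources:
  requests:
    cpu: 10m      # kube-state-metrics is mostly idle between scrapes
    memory: 32Mi
  limits:
    memory: 64Mi  # memory grows with object count; raise on large clusters
```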
YAML vs Helm: A Practical Perspective
This is not about “which is better.” It is about control vs speed.
| Aspect | YAML | Helm |
| --- | --- | --- |
| Control | High | Medium |
| Speed | Slower | Faster |
| Visibility | Full | Abstracted |
| Best for | Production, custom setups | Quick deployments |
Many teams use both:
- Helm for quick start
- YAML for production refinement
Conclusion
Deploying Kube-State-Metrics with YAML is not complicated, but it does require clarity. If you keep the setup minimal, intentional, and aligned with real use, you get a monitoring foundation that actually works.
Start small. Understand the flow. Expand only when needed. That is how you avoid noise and build something reliable.
FAQ Section
1. What is kube state metrics YAML used for?
It is used to deploy Kube-State-Metrics manually using Kubernetes manifests, giving you full control over configuration and behavior.
2. Is YAML better than Helm for Kube-State-Metrics?
Not always. YAML offers more control, while Helm is faster to deploy. Choose based on your use case.
3. Why am I not seeing any metrics?
Usually, this happens because Prometheus is not scraping the service, so no data is collected.
It can also be due to incorrect RBAC permissions blocking access to resources, or a missing service endpoint, which leaves nothing for Prometheus to connect to.
4. Can I run multiple replicas of Kube-State-Metrics?
Yes, but it is usually unnecessary. Start with one replica unless you have scaling needs.
5. Does Kube-State-Metrics generate alerts?
No. It only exposes metrics. Alerts must be created in Prometheus or another system.