Grafana & Prometheus

The document outlines the pull-based data collection method used by Prometheus, where the monitoring system retrieves metrics from defined targets via HTTP requests. It details the configuration, scraping process, and advantages of this approach, including centralized control and dynamic target discovery. Additionally, it provides a step-by-step guide for installing and configuring Prometheus and Grafana on a Kubernetes cluster using Helm, including commands for accessing services and managing deployments.

Pull-based data collection is a method where the monitoring system actively retrieves metrics from the targets it monitors. In this approach, the monitoring system (like Prometheus) periodically queries the endpoints of the services or applications to collect metrics data.

Here's how pull-based data collection works in the context of Prometheus:

1. Targets Configuration: You define a list of targets (applications, services, or nodes) in Prometheus' configuration file. Each target exposes an HTTP endpoint that Prometheus can scrape for metrics (a sample configuration appears after this list).

2. Scraping: Prometheus regularly sends HTTP requests to these endpoints to collect metrics. The endpoints are expected to expose metrics in a format Prometheus understands, typically the simple text-based exposition format (an example appears after this list).

3. Data Storage: Once collected, Prometheus stores this data in its time-series database, where
it can be queried and analyzed.

4. Advantages:

o Centralized Control: The monitoring system controls when and how often data is
collected, making it easier to manage and scale.

o Dynamic Target Discovery: Prometheus can automatically discover targets using service discovery mechanisms, adapting to changes in the environment.

5. Comparison with Push-based Collection: In a push-based model, the monitored systems themselves send metrics to the monitoring system. This requires the monitored systems to be configured to push data, which can be more complex to manage, especially in dynamic environments.
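As a concrete illustration of point 1, here is a minimal prometheus.yml sketch; the job name and target address are placeholders for this example, not values from this guide:

global:
  scrape_interval: 15s          # how often Prometheus pulls metrics from each target

scrape_configs:
  - job_name: "example-app"     # hypothetical job label attached to the scraped metrics
    static_configs:
      - targets: ["localhost:8080"]   # placeholder host:port exposing a /metrics endpoint

And the text-based format mentioned in point 2 looks like this on a target's /metrics endpoint:

# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027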
Use Case 2: Installing Prometheus and Grafana on a Kubernetes Cluster Using Helm

Step 1: Install the Minikube executable (.exe) file on Windows.


Step 2: minikube start --driver=docker (start the Docker engine first by opening the Docker application)
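Once minikube reports the cluster is up, a quick health check:

minikube status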

Step 3: kubectl get pods -A (list pods across all namespaces to verify the cluster is running)

Step 5: A Kubernetes operator could be used for lifecycle management (policies, upgrades); this guide currently uses Helm instead.

.\helm version

.\helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

.\helm repo update
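An optional sanity check that the repo was added and the chart is visible (the version in the output will vary):

.\helm search repo prometheus-community/prometheus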

PS C:\Users\kreva\OneDrive\Desktop\Grafana> .\helm install prometheus prometheus-community/prometheus

NAME: prometheus

LAST DEPLOYED: Thu Apr 10 20:00:09 2025

NAMESPACE: default

STATUS: deployed
REVISION: 1

TEST SUITE: None

NOTES:

The Prometheus server can be accessed via port 80 on the following DNS name from within your
cluster:

prometheus-server.default.svc.cluster.local

Get the Prometheus server URL by running these commands in the same shell:

export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace default port-forward $POD_NAME 9090

The Prometheus alertmanager can be accessed via port 9093 on the following DNS name from within
your cluster:

prometheus-alertmanager.default.svc.cluster.local

Get the Alertmanager URL by running these commands in the same shell:

export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=alertmanager,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace default port-forward $POD_NAME 9093

#################################################################################
######   WARNING: Pod Security Policy has been disabled by default since    #####
######            it deprecated after k8s 1.25+. use                        #####
######            (index .Values "prometheus-node-exporter" "rbac"          #####
######            "pspEnabled") with (index .Values                         #####
######            "prometheus-node-exporter" "rbac" "pspAnnotations")       #####
######            in case you still need it.                                #####
#################################################################################
The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within
your cluster:

prometheus-prometheus-pushgateway.default.svc.cluster.local

Get the PushGateway URL by running these commands in the same shell:

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus-pushgateway,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:

https://prometheus.io/

PS C:\Users\kreva\OneDrive\Desktop\Grafana>

Step 6: Verify the release:

kubectl get pods

kubectl get svc

Step 7: kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prometheus-server-ext
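With the Docker driver, NodePort services are not directly reachable from the Windows host; Minikube can open a tunnel and print a usable URL for the new service:

minikube service prometheus-server-ext --url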

Step 8: kubectl get pods -A (confirm the new Prometheus pods are Running)



Step 11: Run the port-forward in the background:

Start-Job { kubectl port-forward svc/prometheus-server 9090:80 }

or run it in the foreground:

kubectl port-forward svc/prometheus-server 9090:80

The Prometheus UI is then reachable at http://localhost:9090.
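If you use the Start-Job form, the standard PowerShell job cmdlets manage the background process (the job Id below is illustrative; take the actual Id from Get-Job):

Get-Job                            # list background jobs and their state
Receive-Job -Id 1                  # show output captured by the port-forward job
Stop-Job -Id 1; Remove-Job -Id 1   # stop the port-forward and clean up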

Step 12: .\helm repo add grafana https://grafana.github.io/helm-charts

.\helm repo update


Step 14: .\helm install grafana grafana/grafana


Step 15: Retrieve the Grafana admin password (stored base64-encoded in a Kubernetes secret):

$encoded = kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}"

[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encoded))

password: UJBXjzi7PbC60sexX71Yci8jT3yul9tzNU1Hsswk
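If preferred, the same decode works as a single PowerShell pipeline:

kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | ForEach-Object { [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($_)) }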

Step 16: Expose Grafana externally:

kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-ext
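As in Step 7, Minikube can open a tunnel and print a reachable URL for this NodePort service:

minikube service grafana-ext --url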


Step 18: Run minikube tunnel if the service is not being exposed.

Step 19: Temporary solution: port-forward instead.

Start-Job { kubectl port-forward svc/grafana-ext 3000:80 }

Grafana is then reachable at http://localhost:3000.

Step 20: Upgrade the Grafana release, because the current setup has no persistent storage (dashboards and settings would be lost when the pod restarts):

.\helm upgrade grafana grafana/grafana --set service.type=NodePort --set persistence.enabled=true --set persistence.size=1Gi
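To confirm that persistence took effect, check for the PersistentVolumeClaim the chart creates; the claim name is derived from the release name, so here it should appear as grafana:

kubectl get pvc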
Step 21: Delete the old external service:

kubectl delete svc grafana-ext

Step 22: kubectl get pods --selector=app.kubernetes.io/name=grafana

kubectl port-forward pod/grafana-75f954b567-57hjw 3000:3000 (use the pod name from the previous command; the hash suffix will differ in your cluster)
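Once Grafana is open at http://localhost:3000 and you have logged in with the admin password from Step 15, the Prometheus server installed earlier can be added as a data source using its in-cluster DNS name from the Helm NOTES above:

Data source type: Prometheus
URL: http://prometheus-server.default.svc.cluster.local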


