Kubernetes/Istio/Observability

Prometheus

kubectl create ns prometheus
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prom prometheus-community/kube-prometheus-stack --version 13.13.1 -n prometheus -f values.yaml

# Dashboards
kubectl -n prometheus port-forward statefulset/prometheus-prom-kube-prometheus-stack-prometheus 9090
kubectl -n prometheus port-forward svc/prom-grafana 3000:80

# Get the rules. The Prometheus rule configuration is actually stored as a secret and is updated through the operator's configuration
kubectl get secret -n prometheus prometheus-prom-kube-prometheus-stack-prometheus -o jsonpath="{.data['prometheus\.yaml\.gz']}" | base64 -d | gunzip
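
To confirm the stack deployed cleanly, list the release's pods (release name prom in namespace prometheus, as used above):

kubectl -n prometheus get pods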


With the port-forwards running, Prometheus is reachable at http://localhost:9090 and Grafana at http://localhost:3000 (the chart's default Grafana login is admin / prom-operator, unless overridden in values.yaml).


The values.yaml file uses the chart defaults, with only the following components disabled:

defaultRules: ## Create default rules for monitoring the cluster
  create: false
alertmanager: ## Deploy alertmanager
  enabled: false
kubeApiServer: ## Component scraping the kube api server
  enabled: false
kubelet: ## Component scraping the kubelet and kubelet-hosted cAdvisor
  enabled: false
coreDns: ## Component scraping coreDns. Use either this or kubeDns
  enabled: false
kubeDns: ## Component scraping kubeDns. Use either this or coreDns
  enabled: false
kubeEtcd: ## Component scraping etcd
  enabled: false
kubeScheduler: ## Component scraping kube scheduler
  enabled: false
kubeProxy: ## Component scraping kube proxy
  enabled: false


Metrics merging

The Envoy sidecar can merge Istio's metrics with the application metrics; this is enabled by default (--set meshConfig.enablePrometheusMerge=true). When enabled, appropriate prometheus.io annotations are added to all data plane pods to set up scraping. The merged metrics are scraped from :15020/stats/prometheus.

# Default way to set deployment to be scraped by Prometheus
#  template:
#    metadata:
#      annotations:
#        prometheus.io/path: /stats/prometheus
#        prometheus.io/port: "15020"
#        prometheus.io/scrape: "true"

# Check if an application emits any metrics. You get a 404 if no metrics are being emitted.
kubectl exec -it deploy/httpbin -n default -c istio-proxy -- curl http://localhost:15020/metrics

# Check the sidecar proxy
kubectl exec -it deploy/httpbin -n default -c istio-proxy -- curl http://localhost:15090/stats/prometheus

# View the merged metrics, note the :15020 port; these include any application metrics from httpbin (none here) plus metrics from its Envoy sidecar and the Istio agent
kubectl exec -it deploy/httpbin -n default -c istio-proxy -- curl http://localhost:15020/stats/prometheus

Grafana Istio dashboards

Get the dashboards from the Istio source repo.

git clone https://github.com/istio/istio
cd istio/manifests/addons

# Create istio-dashboards configMap
kubectl -n prometheus create cm istio-dashboards \
--from-file=pilot-dashboard.json=dashboards/pilot-dashboard.json \
--from-file=istio-workload-dashboard.json=dashboards/istio-workload-dashboard.json \
--from-file=istio-service-dashboard.json=dashboards/istio-service-dashboard.json \
--from-file=istio-performance-dashboard.json=dashboards/istio-performance-dashboard.json \
--from-file=istio-mesh-dashboard.json=dashboards/istio-mesh-dashboard.json \
--from-file=istio-extension-dashboard.json=dashboards/istio-extension-dashboard.json

# Label this 'istio-dashboards' configmap for Grafana to pick it up
kubectl label -n prometheus cm istio-dashboards grafana_dashboard=1
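
To confirm the label landed (the Grafana dashboard sidecar in kube-prometheus-stack discovers dashboards by this grafana_dashboard label):

kubectl -n prometheus get cm istio-dashboards --show-labels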

A new set of dashboards should appear in the Grafana UI. These will be empty, as we haven't yet set up any metrics to be scraped.

Set up Prometheus to scrape metrics

We will use the Prometheus Operator CRs ServiceMonitor and PodMonitor. These Custom Resources are described in good detail in the design doc on the Prometheus Operator repo.
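
Before applying the resources below, a quick sanity check that the operator's CRDs are installed in the cluster:

kubectl get crd servicemonitors.monitoring.coreos.com podmonitors.monitoring.coreos.com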

Scrape the Istio control plane

kubectl apply -f <(cat <<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: istio-component-monitor
  namespace: prometheus
  labels:
    monitoring: istio-components
    release: prom
spec:
  jobLabel: istio
  targetLabels: [app]
  selector:
    matchExpressions:
    - {key: istio, operator: In, values: [pilot]}
  namespaceSelector:
    any: true
  endpoints:
  - port: http-monitoring
    interval: 15s
EOF
) --dry-run=server  # server-side validation only; drop the flag to actually create the ServiceMonitor
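
Once created for real, the istiod target should appear in Prometheus. With the :9090 port-forward from earlier still running, a quick check against the Prometheus HTTP API:

curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[].labels.job' | sort -u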


Scrape the Istio data plane

kubectl apply -f <(cat <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: envoy-stats-monitor
  namespace: prometheus
  labels:
    monitoring: istio-proxies
    release: prom
spec:
  selector:
    matchExpressions:
    - {key: istio-prometheus-ignore, operator: DoesNotExist}
  namespaceSelector:
    any: true
  jobLabel: envoy-stats
  podMetricsEndpoints:
  - path: /stats/prometheus
    interval: 15s
    relabelings:
    - action: keep
      sourceLabels: [__meta_kubernetes_pod_container_name]
      regex: "istio-proxy"
    - action: keep
      sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape]
    - sourceLabels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      targetLabel: __address__
    - action: labeldrop
      regex: "__meta_kubernetes_pod_label_(.+)"
    - sourceLabels: [__meta_kubernetes_namespace]
      action: replace
      targetLabel: namespace
    - sourceLabels: [__meta_kubernetes_pod_name]
      action: replace
      targetLabel: pod_name
EOF
) --dry-run=server  # drop the flag to actually create the PodMonitor
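
With the PodMonitor in place and some traffic flowing, Istio's standard metrics become queryable; for example, the request rate per destination service via the Prometheus API (again assuming the :9090 port-forward is still up):

curl -s http://localhost:9090/api/v1/query --data-urlencode 'query=sum(rate(istio_requests_total[5m])) by (destination_service)' | jq .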

Kiali

Install Kiali operator

kubectl create ns kiali-operator
helm install \
    --set cr.create=true \
    --set cr.namespace=istio-system \
    --namespace kiali-operator \
    --repo https://kiali.org/helm-charts \
    --version 1.29.1 \
    kiali-operator \
    kiali-operator
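
To confirm the operator came up, and (since cr.create=true above) the Kiali instance it reconciles in istio-system; the app=kiali label is an assumption based on the default deployment:

kubectl -n kiali-operator get pods
kubectl -n istio-system get pods -l app=kiali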


Install a Kiali instance with the Kiali CR. The installation below uses token auth; an OIDC strategy can be configured instead if you wish.

kubectl apply -f <(cat <<EOF
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  namespace: istio-system
  name: kiali
spec:
  istio_namespace: "istio-system"  
  istio_component_namespaces:
    prometheus: prometheus
  auth:    
    strategy: token
  deployment:
    accessible_namespaces:
    - '**'
    image_version: operator_version
  external_services:    
    prometheus:
      cache_duration: 10
      cache_enabled: true
      cache_expiration: 300
      url: "http://prom-kube-prometheus-stack-prometheus.prometheus:9090"
EOF
) --dry-run=server  # drop the flag to actually create the Kiali CR

kubectl -n istio-system port-forward deploy/kiali 20001
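
The Kiali UI should then be available at http://localhost:20001; with the token strategy it prompts for a ServiceAccount token, which the next step creates.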

Create ServiceAccount and ClusterRoleBinding for Token Auth

kubectl create serviceaccount kiali-dashboard -n istio-system
kubectl create clusterrolebinding kiali-dashboard-admin --clusterrole=cluster-admin --serviceaccount=istio-system:kiali-dashboard

# Get the token
kubectl get secret -n istio-system -o jsonpath="{.data.token}" $(kubectl get secret -n istio-system | grep kiali-dashboard | awk '{print $1}' ) | base64 --decode
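
On Kubernetes 1.24+ token secrets are no longer created automatically for ServiceAccounts, so the grep above may find nothing; in that case request a short-lived token directly:

kubectl -n istio-system create token kiali-dashboard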


Load Testing - fortio

Fortio (Φορτίο) started as Istio's load testing tool and has since graduated to its own project. Fortio runs at a specified number of queries per second (qps), records a histogram of execution times, and calculates percentiles (e.g. p99, the response time such that 99% of requests complete in less than that time, in seconds).

# Install
VERSION=$(curl --silent "https://api.github.com/repos/fortio/fortio/releases/latest" | jq -r .tag_name); echo $VERSION
curl -L https://github.com/fortio/fortio/releases/download/${VERSION}/fortio-linux_x64-${VERSION#v}.tgz \
 | sudo tar -C / -xvzpf -

# or the debian package
wget https://github.com/fortio/fortio/releases/download/${VERSION}/fortio_${VERSION#v}-1_amd64.deb
sudo dpkg -i fortio_${VERSION#v}-1_amd64.deb

# or the rpm
sudo rpm -i https://github.com/fortio/fortio/releases/download/${VERSION}/fortio-${VERSION#v}-1.x86_64.rpm

# Docker run
docker run -p 8080:8080 -p 8079:8079 fortio/fortio server & # For the server
docker run fortio/fortio load http://www.google.com/        # For a test run
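
To generate traffic through the mesh so the Grafana dashboards and Kiali graph have data, point fortio at an in-mesh service; a sketch assuming the httpbin workload from earlier, with its Service on port 8000 as in the Istio samples:

kubectl -n default run fortio --image=fortio/fortio --restart=Never -- load -qps 10 -t 60s http://httpbin:8000/get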
