Kubernetes/Monitoring

Monitor cluster resources

Metric-server

To monitor cluster resource usage you need a metrics collector plugin. Heapster used to be the popular choice but is deprecated; its successor, metrics-server, exposes the Metrics API. The commands below rely on that API to get data:

Install metrics-server

git clone https://github.com/kubernetes-incubator/metrics-server.git ~/metrics-server
kubectl apply -f ~/metrics-server/deploy/1.8+/
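
Assuming the upstream manifests create a metrics-server deployment in the kube-system namespace, a quick sanity check that the collector is running before querying it:

<source lang=bash>
kubectl -n kube-system get deployment metrics-server    # should eventually report 1/1 available
kubectl -n kube-system logs deployment/metrics-server   # check for scrape errors if "kubectl top" returns nothing
</source>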

Get metrics

# verify metrics server API
kubectl get --raw /apis/metrics.k8s.io/

kubectl top node                        # CPU/memory utilization of the nodes in your cluster
kubectl top pods                        # CPU/memory utilization of the pods in the current namespace
kubectl top pods -A                     # CPU/memory of pods in all namespaces
kubectl top pod -l run=<label>          # CPU/memory of pods matching a label selector
kubectl top pod <pod-name>              # CPU/memory of a specific pod
kubectl top pod <pod-name> --containers # CPU/memory of the individual containers inside the pod

cAdvisor deprecated in v1.11

Every node in a Kubernetes cluster runs a kubelet process, and embedded in each kubelet is a cAdvisor process. cAdvisor continuously gathers resource metrics about the containers running on that node and is always available.

minikube start --extra-config=kubelet.CAdvisorPort=4194
kubectl proxy &          # open a proxy to the Kubernetes API port
open $(minikube ip):4194 # cAdvisor also serves up the metrics in a helpful HTML format

# Each node provides statistics gathered by cAdvisor. Access the node stats
curl localhost:8001/api/v1/nodes/$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")/proxy/stats/

# The Kubernetes API also gathers the cAdvisor metrics at /metrics
curl localhost:8001/metrics

Liveness and Readiness probes

Check this Visual explanation

Get service endpoints; a pod that fails its readiness probe is removed from the endpoints of any Service that selects it

kubectl get endpoints


Liveness

apiVersion: v1
kind: Pod
metadata:
  name: liveness-pod
spec:
  containers:
  - image: nginx
    name: main
    livenessProbe:
      httpGet:
        path: /healthz # not all containers have this endpoint
        port: 8081
    readinessProbe:
      httpGet:
        path: /
        port: 80
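
A quick way to see the probes in action, assuming the liveness-pod manifest above has been applied: failed checks show up as events, and each failed liveness check eventually restarts the container.

<source lang=bash>
kubectl describe pod liveness-pod   # look for "Liveness probe failed" / "Readiness probe failed" events
kubectl get pod liveness-pod -w     # the RESTARTS counter grows each time the liveness probe kills the container
</source>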

Logs

Container logs

Containerized applications usually write their logs to STDOUT and STDERR instead of writing them to files. Docker then redirects those streams to files on the node. You can retrieve them with the kubectl logs command.


These files are stored on the nodes under the /var/log/ directory and contain everything the containers send to STDOUT.

  • /var/log/containers/ contains one log file per container; these are symlinks into ../pods/
  • /var/log/pods/ contains a directory per pod in the form <namespace>_<pod-name>_<pod-uid>/<container-name>/0.log
  • 0.log is itself a symlink to the Docker log file /var/lib/docker/containers/<container-id>/<container-id>-json.log
$ ls -l /var/log/containers
total 56
lrwxrwxrwx 1 root root 101 Oct  7 06:51 coredns-5644d7b6d9-hztth_kube-system_coredns-9de9395495186177f5112d795ca950dd0227e6f025f40c83ddf2a99c56802939.log -> /var/log/pods/kube-system_coredns-5644d7b6d9-hztth_5da159b3-64e7-48e4-b9f8-003f9623481d/coredns/0.log
...


If your container writes multiple log files, it is difficult to distinguish them using the kubectl logs command. You can therefore introduce sidecar containers that each tail an individual log file, and access them like this (see the manifest sketch after this list):

  • kubectl logs <pod> container-log-1
  • kubectl logs <pod> container-log-2
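
A minimal sketch of such a pod, assuming the main container writes two log files into a shared emptyDir volume; the sidecar container names match the kubectl logs commands above, while the pod name, image and file paths are illustrative.

<source lang=yaml>
apiVersion: v1
kind: Pod
metadata:
  name: two-logs-pod               # hypothetical pod name
spec:
  containers:
  - name: main
    image: busybox
    # writes two separate log files instead of logging to STDOUT
    command: ['sh', '-c', 'i=0; while true; do echo "$i: app" >> /var/log/1.log; echo "$i: access" >> /var/log/2.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: container-log-1          # sidecar: streams 1.log to its own STDOUT
    image: busybox
    command: ['sh', '-c', 'touch /var/log/1.log; tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: container-log-2          # sidecar: streams 2.log to its own STDOUT
    image: busybox
    command: ['sh', '-c', 'touch /var/log/2.log; tail -n+1 -f /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
</source>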

The kubelet itself runs as a regular system process rather than a container, therefore it writes its logs to the standard system location (/var/log or the journal); view them with journalctl -u kubelet.service


Retrieve logs

kubectl logs <pod> <container> # container name is optional for a single container pods
kubectl logs <pod> <container> --previous    # or -p; logs of the previous instance, e.g. when the container has crashed
kubectl logs <pod> --all-containers=true
kubectl logs --since=10m <pod>
kubectl logs deployment/<deployment> -c <container> # view the logs from a container within a pod managed by a deployment
kubectl logs --tail=20 haproxy               # tail x lines
kubectl logs -l app=haproxy                  # logs from containers matching a label


Termination message

Kubernetes lets a container write a custom message to a custom file on termination. This message can be viewed directly using kubectl describe, under Last State: Terminated, Message: <custom message>

apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - image: busybox
    name: main
    command:
    - sh
    - -c
    - 'echo "I say that this container has been terminated at $(date)" > /var/termination-reason ; exit 1'
    terminationMessagePath: /var/termination-reason
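
To read the message back, assuming the pod2 manifest above has been applied and the container has already exited at least once:

<source lang=bash>
kubectl describe pod pod2 | grep -A8 'Last State'   # shows Reason, Exit Code and the custom Message
# or extract only the message with jsonpath
kubectl get pod pod2 -o jsonpath='{.status.containerStatuses[0].lastState.terminated.message}'
</source>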

Troubleshooting

<source lang=bash>
# get a yaml without status information (an almost clean yaml manifest)
# note: --export was deprecated and later removed; on recent kubectl versions use plain -o yaml
kubectl -n web get pod <failing-pod> -o yaml --export
</source>
