How to Expose a Localhost-only Endpoint on GKE


In my previous post I wrote about how to load test GKE Workload Identity. In this post I’ll describe how to get metrics from gke-metadata-server, the part of Workload Identity that runs on your GKE clusters’ nodes. This solution is a temporary workaround until GKE provides a better way to get metrics on gke-metadata-server.

gke-metadata-server runs as a K8s DaemonSet. It exposes metrics about itself in the Prometheus text-based format. I want an external scraper to make HTTP requests that periodically collect these metrics. Unfortunately, the Prometheus metrics HTTP server only listens on the container’s localhost interface. So how can we expose these metrics, i.e. make the HTTP endpoint available externally?

tl;dr lessons learned

  • socat is awesome.
  • If something you need is running on a computer you control, you can always find a way to extract info from it if you’re resourceful enough.

My specific GKE cluster configuration

  • GKE masters and nodes running version 1.15.9-gke.22
  • regional cluster in Google Cloud Platform (GCP) (not on-premise)
  • 6 GKE nodes that are n1-standard-32 GCE instances in one node pool
  • each node is configured to have a maximum of 32 Pods
  • cluster and node pool have Workload Identity (WI) enabled

Notice the DaemonSet is configured with .spec.template.spec.hostNetwork: true below. This means the HTTP server is also listening on the GKE node’s localhost interface.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: gke-metadata-server
  name: gke-metadata-server
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: gke-metadata-server
  template:
    metadata:
      annotations:
        components.gke.io/component-name: gke-metadata-server
        components.gke.io/component-version: 0.2.21
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        k8s-app: gke-metadata-server
    spec:
      containers:
      - command:
        - /gke-metadata-server
        - --logtostderr
        - --token-exchange-endpoint=https://securetoken.googleapis.com/v1/identitybindingtoken
        - --identity-namespace=[REDACTED]
        - --identity-provider-id=https://container.googleapis.com/v1/projects/[REDACTED]/locations/europe-west1/clusters/[REDACTED]
        - --passthrough-ksa-list=kube-system:container-watcher-pod-reader,kube-system:event-exporter-sa,kube-system:fluentd-gcp-scaler,kube-system:heapster,kube-system:kube-dns,kube-system:metadata-agent,kube-system:network-metering-agent,kube-system:securityprofile-controller,istio-system:istio-ingressgateway-service-account,istio-system:cluster-local-gateway-service-account,csm:csm-sync-agent,knative-serving:controller
        - --attributes=cluster-name=[REDACTED],cluster-uid=[REDACTED],cluster-location=europe-west1
        - --enable-identity-endpoint=true
        - --cluster-uid=[REDACTED]
        image: gke.gcr.io/gke-metadata-server:20200218_1145_RC0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 54898
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: gke-metadata-server
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/kubelet/kubeconfig
          name: kubelet-credentials
          readOnly: true
        - mountPath: /var/lib/kubelet/pki/
          name: kubelet-certs
          readOnly: true
        - mountPath: /var/run/
          name: container-runtime-interface
        - mountPath: /etc/srv/kubernetes/pki
          name: kubelet-pki
          readOnly: true
        - mountPath: /etc/ssl/certs/
          name: ca-certificates
          readOnly: true
      dnsPolicy: Default
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/os: linux
        iam.gke.io/gke-metadata-server-enabled: "true"
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: gke-metadata-server
      serviceAccountName: gke-metadata-server
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoExecute
        operator: Exists
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /var/lib/kubelet/pki/
          type: Directory
        name: kubelet-certs
      - hostPath:
          path: /var/lib/kubelet/kubeconfig
          type: File
        name: kubelet-credentials
      - hostPath:
          path: /var/run/
          type: Directory
        name: container-runtime-interface
      - hostPath:
          path: /etc/srv/kubernetes/pki/
          type: Directory
        name: kubelet-pki
      - hostPath:
          path: /etc/ssl/certs/
          type: Directory
        name: ca-certificates
  templateGeneration: 7
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
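
As a quick sanity check (not required for the setup), you can see this on any node: SSH in and curl the loopback interface. The port and path are the ones the proxy below forwards to; the instance name and zone are placeholders.

gcloud compute ssh [NODE_INSTANCE_NAME] --zone [ZONE]

# On the node: the metrics server answers on loopback only.
curl -s http://127.0.0.1:54898/metricz | head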

We can run a separate workload on this cluster that uses socat to proxy HTTP requests to gke-metadata-server. socat stands for “socket cat” and is a multipurpose relay. It’s netcat on steroids: it can relay many kinds of sockets, not just TCP and UDP.

This proxy is deployed as a DaemonSet to make it easy to have a one-to-one correspondence with each node-local gke-metadata-server Pod. The proxy DaemonSet also needs .spec.template.spec.hostNetwork: true so that it shares the node’s network namespace and can reach gke-metadata-server’s localhost listener.

Here’s the proxy DaemonSet YAML. I use the Docker image alpine/socat:1.7.3.4-r0, which is a tiny 3.61 MB. The arguments ["TCP-LISTEN:54899,reuseaddr,fork", "TCP:127.0.0.1:54898"] tell socat to forward traffic from 0.0.0.0:54899 to 127.0.0.1:54898, which is where the Prometheus metrics are. The fork option tells socat to

After establishing a connection, handles its channel in a child process and keeps the parent process attempting to produce more connections, either by listening or by connecting in a loop

http://www.dest-unreach.org/socat/doc/socat.html#OPTION_FORK
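
Outside of Kubernetes the same relay is a one-liner; given those arguments, this is effectively what the container runs:

socat TCP-LISTEN:54899,reuseaddr,fork TCP:127.0.0.1:54898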

cat proxy-daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gke-metadata-server-metrics-proxy
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: gke-metadata-server-metrics-proxy
  template:
    metadata:
      labels:
        app: gke-metadata-server-metrics-proxy
    spec:
      hostNetwork: true
      containers:
      - name: gke-metadata-server-metrics-proxy
        image: alpine/socat:1.7.3.4-r0@sha256:6786951b55e321e3968ba1c3786cb79b768f85d83d438f085336442b3bcef67a
        args: ["TCP-LISTEN:54899,reuseaddr,fork", "TCP:127.0.0.1:54898"]
        ports:
        - name: prom-metrics
          containerPort: 54899
          protocol: TCP
        livenessProbe:
          httpGet:
            host: 127.0.0.1
            path: /metricz
            port: 54899
            scheme: HTTP
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi

Apply the DaemonSet.

kubectl --context [CONTEXT] apply -f proxy-daemonset.yaml

Now we can make an HTTP request to port 54899 on any GKE node’s IP.

kubectl --context [CONTEXT] -n monitoring get pods --selector app=gke-metadata-server-metrics-proxy -o wide

NAME                                      READY   STATUS    RESTARTS   AGE     IP              NODE                             NOMINATED NODE   READINESS GATES
gke-metadata-server-metrics-proxy-dvlpg   1/1     Running   0          4d19h   10.200.208.6    my-cluster-n1-s-32-dfabe6b6-38px   <none>           <none>
gke-metadata-server-metrics-proxy-dx4lq   1/1     Running   0          4d19h   10.200.208.8    my-cluster-n1-s-32-dfabe6b6-mnlg   <none>           <none>
gke-metadata-server-metrics-proxy-j9p49   1/1     Running   0          4d19h   10.200.208.7    my-cluster-n1-s-32-dfabe6b6-vv9s   <none>           <none>
gke-metadata-server-metrics-proxy-jvvjw   1/1     Running   0          4d19h   10.200.208.12   my-cluster-n1-s-32-192fa3d9-wb2c   <none>           <none>
gke-metadata-server-metrics-proxy-k5sqd   1/1     Running   0          4d19h   10.200.208.10   my-cluster-n1-s-32-55dd75ff-6l40   <none>           <none>
gke-metadata-server-metrics-proxy-tdhkn   1/1     Running   0          4d19h   10.200.208.9    my-cluster-n1-s-32-55dd75ff-jqgk   <none>           <none>

http GET '10.200.208.6:54899/metricz' | head -n 20

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 2.8295e-05
go_gc_duration_seconds{quantile="0.25"} 3.6269e-05
go_gc_duration_seconds{quantile="0.5"} 5.2122e-05
go_gc_duration_seconds{quantile="0.75"} 7.585e-05
go_gc_duration_seconds{quantile="1"} 0.099987877
go_gc_duration_seconds_sum 7.738486774
go_gc_duration_seconds_count 6809
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 47
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.14rc1"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.4743056e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter

Voila. The important metrics are:

  • metadata_server_request_count
  • metadata_server_request_durations_bucket
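
If the external scraper is Prometheus itself, a scrape config along these lines should pick up the proxy Pods. This is a sketch rather than the exact config I run; the job name and the node relabeling are my own choices, while the namespace, Pod label, and /metricz path come from the proxy DaemonSet above. Because the container declares containerPort 54899 and uses the host network, Prometheus ends up scraping that port on each node’s IP.

scrape_configs:
- job_name: gke-metadata-server
  metrics_path: /metricz
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
      - monitoring
  relabel_configs:
  # Keep only the socat proxy Pods.
  - action: keep
    source_labels: [__meta_kubernetes_pod_label_app]
    regex: gke-metadata-server-metrics-proxy
  # Label each series with the node it came from.
  - source_labels: [__meta_kubernetes_pod_node_name]
    target_label: node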

I have these Prometheus recording rules to calculate RPS and request duration percentiles.

groups:
- name: gke-metadata-server
  rules:
  # Compute a 5-minute rate for the counter `metadata_server_request_count`.
  - record: metadata_server_request_count:rate5m
    expr: rate(metadata_server_request_count[5m])
  # Compute latency percentiles for the histogram metric
  # `metadata_server_request_durations_bucket` over 5-minute increments for each label
  # combination.
  - record: metadata_server_request_duration:p99
    expr: histogram_quantile(0.99, rate(metadata_server_request_durations_bucket[5m]))
  - record: metadata_server_request_duration:p95
    expr: histogram_quantile(0.95, rate(metadata_server_request_durations_bucket[5m]))
  - record: metadata_server_request_duration:p90
    expr: histogram_quantile(0.90, rate(metadata_server_request_durations_bucket[5m]))
  - record: metadata_server_request_duration:p50
    expr: histogram_quantile(0.50, rate(metadata_server_request_durations_bucket[5m]))
  - record: metadata_server_request_duration:mean
    expr: rate(metadata_server_request_durations_sum[5m]) / rate(metadata_server_request_durations_count[5m])
  # Compute latency percentiles for the histogram metric
  # `metadata_server_request_durations_bucket` over 5-minute increments and aggregate all
  # labels. We must aggregate here instead of in Grafana because averaging percentiles doesn’t
  # work. To compute a percentile, you need the original population of events. The math is just
  # broken. An average of a percentile is meaningless.
  - record: metadata_server_all_request_duration:p99
    expr: histogram_quantile(0.99, sum(rate(metadata_server_request_durations_bucket[5m])) by (le))
  - record: metadata_server_all_request_duration:p95
    expr: histogram_quantile(0.95, sum(rate(metadata_server_request_durations_bucket[5m])) by (le))
  - record: metadata_server_all_request_duration:p90
    expr: histogram_quantile(0.90, sum(rate(metadata_server_request_durations_bucket[5m])) by (le))
  - record: metadata_server_all_request_duration:p50
    expr: histogram_quantile(0.50, sum(rate(metadata_server_request_durations_bucket[5m])) by (le))
  - record: metadata_server_all_request_duration:mean
    expr: sum(rate(metadata_server_request_durations_sum[5m])) / sum(rate(metadata_server_request_durations_count[5m]))
  # Compute latency percentiles for the histogram metric `outgoing_request_latency_bucket` over
  # 5-minute increments for each label combination.
  - record: outgoing_request_latency:p99
    expr: histogram_quantile(0.99, rate(outgoing_request_latency_bucket[5m]))
  - record: outgoing_request_latency:p95
    expr: histogram_quantile(0.95, rate(outgoing_request_latency_bucket[5m]))
  - record: outgoing_request_latency:p90
    expr: histogram_quantile(0.90, rate(outgoing_request_latency_bucket[5m]))
  - record: outgoing_request_latency:p50
    expr: histogram_quantile(0.50, rate(outgoing_request_latency_bucket[5m]))
  - record: outgoing_request_latency:mean
    expr: rate(outgoing_request_latency_sum[5m]) / rate(outgoing_request_latency_count[5m])
  # Compute latency percentiles for the histogram metric `outgoing_request_latency_bucket` over
  # 5-minute increments and aggregate all labels. We must aggregate here instead of in Grafana
  # because averaging percentiles doesn’t work. To compute a percentile, you need the original
  # population of events. The math is just broken. An average of a percentile is meaningless.
  - record: outgoing_all_request_latency:p99
    expr: histogram_quantile(0.99, sum(rate(outgoing_request_latency_bucket[5m])) by (le))
  - record: outgoing_all_request_latency:p95
    expr: histogram_quantile(0.95, sum(rate(outgoing_request_latency_bucket[5m])) by (le))
  - record: outgoing_all_request_latency:p90
    expr: histogram_quantile(0.90, sum(rate(outgoing_request_latency_bucket[5m])) by (le))
  - record: outgoing_all_request_latency:p50
    expr: histogram_quantile(0.50, sum(rate(outgoing_request_latency_bucket[5m])) by (le))
  - record: outgoing_all_request_latency:mean
    expr: sum(rate(outgoing_request_latency_sum[5m])) / sum(rate(outgoing_request_latency_count[5m]))
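
With these rules in place, dashboard queries stay simple. For example (illustrative queries, not lifted from my dashboards), cluster-wide RPS and the aggregated p99 latency are:

sum(metadata_server_request_count:rate5m)

metadata_server_all_request_duration:p99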

Thanks to @mikedanese for the initial idea of using socat.
