Add namespace to kubernetes hostname

Currently my pods are all named "some-deployment-foo-bar", which does not help me track down issues when an error is reported with just the hostname.
So I want "$POD_NAMESPACE.$POD_NAME" as the hostname.
I tried pod.beta.kubernetes.io/hostname: "foo", but that only sets an absolute name ... and subdomain did not work ...
The only other solution I saw was a wrapper script that modifies the hostname and then executes the actual command ... which is pretty hacky and adds overhead to every container.
Is there a way of doing this nicely?
The current config is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: foo
  labels:
    project: foo
spec:
  selector:
    matchLabels:
      project: foo
  template:
    metadata:
      name: foo
      labels:
        project: foo
    spec:
      containers:
      - image: busybox
        name: foo

PodSpec has a subdomain field, which can be used to specify the Pod's subdomain. This field value takes precedence over the pod.beta.kubernetes.io/subdomain annotation value.
There is more information in the Kubernetes documentation on DNS for Services and Pods; below is an example.
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
  - name: foo # Actually, no port is needed.
    port: 1234
    targetPort: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
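If the goal is mainly to know the namespace when an error is reported, a lighter alternative (my addition, not part of the original answer) is to leave the hostname alone and expose the namespace and pod name to the application through the downward API; the env var names below are illustrative:
spec:
  containers:
  - image: busybox
    name: foo
    env:
    # Injected by the kubelet from the pod's own metadata (downward API).
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name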

Related

k8s custom metric exporter json returned values

I've tried to make a custom metric exporter for my Kubernetes cluster, and I'm still failing to get the right value in order to exploit it within an HPA with this simplistic "a" variable.
I don't really know how to export it according to the different routes: if I return {"a":"1"}, the error message on the HPA tells me it's missing kind, then other fields.
So I've summed up the complete reproducible experiment below, in order to make it as simple as possible.
Any idea on how to complete this task?
Thanks a lot for any clue, advice, enlightenment, comment, or notice.
apiVersion: v1
kind: ConfigMap
metadata:
  name: exporter
  namespace: test
data:
  os.php: |
    <?php
    // If anyone knows a simple replacement other than Swoole, they're welcome.
    // Somehow I only think I'll need to know what JSON output is expected for these routes.
    $server = new Swoole\HTTP\Server("0.0.0.0", 443, SWOOLE_PROCESS, SWOOLE_SOCK_TCP | SWOOLE_SSL);
    $server->set(['worker_num' => 1, 'ssl_cert_file' => __DIR__ . '/example.com+5.pem', 'ssl_key_file' => __DIR__ . '/example.com+5-key.pem']);
    $server->on('Request', 'onMessage');
    $server->start();

    function onMessage($req, $res) {
        $value = 1;
        $url = $req->server['request_uri'];
        file_put_contents('monolog.log', "\n" . $url, FILE_APPEND); // Log every request path
        if ($url == '/') {
            $res->end('{"status":"healthy"}');
            return;
        } elseif ($url == '/metrics') {
            $res->end('a ' . $value);
            return;
        } elseif ($url == '/apis/custom.metrics.k8s.io/v1beta1') { // <-- This URL is called lots of times in the logs
            $res->end('{"kind": "APIResourceList","apiVersion": "v1","groupVersion": "custom.metrics.k8s.io/v1beta1","resources": [{"name": "namespaces/a","singularName": "","namespaced": false,"kind": "MetricValueList","verbs": ["get"]}]}');
            return;
        } elseif ($url == '/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter/a') {
            $res->end('{"kind": "MetricValueList","apiVersion": "custom.metrics.k8s.io/v1beta1","metadata": {"selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter-svc/a"},"items": [{"describedObject": {"kind": "Service","namespace": "test","name": "test-metrics-exporter-svc","apiVersion": "/v1"},"metricName": "a","timestamp": "2020-06-21T08:35:58Z","value": "'.$value.'","selector": null}]}');
            return;
        }
        $res->status(404);
        return;
    }
---
apiVersion: v1
kind: Service
metadata:
  name: test-metrics-exporter
  namespace: test
  annotations:
    prometheus.io/port: '443'
    prometheus.io/scrape: 'true'
spec:
  ports:
  - port: 443
    protocol: TCP
  selector:
    app: test-metrics-exporter
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-metrics-exporter
  namespace: test
spec:
  selector:
    matchLabels:
      app: test-metrics-exporter
  template:
    metadata:
      labels:
        app: test-metrics-exporter
    spec:
      terminationGracePeriodSeconds: 1
      volumes:
      - name: exporter
        configMap:
          name: exporter
          defaultMode: 0744
          items:
          - key: os.php
            path: os.php
      containers:
      - name: test-metrics-exporter
        image: openswoole/swoole:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: exporter
          mountPath: /var/www/os.php
          subPath: os.php
        command:
        - /bin/sh
        - -c
        - |
          touch monolog.log;
          apt update && apt install wget -y && wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64
          cp mkcert-v1.4.3-linux-amd64 /usr/local/bin/mkcert && chmod +x /usr/local/bin/mkcert
          mkcert example.com "*.example.com" example.test localhost 127.0.0.1 ::1
          php os.php &
          tail -f monolog.log
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine
  namespace: test
spec:
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: alpine
        image: alpine
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - |
          tail -f /dev/null
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: alpine
  namespace: test
spec:
  scaleTargetRef:
    kind: Deployment
    name: alpine
    apiVersion: apps/v1
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: test-metrics-exporter
      metricName: a
      targetValue: '1'
---
# This is the API hook for custom metrics.
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
  namespace: test
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  version: v1beta1
  service:
    name: test-metrics-exporter
    namespace: test
    port: 443
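Side note (my addition, not part of the original post): on clusters where the stable HPA API is available, the same Object metric can be expressed with autoscaling/v2, where metricName/targetValue are replaced by metric.name and a target block. A sketch, assuming the setup above is otherwise unchanged:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: alpine
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: alpine
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      describedObject:
        apiVersion: v1
        kind: Service
        name: test-metrics-exporter
      metric:
        name: a
      # target.value must be a Kubernetes quantity string, e.g. "1".
      target:
        type: Value
        value: '1'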

Access consul-api from consul-connect-service-mesh

In a Consul Connect service mesh (on Kubernetes), how do you get to the Consul API itself, for example to access the Consul KV store?
I'm working through this tutorial, and I'm wondering how you can bind the Consul (HTTP) API in a service to localhost.
Do you have to configure the Helm chart further? I would have expected the Consul agent to always be an upstream service.
The only way I found to access the API is via the Kubernetes service consul-server.
Environment:
Kubernetes (1.22.5, docker-desktop)
Helm chart consul (0.42)
Consul (1.11.3)
The Helm values YAML used:
global:
  name: consul
  datacenter: dc1
server:
  replicas: 1
  securityContext:
    runAsNonRoot: false
    runAsGroup: 0
    runAsUser: 0
    fsGroup: 0
ui:
  enabled: true
  service:
    type: 'NodePort'
connectInject:
  enabled: true
controller:
  enabled: true
You can access the Consul API on the local agent by using the Kubernetes downward API to inject an environment variable into the pod with the IP address of the host. This is documented on Consul.io under Installing Consul on Kubernetes: Accessing the Consul HTTP API.
You will also need to exclude port 8500 (or 8501) from redirection using the consul.hashicorp.com/transparent-proxy-exclude-outbound-ports annotation.
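A minimal sketch of that approach (my addition; the container name and image are illustrative, assuming the agents expose the HTTP API on host port 8500):
  template:
    metadata:
      annotations:
        consul.hashicorp.com/connect-inject: 'true'
        consul.hashicorp.com/transparent-proxy-exclude-outbound-ports: '8500'
    spec:
      containers:
      - name: app
        image: alpine
        env:
        # Node IP via the downward API.
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        # Consul CLI and SDKs honor CONSUL_HTTP_ADDR; $(HOST_IP) is expanded by the kubelet.
        - name: CONSUL_HTTP_ADDR
          value: http://$(HOST_IP):8500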
My current final solution is a (Connect) service based on a reverse proxy (nginx) that targets Consul.
apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-kv-proxy
data:
  nginx.conf.template: |
    error_log /dev/stdout info;
    server {
      listen 8500;
      location / {
        access_log off;
        proxy_pass http://${MY_NODE_IP}:8500;
        error_log /dev/stdout;
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  # This name will be the service name in Consul.
  name: consul-kv-proxy
spec:
  selector:
    app: consul-kv-proxy
  ports:
  - protocol: TCP
    port: 8500
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul-kv-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul-kv-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul-kv-proxy
  template:
    metadata:
      name: consul-kv-proxy
      labels:
        app: consul-kv-proxy
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
    spec:
      containers:
      - name: consul-kv-proxy
        image: nginx:1.14.2
        volumeMounts:
        - name: config
          mountPath: "/usr/local/nginx/conf"
          readOnly: true
        command: ['/bin/bash']
        # We have to transform the nginx config to use the node IP address.
        args:
        - -c
        - envsubst < /usr/local/nginx/conf/nginx.conf.template > /etc/nginx/conf.d/consul-kv-proxy.conf && nginx -g 'daemon off;'
        ports:
        - containerPort: 8500
          name: http
        env:
        - name: MY_NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      volumes:
      - name: config
        configMap:
          name: consul-kv-proxy
      # If ACLs are enabled, the serviceAccountName must match the Consul service name.
      serviceAccountName: consul-kv-proxy
A downstream service (called static-client) can now be declared like this:
apiVersion: v1
kind: Service
metadata:
  name: static-client
spec:
  selector:
    app: static-client
  ports:
  - port: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      name: static-client
      labels:
        app: static-client
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/connect-service-upstreams': 'consul-kv-proxy:8500'
    spec:
      containers:
      - name: static-client
        image: curlimages/curl:latest
        # Just spin & wait forever; we'll use `kubectl exec` to demo.
        command: ['/bin/sh', '-c', '--']
        args: ['while true; do sleep 30; done;']
      serviceAccountName: static-client
Assume we have a key/value pair in Consul called "test".
From a pod of the static-client we can now access the Consul HTTP API with:
curl http://localhost:8500/v1/kv/test
This solution still lacks fine-tuning (I have not tried HTTPS or ACLs).

Kubernetes (K8s) Minikube: how to use a service URL in a ConfigMap so other pods can use it

database-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgres-db
spec:
  replicas:
  selector:
    matchLabels:
      app: postgres-db
  template:
    metadata:
      labels:
        app: postgres-db
    spec:
      containers:
      - name: postgres-db
        image: postgres:latest
        ports:
        - protocol: TCP
          containerPort: 1234
        env:
        - name: POSTGRES_DB
          value: "classroom"
        - name: POSTGRES_USER
          value: temp
        - name: POSTGRES_PASSWORD
          value: temp
database-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: database-service
spec:
  selector:
    app: postgres-db
  ports:
  - protocol: TCP
    port: 1234
    targetPort: 1234
I want to use this database-service URL in another deployment, so I tried to add it in a ConfigMap:
my-configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: classroom-configmap
data:
  database_url: database-service
[Not working] Expected: database_url: database-service (to be replaced with the corresponding service URL).
ERROR - Driver org.postgresql.Driver claims to not accept jdbcUrl, database-service
$ kubectl describe configmaps classroom-configmap
Output:
Name:         classroom-configmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
database_url:
----
database-service

BinaryData
====

Events:  <none>
According to the error you are having:
Driver org.postgresql.Driver claims to not accept jdbcUrl
it seems that there are a few issues with that URL that a recent PostgreSQL driver may complain about:
jdbc:postgres: isn't right; use jdbc:postgresql: instead.
Do not put credentials in the URL as jdbc:postgresql://<username>:<password>@...; use parameters instead: jdbc:postgresql://<host>:<port>/<dbname>?user=<username>&password=<password>
In some cases you have to force an SSL connection by adding the sslmode=require parameter.
Updated my-configMap.yaml (database_url):
apiVersion: v1
kind: ConfigMap
metadata:
  name: classroom-configmap
data:
  database_url: jdbc:postgresql://database-service.default.svc.cluster.local:5432/classroom
The expected URL format is jdbc:{DATABASE}://{DATABASE_SERVICE with NAMESPACE}:{DATABASE_PORT}/{DATABASE_NAME}, where:
DATABASE_SERVICE: database-service
NAMESPACE: default
DATABASE_SERVICE with NAMESPACE: database-service.default.svc.cluster.local
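For completeness, a minimal sketch (my addition; the container and variable names are illustrative) of how another deployment can consume that key as an environment variable:
      containers:
      - name: classroom-app
        image: classroom-app:latest
        env:
        # Pulls the JDBC URL from the ConfigMap at pod startup.
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: classroom-configmap
              key: database_url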

How to "kubectl get ep" in deployment.yaml

I have a Kubernetes deployment using environment variables, and I wonder how to set dynamic endpoints in it.
For the moment, I use
$ kubectl get ep rtspcroatia
NAME          ENDPOINTS         AGE
rtspcroatia   172.17.0.8:8554   3h33m
and copy/paste the endpoint's value into my deployment.yaml. For me, that's not the right way to do it, but I can't find another method.
Here is a part of my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: person-cam0
  name: person-cam0
spec:
  template:
    metadata:
      labels:
        io.kompose.service: person-cam0
    spec:
      containers:
      - env:
        - name: S2_LOGOS_INPUT_ADDRESS
          value: rtsp://172.17.0.8:8554/live.sdp
        image: ******************
        name: person-cam0
EDIT: and here is the service of the RTSP container:
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: rtspcroatia
  name: rtspcroatia
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8551
    targetPort: 8554
  selector:
    io.kompose.service: rtspcroatia
Can you help me to have something like:
containers:
- env:
  - name: S2_LOGOS_INPUT_ADDRESS
    value: rtsp://$ENDPOINT_ADDR:$ENDPOINT_PORT/live.sdp
Thank you!
You could set dynamic endpoint values like "POD_IP:SERVICE_PORT", as shown in the sample YAML below.
containers:
- env:
  - name: MY_ENDPOINT_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  # Note: the kubelet only expands $(VAR) references in env values, not $VAR.
  # RTSPCROATIA_SERVICE_PORT is injected automatically for the rtspcroatia service.
  - name: S2_LOGOS_INPUT_ADDRESS
    value: rtsp://$(MY_ENDPOINT_IP):$(RTSPCROATIA_SERVICE_PORT)/live.sdp
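Alternatively (my addition, assuming cluster DNS is in place), you can avoid chasing endpoint IPs entirely by addressing the Service's stable DNS name and service port; kube-proxy then forwards to whatever endpoints currently back the Service:
containers:
- env:
  - name: S2_LOGOS_INPUT_ADDRESS
    # rtspcroatia resolves through cluster DNS; 8551 is the service port declared above.
    value: rtsp://rtspcroatia:8551/live.sdp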

How to show hostname or pod info in kubernetes with an echoserver?

I am searching for the command to print out the pod name (or hostname) when I call my echoserver (gcr.io/google_containers/echoserver). I saw that in a video about load balancing and ingress, as a proof of concept showing which server responds when I hit the refresh button in the browser. But I cannot remember how that worked or where that was, and I searched the web without finding any hint.
At the moment my ReplicaSet looks like this; maybe I am missing an env variable or something like that.
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: echoserver
spec:
  replicas: 1
  template:
    metadata:
      name: echoserver
      labels:
        project: chapter5
        service: echoserver
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
I got it: I have to raise the version! With versions greater than 1.4 it works.
So the correct one is the current version, 1.10:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: echoserver
spec:
  replicas: 1
  template:
    metadata:
      name: echoserver
      labels:
        project: chapter5
        service: echoserver
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google_containers/echoserver:1.10
        ports:
        - containerPort: 8080
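For reference (my addition): with 1.10 the response body includes a Hostname: line containing the pod name, which is what makes the refresh-in-the-browser demo show which replica answered.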