Istio direct Pod to Pod communication - kubernetes

I have a problem with direct Pod-to-Pod communication when the calling Pod is deployed with Istio. I actually need it to make Hazelcast discovery work with Istio, but I'll try to generalize the issue here.
Let's take a sample hello-world service deployed on Kubernetes. The service replies to HTTP requests on port 8000.
$ kubectl create deployment hello-deployment --image=crccheck/hello-world
The created Pod has an internal IP assigned:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
hello-deployment-84d876dfd-s6r5w 1/1 Running 0 8m 10.20.3.32 gke-rafal-test-istio-1-0-default-pool-91f437a3-cf5d <none>
In the job curl.yaml, we can use the Pod IP directly.
apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: byrnedo/alpine-curl
        command: ["curl", "10.20.3.32:8000"]
      restartPolicy: Never
  backoffLimit: 4
Running the job without Istio works fine.
$ kubectl apply -f curl.yaml
$ kubectl logs pod/curl-pptlm
...
Hello World
...
However, when I try to do the same with Istio, it does not work. The HTTP request gets blocked by Envoy.
$ kubectl apply -f <(istioctl kube-inject -f curl.yaml)
$ kubectl logs pod/curl-s2bj6 curl
...
curl: (7) Failed to connect to 10.20.3.32 port 8000: Connection refused
I've played with Service Entries, MESH_INTERNAL, and MESH_EXTERNAL, but without success. How can I bypass Envoy and make a direct call to a Pod?
EDIT: The output of istioctl kube-inject -f curl.yaml.
$ istioctl kube-inject -f curl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: curl
spec:
  backoffLimit: 4
  template:
    metadata:
      annotations:
        sidecar.istio.io/status: '{"version":"dbf2d95ff300e5043b4032ed912ac004974947cdd058b08bade744c15916ba6a","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
      creationTimestamp: null
    spec:
      containers:
      - command:
        - curl
        - 10.16.2.34:8000/
        image: byrnedo/alpine-curl
        name: curl
        resources: {}
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/envoy
        - --serviceCluster
        - curl.default
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istio-pilot.istio-system:15010
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --connectTimeout
        - 10s
        - --proxyAdminPort
        - "15000"
        - --concurrency
        - "2"
        - --controlPlaneAuthPolicy
        - NONE
        - --statusPort
        - "15020"
        - --applicationPorts
        - ""
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ISTIO_META_CONFIG_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        image: docker.io/istio/proxyv2:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15020
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      initContainers:
      - args:
        - -p
        - "15001"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - ""
        - -d
        - "15020"
        image: docker.io/istio/proxy_init:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
      restartPolicy: Never
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          secretName: istio.default
status: {}
---

When a Pod with an Istio sidecar is started, the following things happen:
- an init container changes the iptables rules so that all outgoing TCP traffic is routed to the sidecar container (istio-proxy) on port 15001,
- the containers of the Pod are started in parallel (curl and istio-proxy).
If your curl container is executed before istio-proxy listens on port 15001, you get the error.
I started the container below with a sleep command, exec'd into it, and the curl worked.
$ kubectl apply -f <(istioctl kube-inject -f curl-pod.yaml)
$ k exec -it -n noistio curl -c curl bash
[root@curl /]# curl 172.16.249.198:8000
<xmp>
Hello World
## .
## ## ## ==
## ## ## ## ## ===
/""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o _,/
\ \ _,'
`'--.._\..--''
</xmp>
[root@curl /]#
curl-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: centos
    command: ["sleep", "3600"]

Make sure that you have configured an ingress Gateway, and after doing that you need to configure a VirtualService. See the link below for a simple example.
https://istio.io/docs/tasks/traffic-management/ingress/#configuring-ingress-using-an-istio-gateway
Once you have deployed the gateway along with the virtual service, you should be able to curl your service from outside the cluster via an external IP.
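A minimal sketch of that setup, adapted from the linked task (the names hello-gateway and hello-svc are assumptions, and the hello-world Deployment would also need a ClusterIP Service in front of it for the route to resolve):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello
spec:
  hosts:
  - "*"
  gateways:
  - hello-gateway
  http:
  - route:
    - destination:
        host: hello-svc
        port:
          number: 8000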
But if you want to check for traffic from INSIDE the cluster you will need to use istio's mirroring API to mirror the service (pod) from one pod to another pod, and THEN use your command (kubectl apply -f curl.yaml) to see the traffic.
See link below for mirroring example:
https://istio.io/docs/tasks/traffic-management/mirroring/
Hope this helps.

Related

Where to execute kube-proxy command?

From this article, I can specify 'userspace' as my proxy mode, but I am unable to understand what command I need to run for it, and at what stage. After creating the deployment or the service?
I am running a minikube cluster currently.
kube-proxy is a process that runs on each Kubernetes node to manage network connections coming into and out of the cluster.
You don't run the command as such; your deployment method (usually kubeadm) configures the options for it to run.
As @Hang Du mentioned, in minikube you can modify its options by editing the kube-proxy ConfigMap and changing mode to userspace:
kubectl -n kube-system edit configmap kube-proxy
Then delete the Pod.
kubectl -n kube-system get pod
kubectl -n kube-system delete pod kube-proxy-XXXXX
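For reference, the relevant part of that ConfigMap looks roughly like this (a sketch of the kubeadm/minikube layout, where the config.conf key holds a KubeProxyConfiguration and an empty mode falls back to iptables):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    ...
    mode: "userspace"    # change from "" (defaults to iptables) to "userspace"
  ...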
If you are using minikube, you can find a DaemonSet named kube-proxy like the following:
$ kubectl get ds -n kube-system kube-proxy -o yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  ...
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: k8s.gcr.io/kube-proxy:v1.15.0
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        ...
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      dnsPolicy: ClusterFirst
      ...
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
          type: ""
        name: lib-modules
      ...
Look at .spec.template.spec.containers[].command: the container runs the kube-proxy command. You can provide the flag --proxy-mode=userspace in the command array.
- command:
  - /usr/local/bin/kube-proxy
  - --config=/var/lib/kube-proxy/config.conf
  - --hostname-override=$(NODE_NAME)
  - --proxy-mode=userspace
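Equivalently, instead of editing the DaemonSet interactively, the flag can be appended with a JSON patch (a sketch, assuming --proxy-mode is not already present in the command array):
kubectl -n kube-system patch daemonset kube-proxy --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--proxy-mode=userspace"}]'
Depending on the DaemonSet's updateStrategy you may still need to delete the running kube-proxy Pods, as shown earlier, for the change to take effect.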

Can't modify ETCD manifest for Kubernetes static pod

I'd like to modify the etcd Pod to listen on 0.0.0.0 (or the host machine IP) instead of 127.0.0.1.
I'm working on a migration from a single-master to a multi-master Kubernetes cluster, but I have run into an issue: after I modified /etc/kubernetes/manifests/etcd.yaml with the correct settings and restarted the kubelet and even the docker daemon, etcd is still working on 127.0.0.1.
Inside the docker container I still see that etcd started with --listen-client-urls=https://127.0.0.1:2379 instead of the host IP.
cat /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.22.9:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.22.9:2380
    - --initial-cluster=test-master-01=https://192.168.22.9:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://192.168.22.9:2379
    - --listen-peer-urls=https://192.168.22.9:2380
    - --name=test-master-01
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
          get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}
[root@test-master-01 centos]# kubectl -n kube-system get po etcd-test-master-01 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: c3eef2d48a776483adc00311df8cb940
    kubernetes.io/config.mirror: c3eef2d48a776483adc00311df8cb940
    kubernetes.io/config.seen: 2019-05-24T13:50:06.335448715Z
    kubernetes.io/config.source: file
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: 2019-05-24T14:08:14Z
  labels:
    component: etcd
    tier: control-plane
  name: etcd-test-master-01
  namespace: kube-system
  resourceVersion: "6288"
  selfLink: /api/v1/namespaces/kube-system/pods/etcd-test-master-01
  uid: 5efadb1c-7e2d-11e9-adb7-fa163e267af4
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://127.0.0.1:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://127.0.0.1:2380
    - --initial-cluster=test-master-01=https://127.0.0.1:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379
    - --listen-peer-urls=https://127.0.0.1:2380
    - --name=test-master-01
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
          get foo
First check your kubelet option --pod-manifest-path and put your corrected yaml in that path.
To make sure the etcd pod gets deleted, move the yaml file out of the pod-manifest-path and wait until the pod is gone (check with docker ps -a). Then put your corrected yaml file back into the pod-manifest-path.
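In shell terms, that procedure looks roughly like this (a sketch assuming the default kubeadm manifest path):
# temporarily remove the manifest so the kubelet stops the static pod
mv /etc/kubernetes/manifests/etcd.yaml /tmp/etcd.yaml
# wait until the etcd container is gone
docker ps -a | grep etcd
# adjust /tmp/etcd.yaml as needed, then put it back so the kubelet recreates the pod
mv /tmp/etcd.yaml /etc/kubernetes/manifests/etcd.yaml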
I reviewed my automation scripts step by step and found that I had made a backup of the etcd yaml in the same folder, with a .bak extension. It looks like the kubelet daemon loads all the files inside the manifests folder, regardless of the file extension.
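So the fix is simply to keep backups outside the static pod manifest directory, for example (the exact backup filename is an assumption):
mkdir -p /root/manifest-backups
mv /etc/kubernetes/manifests/etcd.yaml.bak /root/manifest-backups/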

Kubernetes: how to copy a cfg file into a container before the container starts running?

My service needs a cfg file which has to be changed before the containers start running, so it is not suitable to pack the cfg into the docker image.
I need to copy it from the cluster into the container, and then the service in the container starts up and reads this cfg.
How can I do this?
I think for your use case, init containers might be the best fit. Init containers are like small scripts that you can run before starting your own containers in a Kubernetes pod; they must exit. You can have this config file placed in a volume (a Persistent Volume or an emptyDir) shared between your init container and your container.
The following article gives a nice example of how this can be done:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519
UPDATE: I found another answer on Stack Overflow which might be related and give you a better approach to handling this:
can i use a configmap created from an init container in the pod
An example:
Output:
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 70s
[root@green--1 ~]# k exec -it cassandra-0 -n green -- /bin/bash
root#cassandra-0:/# ls /config/cassandra/
cassandra.yaml
How the directory is shared
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  namespace: green
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 1
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      initContainers:
      - name: copy
        image: busybox:1.28
        command: ["/bin/sh", "-c", "cp /config/cassandra.yaml /config/cassandra/"]
        volumeMounts:
        - name: tmp-config
          mountPath: /config/cassandra/
        - name: cassandraconfig
          mountPath: /config/
      containers:
      - name: cassandra
        #image: gcr.io/google-samples/cassandra:v13
        image: cassandra:3.11.6
        imagePullPolicy: Always
        #command: [ "/bin/sh","-c","su cassandra && mkdir -p /etc/cassandra/ && cp /config/cassandra/cassandra.yaml /etc/cassandra/" ]
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 4Gi
          requests:
            cpu: "500m"
            memory: 4Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - "cp /config/cassandra/cassandra.yaml /etc/cassandra/"
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 1G
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra-0.cassandra.green.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "green"
        - name: CASSANDRA_DC
          value: "ee1-green"
        - name: CASSANDRA_RACK
          value: "Rack1-green"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 55
        volumeMounts:
        - name: cassandradata
          mountPath: /cassandra_data
        - name: tmp-config
          mountPath: /config/cassandra/
      volumes:
      - name: cassandraconfig
        configMap:
          name: cassandraconfig
      - name: tmp-config
        emptyDir: {}
        # Creating a volume to move data from the init container to the main container without making the mount read-only
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  volumeClaimTemplates:
  - metadata:
      name: cassandradata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 5Gi
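The StatefulSet above assumes a ConfigMap named cassandraconfig already exists in the green namespace; it is not shown in the answer, but it could be created from the config file roughly like this:
kubectl -n green create configmap cassandraconfig --from-file=cassandra.yaml
The init container then copies cassandra.yaml from the ConfigMap mount (/config/) into the shared emptyDir (/config/cassandra/), and the main container's postStart hook copies it on into /etc/cassandra/.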
You could use a ConfigMap. Create a ConfigMap resource and configure your container to load the ConfigMap accordingly. Your container will then be able to load those values from its environment variables or from a mounted file.
Here is the reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
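A minimal sketch of that approach, mounting the ConfigMap as a file (the names my-config and my-app and the cfg contents are hypothetical; envFrom can be used instead if the values should become environment variables):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  service.cfg: |
    log_level=info
    listen_port=8000
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: busybox
    command: ["sh", "-c", "cat /etc/myapp/service.cfg && sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/myapp
  volumes:
  - name: config
    configMap:
      name: my-config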

Cannot execute netcat inside a Docker/Kubernetes container

I am trying to deploy a few Docker containers via Kubernetes within an AWS cluster. I have no problems with the deployment itself, but I am unable to execute netcat from inside a container.
If I enter the container (kubectl exec -it example-0 bash) and I attempt to execute: nc -w 1 -q 1 127.0.0.1 2181, it fails with:
(UNKNOWN) [127.0.0.1] 2181 (?) : Connection refused
This becomes a problem for some of my containers because I am using the netcat command to implement the corresponding readiness probes.
Example:
Deploying a Zookeeper container:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  serviceName: zookeeper
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      terminationGracePeriodSeconds: 1800
      nodeSelector:
        proc_host: "yes"
        host_role: "iw"
      containers:
      - name: zookeeper
        image: imageregistry:443/mydomain/zookeeper
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        resources:
          limits:
            memory: 120Mi
          requests:
            cpu: "10m"
            memory: 100Mi
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-ce", "kill -s TERM 1; while $(kill -0 1 2>/dev/null); do sleep 1; done"]
        env:
        - name: LOGGING_LEVEL
          value: WARN
        - name: ID_OFFSET
          value: "4"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]'
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: zookeeper
          mountPath: /tmp/zookeeper
      volumes:
      - name: zookeeper
        hostPath:
          path: /var/lib/kafka/zookeeper
and it deploys fine, but the readiness probe fails with the following error:
Readiness probe failed: (UNKNOWN) [127.0.0.1] 2181 (?) : Connection refused
I am new to Kubernetes; does anybody know what I am missing?
Thank you for your help.

Service not exposing in kubernetes

I have a deployment and a service in GKE. I exposed the deployment as a Load Balancer but I cannot access it through the service (curl or browser). I get:
curl: (7) Failed to connect to <my-Ip-Address> port 443: Connection refused
I can port forward directly to the pod and it works fine:
kubectl --namespace=redfalcon port-forward web-service-rf-76967f9c68-2zbhm 9999:443 >> /dev/null
curl -k -v --request POST --url https://localhost:9999/auth/login/ --header 'content-type: application/json' --header 'x-profile-key: ' --data '{"email":"<testusername>","password":"<testpassword>"}'
I have most likely misconfigured my service but cannot see how. Any help on what I did wrong would be very much appreciated.
Service Yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: red-falcon-lb
  namespace: redfalcon
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    protocol: TCP
  selector:
    app: web-service-rf
Deployment YAML
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: web-service-rf
spec:
  selector:
    matchLabels:
      app: web-service-rf
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: web-service-rf
    spec:
      initContainers:
      - name: certificate-init-container
        image: proofpoint/certificate-init-container:0.2.0
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - "-namespace=$(NAMESPACE)"
        - "-pod-name=$(POD_NAME)"
        - "-query-k8s"
        volumeMounts:
        - name: tls
          mountPath: /etc/tls
      containers:
      - name: web-service-rf
        image: gcr.io/redfalcon-186521/redfalcon-webserver-minimal:latest
        # image: gcr.io/redfalcon-186521/redfalcon-webserver-full:latest
        command:
        - "./server"
        - "--port=443"
        imagePullPolicy: Always
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 443
        resources:
          limits:
            memory: "500Mi"
            cpu: "100m"
        volumeMounts:
        - mountPath: /etc/tls
          name: tls
        - mountPath: /var/secrets/google
          name: google-cloud-key
      volumes:
      - name: tls
        emptyDir: {}
      - name: google-cloud-key
        secret:
          secretName: pubsub-key
Output of kubectl describe svc red-falcon-lb:
Name: red-falcon-lb
Namespace: redfalcon
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"red-falcon-lb","namespace":"redfalcon"},"spec":{"ports":[{"name":"https","port...
Selector: app=web-service-rf
Type: LoadBalancer
IP: 10.43.245.9
LoadBalancer Ingress: <EXTERNAL IP REDACTED>
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31524/TCP
Endpoints: 10.40.0.201:443,10.40.0.202:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 39m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 38m service-controller Ensured load balancer
I figured out what it was...
My golang app was listening on localhost instead of 0.0.0.0. This meant that port forwarding on kubectl worked but any service exposure didn't work.
I had to add "--host 0.0.0.0" to my k8s command and it then listened to requests from outside localhost.
My command ended up being...
"./server --port 8080 --host 0.0.0.0"