Istio egress traffic is not routed through the istio-proxy sidecar - kubernetes

We have a minimal example of routing egress traffic to an external service outside of the service mesh:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: nexus-test
  namespace: REDACTED
spec:
  hosts:
  - nexus.REDACTED
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
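For reference, we generate test traffic from the source pod with something like the following (a sketch; it assumes curl is available in the application container, and the container name is a placeholder):
kubectl exec $SOURCE_POD -c <app-container> -- curl -sI https://nexus.REDACTED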
However, we are not seeing the traffic pass through the istio-proxy sidecar when using the command
kubectl logs $SOURCE_POD -c istio-proxy | tail
We are also not seeing the traffic on the Mixer using the command:
kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep 'nexus'
as suggested in the documentation https://istio.io/docs/tasks/traffic-management/egress/egress-control/#access-an-external-https-service
Can anyone help us figure out what could be wrong?
Best regards,
rforberger

It works now. It may have been conflicting with other services in the service mesh.

Related

Allowing traffic between different pods using pod network policy

I have created the below pod in the default namespace:
kubectl run myhttpd --image="docker.io/library/nginx:latest" --restart=Never -l app=httpd-server --port 80
I created another Pod in a different namespace to check connectivity to port 80 in the default namespace with the below command:
kubectl run cli-httpd --rm -it --image=busybox --restart=Never -l app=myhttpd -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 100.64.9.198 (IP of application in default namespace)
In order to allow connectivity between both namespaces, I have created the below pod NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-ingress-80
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myhttpd
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.64.8.0/22
    ports:
    - protocol: TCP
      port: 80
10.64.8.0/22 is the Pods network range.
But the connection is timing out. Please suggest how to allow this connectivity.
In NetworkPolicy, the ipBlock is usually meant to allow communications from outside your SDN.
What you want to do is to filter based on pod labels.
Having started your test pod, check its labels:
kubectl get pods --show-labels
Pick one that identifies your Pod while not matching anything else, then fix your NetworkPolicy. It should look something like:
spec:
  ingress:
  - from:
    - podSelector: # assuming client pod belongs to same namespace as application
        matchLabels:
          app: my-test # netpol allows connections from any pod with label app=my-test
    ports:
    - port: 80 # netpol allows connections to port 80 only
      protocol: TCP
  podSelector:
    matchLabels:
      app: myhttpd # netpol applies to any pod with label app=myhttpd
  policyTypes:
  - Ingress
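Since the client pod in your question actually runs in a different namespace, the from clause would also need a namespaceSelector combined with the podSelector. A rough sketch, assuming the client namespace carries a label you can match on (the automatic kubernetes.io/metadata.name label on newer Kubernetes, or any label you add yourself; the namespace name below is a placeholder):
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: client-namespace # namespace of the client pod (name assumed)
      podSelector: # no leading dash: combined (ANDed) with the namespaceSelector above
        matchLabels:
          app: my-test # label identifying the client pod
    ports:
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      app: myhttpd
  policyTypes:
  - Ingress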
That said, I'm not certain what the NetworkPolicy specification says regarding ipBlocks (can they refer to SDN ranges?). Depending on your SDN, your configuration might work in some cases. Maybe your issue is only related to the label selectors?
Note: to allow connections from everywhere, I would use:
spec:
  ingress:
  - {}
  ....

Ingress-nginx: how to set the externalIPs of nginx ingress to only 1 external IP

I installed nginx ingress with the YAML file
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
When deployed, I can see that the endpoints/externalIPs are by default all the IPs of my nodes,
but I only want 1 external IP to be accessible for my applications.
I tried bind-address (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#bind-address) in a configuration file and applied it, but it doesn't work. My ConfigMap file:
apiVersion: v1
data:
  bind-address: "192.168.30.16"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
I tried kubectl edit svc/ingress-nginx-controller -n ingress-nginx to edit the svc, adding externalIPs, but it still doesn't work.
The only thing the nginx ingress documentation mentions is https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#external-ips, but when I tried editing the svc it was set to a single IP after my change, and later the IPs were re-added again. It seems like there is an automatic external-IP update mechanism in ingress-nginx?
Is there any way to set the nginx ingress external IPs to only 1 of the node IPs? I'm running out of options for googling this. Hope someone can help me.
but I only want 1 external IP to be accessible for my applications
If you wish to "control" who can access your service(s), and from which IP/subnet/namespace etc., you should use a NetworkPolicy:
https://kubernetes.io/docs/concepts/services-networking/network-policies/
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
Other pods that are allowed (exception: a pod cannot block access to itself)
Namespaces that are allowed.
IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector.
Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Depending on whether there is a LoadBalancer implementation for your cluster, that might work as intended.
If you want to use a specific node, use type: NodePort
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
It might then also be useful to use a nodeSelector so you can control which node the nginx controller gets scheduled to, for DNS reasons.
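A rough sketch of that approach (the namespace and resource names come from the default manifest you applied; the node name is an assumption, check yours with kubectl get nodes --show-labels):
# expose the controller via a NodePort instead of all node IPs
kubectl -n ingress-nginx patch svc ingress-nginx-controller -p '{"spec":{"type":"NodePort"}}'
# pin the controller to one node via its kubernetes.io/hostname label (node name assumed)
kubectl -n ingress-nginx patch deployment ingress-nginx-controller -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"node-1"}}}}}'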

Find the pod/VM instance that is serving behind a kubernetes service

I have a legacy system (the guy who built it has resigned and cannot be contacted). It is a microservice app hosted on GKE. There is one particular service that is quite strange.
It is a redis service (the other pods that use this service via its internal IP address can reach it and do the redis PING-PONG). However, I see that there are no pods behind this service (0/0). Any idea how this could happen?
The YAML file of the service is below:
I don't see any services, deployments, or pods called node-1, node-2, or node-3 in our kubernetes cluster, so it is quite strange to me.
Does anyone know about this?
I have read kubernetes documentation and googled for solutions but I could not find any explanation.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --record=true --filename=production/svc/dispatchcache-shard-service.yaml"},"name":"dispatchcache-shard","namespace":"default"},"spec":{"ports":[{"name":"node-1","port":7000,"protocol":"TCP","targetPort":7000},{"name":"node-2","port":7001,"protocol":"TCP","targetPort":7000},{"name":"node-3","port":7002,"protocol":"TCP","targetPort":7000}],"type":"ClusterIP"}}
    kubernetes.io/change-cause: kubectl apply --record=true --filename=production/svc/dispatchcache-shard-service.yaml
  creationTimestamp: 2018-10-03T08:11:41Z
  name: dispatchcache-shard
  namespace: default
  resourceVersion: "297308103"
  selfLink: /api/v1/namespaces/default/services/dispatchcache-shard
  uid: f55bd4d0-c6e3-11e8-9489-42010af00219
spec:
  clusterIP: 10.171.255.152
  ports:
  - name: node-1
    port: 7000
    protocol: TCP
    targetPort: 7000
  - name: node-2
    port: 7001
    protocol: TCP
    targetPort: 7000
  - name: node-3
    port: 7002
    protocol: TCP
    targetPort: 7000
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I expected I could find the pods, deployments, or instances that are actually serving the redis service.
You can easily find IPs of pods that are serving this service using this command:
kubectl get endpoints dispatchcache-shard
After that, you can find the actual pods by their IP addresses with this command:
kubectl get pod -o wide
I want to add that there should be a very strong reason to define a service without label selectors. Not sure this is your case.
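For reference, a Service without a selector is usually backed by a manually managed Endpoints object with the same name (pointing at pods, VMs, or anything else reachable by IP). A minimal sketch of what such an Endpoints object could look like for this service; the backend IP is an assumption:
apiVersion: v1
kind: Endpoints
metadata:
  name: dispatchcache-shard # must match the Service name
  namespace: default
subsets:
- addresses:
  - ip: 10.128.0.11 # assumed backend IP, e.g. a VM or pod running redis
  ports:
  - name: node-1
    port: 7000
    protocol: TCP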

Istio JWT authentication passes traffic without token

Background:
There was a similar question: Here but it didn't offer a solution to my issue.
I have deployed an application, which is working as expected, to my Istio cluster. I wanted to enable JWT authentication, so I adapted the instructions Here to my use-case.
ingressgateway:
I first applied the following policy to the istio-ingressgateway. This worked and any traffic sent without a JWT token was blocked.
kubectl apply -n istio-system -f mypolicy.yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: core-api-policy
  namespace: istio-system
spec:
  targets:
  - name: istio-ingressgateway
    ports:
    - number: 80
  origins:
  - jwt:
      issuer: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL"
      jwksUri: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
Once that worked I deleted this policy and installed a new policy for my service.
kubectl delete -n istio-system -f mypolicy.yaml
service/core-api-service:
After editing the above policy, changing the namespace and target as below, I reapplied the policy to the correct namespace.
Policy:
kubectl apply -n solarmori -f mypolicy.yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: core-api-policy
  namespace: solarmori
spec:
  targets:
  - name: core-api-service
    ports:
    - number: 80
  origins:
  - jwt:
      issuer: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL"
      jwksUri: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
Service:
apiVersion: v1
kind: Service
metadata:
  name: core-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: api-svc-port
    targetPort: api-app-port
  selector:
    app: core-api-app
The outcome of this action didn't appear to change anything in the processing of traffic. I was still able to reach my service even though I did not provide a JWT.
I checked the istio-proxy of my service's deployment, and there was no creation of a local_jwks in the logs as described Here.
[procyclinsur#P-428 istio]$ kubectl logs -n solarmori core-api-app-5dd9666777-qhf5v -c istio-proxy | grep local_jwks
[procyclinsur#P-428 istio]$
If anyone knows where I am going wrong I would greatly appreciate any help.
For a Service to be part of Istio's service mesh, you need to fulfill some requirements, as shown in the official docs.
In your case, the service port name needs to be updated to:
<protocol>[-<suffix>] with the <protocol> as either:
grpc
http
http2
https
mongo
mysql
redis
tcp
tls
udp
At that point, requests forwarded to the service will go through the service mesh; currently, requests are resolved by plain Kubernetes networking.
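Applied to the Service from the question, that would mean only renaming the port so that it starts with the protocol, for example:
apiVersion: v1
kind: Service
metadata:
  name: core-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http-api-svc-port # was api-svc-port; the http prefix lets Istio treat this port as HTTP
    targetPort: api-app-port
  selector:
    app: core-api-app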

Mapping incoming port in kubernetes service to different port on docker container

This is the way I understand the flow in question:
When requesting a kubernetes service (via http for example) I am using port 80.
The request is forwarded to a pod (still on port 80)
The pod forwards the request to the (Docker) container, which exposes port 80
The container handles the request
However my container exposes a different port, let's say 3000.
How can I make a port mapping like 80:3000 in step 2 or 3?
There are confusing options like targetPort and hostPort in the kubernetes docs, which didn't help me. kubectl port-forward seems to forward only my local (development) machine's port to a specific pod, for debugging.
These are the commands I use for setting up a service in the google cloud:
kubectl run test-app --image=eu.gcr.io/myproject/my_app --port=80
kubectl expose deployment test-app --type="LoadBalancer"
I found that I needed to add some arguments to my second command:
kubectl expose deployment test-app --type="LoadBalancer" --target-port=3000 --port=80
This creates a service which directs incoming http traffic (on port 80) to its pods on port 3000.
A nicer way to do this whole thing is with the YAML files deployment.yaml and service.yaml and calling
kubectl create -f deployment.yaml
kubectl create -f service.yaml
where the files have these contents
# deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: user-app
        image: eu.gcr.io/myproject/my_app
        ports:
        - containerPort: 3000
and
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: test-app
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
Note that the selector of the service must match the label of the deployment.
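A quick way to verify the mapping once both resources are applied (a sketch; the external IP will differ per cluster and may take a minute to be provisioned):
kubectl get service app-service    # note the EXTERNAL-IP assigned by the LoadBalancer
curl http://<EXTERNAL-IP>/         # port 80 on the service is forwarded to port 3000 in the container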