Trying to set up an ingress with tls and open to some IPs only on GKE - kubernetes

I'm having trouble setting up an ingress that is open only to some specific IPs. I've checked the docs and tried a lot of things, but IPs outside the allowed source range can still reach it. The application is a Zabbix web interface running on Alpine with nginx. I set up a Service of type NodePort on port 80 and then used an Ingress to create a load balancer on GCP. It's all working and the web interface is fine, but how can I make it accessible only to the desired IPs?
my firewall rules are ok and it's only accessible through load balancer IP
Also, I have a specific namespace for this deploy.
Cluster version 1.11.5-gke.5
EDIT i'm using GKE standard ingress GLBC
My template is configured as follows; can someone help me understand what is missing?
apiVersion: v1
kind: ReplicationController
metadata:
  name: zabbix-web
  namespace: zabbix-prod
  labels:
    app: zabbix
    tier: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zabbix-web
        app: zabbix
    spec:
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            defaultMode: 420
            secretName: cloudsql-instance-credentials
      containers:
        - command:
            - /cloud_sql_proxy
            - -instances=<conection>
            - -credential_file=/secrets/cloudsql/credentials.json
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          imagePullPolicy: IfNotPresent
          name: cloudsql-proxy
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            runAsUser: 2
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /secrets/cloudsql
              name: credentials
              readOnly: true
        - name: zabbix-web
          image: zabbix/zabbix-web-nginx-mysql:alpine-3.2-latest
          ports:
            - containerPort: 80
          env:
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  key: <user>
                  name: <user>
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: <pass>
                  name: <pass>
            - name: DB_SERVER_HOST
              value: 127.0.0.1
            - name: MYSQL_DATABASE
              value: <db>
            - name: ZBX_SERVER_HOST
              value: <db>
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /index.php
              port: 80
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: "zabbix-web-service"
  namespace: "zabbix-prod"
  labels:
    app: zabbix
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: "zabbix-web"
  type: "NodePort"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zabbix-web-ingress
  namespace: zabbix-prod
  annotations:
    ingress.kubernetes.io/service.spec.externalTrafficPolicy: local
    ingress.kubernetes.io/whitelist-source-range: <xxx.xxx.xxx.xxx/32>
spec:
  tls:
    - secretName: <tls-cert>
  backend:
    serviceName: zabbix-web-service
    servicePort: 80

You can whitelist IPs by combining the Ingress with Cloud Armor:
Switch to project:
gcloud config set project $PROJECT
Create a policy:
gcloud compute security-policies create $POLICY_NAME --description "whitelisting"
Change default policy to deny:
gcloud compute security-policies rules update 2147483647 --action=deny-403 \
--security-policy $POLICY_NAME
At a higher priority (a lower number) than the default rule, allow all the IPs you want to whitelist:
gcloud compute security-policies rules create 2 \
--action allow \
--security-policy $POLICY_NAME \
--description "allow friends" \
--src-ip-ranges "93.184.17.0/24,151.101.1.69/32"
A rule accepts at most ten IP ranges.
Note that you need valid CIDR ranges; an online CIDR/IP-range converter can help with that.
View the policy as follows:
gcloud compute security-policies describe $POLICY_NAME
To throw away an entry:
gcloud compute security-policies rules delete $PRIORITY --security-policy $POLICY_NAME
or the full policy:
gcloud compute security-policies delete $POLICY_NAME
Create a BackendConfig for the policy:
# File backendconfig.yaml:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  namespace: <namespace>
  name: <name>
spec:
  securityPolicy:
    name: $POLICY_NAME
$ kubectl apply -f backendconfig.yaml
backendconfig.cloud.google.com/backendconfig-name created
Add the BackendConfig to the Service:
metadata:
  namespace: <namespace>
  name: <service-name>
  labels:
    app: my-app
  annotations:
    cloud.google.com/backend-config: '{"ports": {"80":"backendconfig-name"}}'
spec:
  type: NodePort
  selector:
    app: hello-app
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
Use the right selectors and point the receiving port of the Service to the BackendConfig created earlier.
Cloud Armor will then attach the policy to the GKE service.
It is visible at https://console.cloud.google.com/net-security/securitypolicies (after selecting $PROJECT).
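Applied to the question's Zabbix setup, the whole chain would look roughly like the sketch below (the BackendConfig and policy names are placeholders; the apiVersion follows the steps above, older GKE versions may need cloud.google.com/v1beta1):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: zabbix-web-backendconfig
  namespace: zabbix-prod
spec:
  securityPolicy:
    name: <policy-name>
---
apiVersion: v1
kind: Service
metadata:
  name: zabbix-web-service
  namespace: zabbix-prod
  labels:
    app: zabbix
  annotations:
    # tells the GCE ingress controller to apply the BackendConfig to port 80
    cloud.google.com/backend-config: '{"ports": {"80":"zabbix-web-backendconfig"}}'
spec:
  type: NodePort
  selector:
    name: zabbix-web
  ports:
    - port: 80
      targetPort: 80

The existing Ingress from the question can stay as it is; the policy is enforced on the load balancer backend that the Ingress creates for this Service.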

AFAIK, you can't restrict IP addresses through GLBC or on GCP L7 Load Balancer itself. Note that GLBC is also a work in progress as of this writing.
ingress.kubernetes.io/whitelist-source-range works great, but only when you are using something like an nginx ingress controller, because nginx itself can restrict IP addresses.
The general way to restrict/whitelist IP addresses is with VPC firewall rules (which it seems you are doing already). Essentially, you restrict/whitelist which IP addresses can reach the network your K8s nodes are running on.
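For illustration, a sketch of what that looks like with the nginx ingress controller applied to the question's manifest (annotation prefixes vary by controller version; newer releases use nginx.ingress.kubernetes.io/, and the IP ranges below are just examples):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zabbix-web-ingress
  namespace: zabbix-prod
  annotations:
    kubernetes.io/ingress.class: nginx
    # only these source ranges are allowed; everyone else gets a 403
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.10/32,198.51.100.0/24"
spec:
  tls:
    - secretName: <tls-cert>
  backend:
    serviceName: zabbix-web-service
    servicePort: 80

Note this only helps because nginx itself enforces the restriction; the GCE/GLBC ingress class ignores this annotation.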

One of the best options to accomplish your goal is firewall rules, since you can't restrict IP addresses through the global LB or on the GCP L7 LB itself. However, if you are using an Ingress on your Kubernetes cluster, it is also possible to restrict access to your application based on dedicated IP addresses.
One possible use case would be that you have a development setup and don’t want to make all the fancy new features available to everyone, especially competitors. In such cases, IP whitelisting to restrict access can be used.
This can be done with specifying the allowed client IP source ranges through the ingress.kubernetes.io/whitelist-source-range annotation.
The value is a comma-separated list of CIDR blocks.
For example:
10.0.0.0/24, 1.1.1.1/32.
Please get more information here.

For anyone who stumbles on this question via Google like I did, there is now a solution. You can implement this via a BackendConfig from the cloud.google.com Kubernetes API in conjunction with a GCE Cloud Armor policy.
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-armor-backendconfig

Related

How can I resolve connection issues between my frontend and backend pods in Minikube?

I am trying to deploy an application to Minikube. However I am having issues connecting the frontend pod to the backend pod.
Each Deployment has a ClusterIP service and a NodePort service.
I access the frontend via browser, executing the command: minikube service frontend-entrypoint. When the frontend tries to query the backend it requests the URL: http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial, but the status response is: (failed)net::ERR_EMPTY_RESPONSE.
If I access the frontend via cmd, executing the command: kubectl exec -it react-deployment-xxxxxxxxxx-xxxxx -- sh, and execute inside it the command: curl -X GET "http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial" I get what I expect.
So, I understand that NodePorts are used to route external traffic to services inside the cluster by opening a specific port on each node in the cluster and forwarding traffic from that port to the service, and that ClusterIPs, on the other hand, are used to expose services only within the cluster and are not directly accessible from outside the cluster. What I don't understand is why when reaching the frontend via browser, the same is not able to connect internally to the backend? Once playing with the frontend I consider I am inside the cluster...
I tried to expose the cluster using other services such as Ingress or LoadBalancer, but I didn't have success connecting to the frontend, so I rollback to the NodePort solution.
References:
Kubernetes Guide - Deploying a machine learning app built with Django, React and PostgreSQL using Kubernetes
Exposing External-Facing Services In Kubernetes
Files:
component_backend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend-entrypoint
spec:
  selector:
    component: fastapi
  ports:
    - name: http2
      port: 8000
      targetPort: 8000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: fastapi
  template:
    metadata:
      labels:
        component: fastapi
    spec:
      containers:
        - name: fastapi-container
          image: xxx/yyy:zzz
          ports:
            - containerPort: 8000
          env:
            - name: DB_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: app-variables
                  key: DB_USERNAME
            [...]
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
    - port: 8000
      targetPort: 8000
  externalIPs:
    - <minikube ip>
componente_frontend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend-entrypoint
spec:
  selector:
    component: react
  ports:
    - name: http1
      port: 3000
      targetPort: 3000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react
  template:
    metadata:
      labels:
        app: react
    spec:
      containers:
        - name: react-container
          image: xxx/yyy:zzz
          ports:
            - containerPort: 3000
          env:
            - name: BASELINE_API_URL
              valueFrom:
                configMapKeyRef:
                  name: app-variables
                  key: BASELINE_API_URL
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: react-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: react
  ports:
    - port: 3000
      targetPort: 3000
  externalIPs:
    - <minikube ip>
BASELINE_API_URL is declared with the backend ClusterIP service name (i.e., fastapi-cluster-ip-service).
ingress_service.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: sfs.baseline
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-entrypoint
                port:
                  name: http1
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-entrypoint
                port:
                  name: http2
You have to think about this: if you call the API through your browser or Postman from your host, they don't know anything about the URL inside your pod.
For production an ingress configuration is needed, so you can call the API like:
https://api.mydomain.com/api/myroute
When you deploy the frontend, the paths inside your code should be created dynamically.
Inside your code, define the paths with environment variables and use those variables.
On Kubernetes, define a ConfigMap with the paths and bind it to your container (a minimal sketch follows below).
So when you call the frontend it will have the right paths.
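A minimal sketch of that idea, reusing the app-variables ConfigMap name from the question (the URL value is an assumption; point it at whatever host/path your ingress actually exposes):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-variables
data:
  # external URL the browser can actually reach, e.g. the ingress host plus the /api path
  BASELINE_API_URL: "http://mydomain.local/api"

The react container then keeps reading BASELINE_API_URL via configMapKeyRef exactly as in componente_frontend.yaml, and the frontend code builds its API calls from that variable instead of a hard-coded cluster-internal service name.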
Your frontend can be reached locally with the IP address of your master node and the NodePort.
To avoid using the IP address, you can create entries in your local hosts file:
nodesipaddress mydomain.local
nodesipaddress api.mydomain.local
So from your browser you can reach the frontend with mydomain.local:nodeportOfFrontend,
and your frontend code should call the backend with api.mydomain.local:nodeportOfApi.
If you enable Ingress inside your cluster and create an ingress resource in your deployment plus a Service of type LoadBalancer, then you can call the frontend and the API without the NodePort.
If you run into issues with that, please post all your Kubernetes YAMLs: Deployments, Services, ConfigMap, and the Ingress if you decide to use it.
UPDATE
Check if you have ingress enabled on minikube
Modify your ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: mydomain.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              # backend completed for illustration, following the question's ingress_service.yaml
              service:
                name: frontend-entrypoint
                port:
                  name: http1
I'm not sure if Minikube supports ClusterIP for external access.
Change the type to LoadBalancer in your service:
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
    - port: 8000
      targetPort: 8000
If you verify your service:
kubectl get svc -o wide
kubectl describe services my-service
Check whether the external IP shows as pending; if so, add externalIPs:
ports:
  - port: 8000
    targetPort: 8000
externalIPs:
  - xxx.xxx.xxx.xxx # your minikube ip
However, I would first try to create a Service of type NodePort and then access it with xxx.xxx.xxx.xxx:NodePort.
In the YAMLs you posted I see only the backend.
Keep in mind that whether you use Ingress or NodePort, your code must be adapted:
your frontend must call the API via the API ingress domain (mydomain.local/api) or via xxx.xxx.xxx.xxx:NodePort.
I think you are not understanding how the frontend part of a website works.
To display the website in your browser, you actually download all the required content from the server/pod where you host the frontend.
Once the frontend is downloaded and displayed in your browser, you then try to hit the backend directly from your browser/computer using either an IP address or a URL.
When using a URL, your browser will try to resolve the hostname using a DNS server.
So when you read a website from your browser you are not in the server/pod, and you cannot resolve the URL because that URL is not mapped to any IP address (or at least not to your server).
That is also why it works when you go inside the pod using kubectl exec: you are inside the network and you are using the internal DNS.
As David said, to make this work you need to call the backend from the frontend using some kind of ingress.
You will also need to create a DNS entry on your domain (if using a URL).
If not, you can use the IP of the pod/service directly.

How to get kubernetes service external ip dynamically inside manifests file?

We are creating a deployment in which the command needs the IP of the pre-existing service pointing to a statefulset. Below is the manifest file for the deployment. Currently, we are manually entering the service external IP inside this deployment manifest. Now we would like it to auto-populate during runtime. Is there a way to achieve this dynamically using environment variables or another way?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-api
  namespace: app-api
spec:
  selector:
    matchLabels:
      app: app-api
  replicas: 1
  template:
    metadata:
      labels:
        app: app-api
    spec:
      containers:
        - name: app-api
          image: asia-south2-docker.pkg.dev/rnd20/app-api/api:09
          command: ["java","-jar","-Dallow.only.apigateway.request=false","-Dserver.port=8084","-Ddedupe.searcher.url=http://10.10.0.6:80","-Dspring.cloud.zookeeper.connect-string=10.10.0.6:2181","-Dlogging$.file.path=/usr/src/app/logs/springboot","/usr/src/app/app_api/dedupe-engine-components.jar",">","/usr/src/app/out.log"]
          livenessProbe:
            httpGet:
              path: /health
              port: 8084
              httpHeaders:
                - name: Custom-Header
                  value: ""
            initialDelaySeconds: 60
            periodSeconds: 60
          ports:
            - containerPort: 4016
          resources:
            limits:
              cpu: 1
              memory: "2Gi"
            requests:
              cpu: 1
              memory: "2Gi"
NOTE: The IP in question here is the Internal load balancer IP, i.e. the external IP for the service and the service is in a different namespace. Below is the manifest for the same
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: app
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: app
spec:
  selector:
    app: app
  type: LoadBalancer
  ports:
    - name: container
      port: 80
      targetPort: 8080
      protocol: TCP
You could use the following command instead:
command:
  - /bin/bash
  - -c
  - |-
    set -exuo pipefail
    ip=$(dig +search +short servicename.namespacename)
    exec java -jar -Dallow.only.apigateway.request=false -Dserver.port=8084 -Ddedupe.searcher.url=http://$ip:80 -Dspring.cloud.zookeeper.connect-string=$ip:2181 -Dlogging$.file.path=/usr/src/app/logs/springboot /usr/src/app/app_api/dedupe-engine-components.jar > /usr/src/app/out.log
It first resolves the IP address using dig (if you don't have dig in your image, you need to substitute something else you do have), then execs your original java command.
As of today I'm not aware of any "native" kubernetes way to provide IP meta information directly to the pod.
If you are sure they exist before, and you deploy into the same namespace, you can read them from environment variables. It's documented here: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables.
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It adds {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see makeLinkVariables) that are compatible with Docker Engine's "legacy container links" feature.
For example, the Service redis-master which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11, produces the following environment variables:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
Note that those won't update after the container is started.
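If the Service and the Deployment did live in the same namespace (they don't in the question, so this is only illustrative), the command could consume the generated variables directly, for example:

command:
  - /bin/bash
  - -c
  - |-
    # APP_SERVICE_HOST / APP_SERVICE_PORT are injected by the kubelet for a Service named "app"
    # note: this is the ClusterIP, not the internal load balancer IP
    exec java -jar -Dserver.port=8084 \
      -Ddedupe.searcher.url=http://${APP_SERVICE_HOST}:${APP_SERVICE_PORT} \
      /usr/src/app/app_api/dedupe-engine-components.jar

For the cross-namespace case in the question, the dig-based approach above (or using the DNS name app.app.svc.cluster.local directly, where the consumer accepts hostnames) remains the practical option.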

How do I create an internal gateway using Istio?

Currently, we successfully setup Istio to create a couple ingress-gateways like api.example.com and app.example.com, that route traffic to a variety of services with destination rules, etc. In addition to this, we would love to use Istio's features for internal-only APIs, but we are unsure of how to set something like this up. Is it possible to use Istio's Gateway and VirtualServices CRDs to route traffic without exiting the cluster? If so, how would we go about setting that up?
Istio gateways are for traffic coming into the cluster or traffic leaving the cluster. For traffic inside the cluster you should not use ingress/egress gateways. If you have configured Istio in the cluster to create a service mesh, then you already get all of these benefits, because Istio injects a sidecar Envoy for every service inside the cluster; it is these sidecars that provide the benefits of the mesh. When you use an ingress or egress gateway, you are actually using the sidecar deployed as an ingress or egress gateway. You can use virtual services, destination rules, service entries, sidecars, etc. without a gateway, because at that point you are using the sidecars deployed alongside your application pods.
Here is an example of virtual service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - match:
        - headers:
            end-user:
              exact: jason
      route:
        - destination:
            host: reviews
            subset: v2
    - route:
        - destination:
            host: reviews
            subset: v3
The virtual service hostname can be an IP address, a DNS name, or, depending on the platform, a short name (such as a Kubernetes service short name) that resolves, implicitly or explicitly, to a fully qualified domain name (FQDN). You can also use wildcard (”*”) prefixes, letting you create a single set of routing rules for all matching services. Virtual service hosts don’t actually have to be part of the Istio service registry, they are simply virtual destinations. This lets you model traffic for virtual hosts that don’t have routable entries inside the mesh.
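For completeness, the v2/v3 subsets referenced above have to be defined in a DestinationRule; a minimal sketch (the version labels follow the usual bookinfo convention and are assumptions here):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v2
      labels:
        version: v2
    - name: v3
      labels:
        version: v3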
I would add some things to Arghya Sadhu's answer.
I think my example in another post answers your question, specifically the virtual service gateways and hosts. That example needs an additional DestinationRule, since we have subsets that mark the route to the proper nginx subset, and subsets are defined in a DestinationRule.
So, as an example, I would call something like internal-gateway/a or internal-gateway/b, and they would get routed to services A or B
I made something like that:
2 nginx pods -> 2 services -> virtual service
Deployment1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
        - name: nginx1
          image: nginx
          ports:
            - containerPort: 80
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
Deployment2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      run: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx2
        app: frontend2
    spec:
      containers:
        - name: nginx2
          image: nginx
          ports:
            - containerPort: 80
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
Service1
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: frontend
Service2
apiVersion: v1
kind: Service
metadata:
  name: nginx2
  labels:
    app: frontend2
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: frontend2
Virtual Service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  hosts:
    - nginx.default.svc.cluster.local
    - nginx2.default.svc.cluster.local
  http:
    - name: a
      match:
        - uri:
            prefix: /a
      rewrite:
        uri: /
      route:
        - destination:
            host: nginx.default.svc.cluster.local
            port:
              number: 80
    - name: b
      match:
        - uri:
            prefix: /b
      rewrite:
        uri: /
      route:
        - destination:
            host: nginx2.default.svc.cluster.local
            port:
              number: 80
The virtual service above works only internally, on the mesh gateway.
You have 2 matches for the 2 nginx services.
root@ubu1:/# curl nginx/a
Hello nginx1
root@ubu1:/# curl nginx/b
Hello nginx2
I would recommend checking the Istio documentation and reading about:
Gateways
Virtual Services
Destination Rules
And the Istio examples:
bookinfo
httpbin
So I can make up a DNS name or IP address that doesn't really exist
I think you misunderstood: it must exist, just not in the mesh. For example, a database that is not in the mesh can still be used; you can use a ServiceEntry to connect it to the mesh.
There is an example with wikipedia in the Istio documentation, as well as the whole external services documentation.
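As a rough sketch of such a ServiceEntry (modelled on the wikipedia example from the Istio docs; adjust the host and port to your external service):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: wikipedia-ext
spec:
  hosts:
    - www.wikipedia.org
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: https
      protocol: TLS
  resolution: DNS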
I hope this helps. Let me know if you have any more questions.

Health Checks in GKE in GCloud resets after I change it from HTTP to TCP

I'm working on a Kubernetes cluster where I am directing traffic from a GCloud Ingress to my Services. One of the service's endpoints fails the health check as HTTP but passes it as TCP.
When I change the health check options inside GCloud to be TCP, the health checks pass, and my endpoint works, but after a few minutes, the health check on GCloud resets for that port back to HTTP and health checks fail again, giving me a 502 response on my endpoint.
I don't know if it's a bug inside Google Cloud or something I'm doing wrong in Kubernetes. I have pasted my YAML configuration here:
namespace
apiVersion: v1
kind: Namespace
metadata:
  name: parity
  labels:
    name: parity
storageclass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: classic-ssd
  namespace: parity
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zones: us-central1-a
reclaimPolicy: Retain
secret
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: ingress-nginx
data:
  tls.crt: ./config/redacted.crt
  tls.key: ./config/redacted.key
statefulset
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: parity
  namespace: parity
  labels:
    app: parity
spec:
  replicas: 3
  selector:
    matchLabels:
      app: parity
  serviceName: parity
  template:
    metadata:
      name: parity
      labels:
        app: parity
    spec:
      containers:
        - name: parity
          image: "etccoop/parity:latest"
          imagePullPolicy: Always
          args:
            - "--chain=classic"
            - "--jsonrpc-port=8545"
            - "--jsonrpc-interface=0.0.0.0"
            - "--jsonrpc-apis=web3,eth,net"
            - "--jsonrpc-hosts=all"
          ports:
            - containerPort: 8545
              protocol: TCP
              name: rpc-port
            - containerPort: 443
              protocol: TCP
              name: https
          readinessProbe:
            tcpSocket:
              port: 8545
            initialDelaySeconds: 650
          livenessProbe:
            tcpSocket:
              port: 8545
            initialDelaySeconds: 650
          volumeMounts:
            - name: parity-config
              mountPath: /parity-config
              readOnly: true
            - name: parity-data
              mountPath: /parity-data
      volumes:
        - name: parity-config
          secret:
            secretName: parity-config
  volumeClaimTemplates:
    - metadata:
        name: parity-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "classic-ssd"
        resources:
          requests:
            storage: 50Gi
service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: parity
  name: parity
  namespace: parity
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  selector:
    app: parity
  ports:
    - name: default
      protocol: TCP
      port: 80
      targetPort: 80
    - name: rpc-endpoint
      port: 8545
      protocol: TCP
      targetPort: 8545
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
  type: LoadBalancer
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-parity
  namespace: parity
  annotations:
    #nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.global-static-ip-name: cluster-1
spec:
  tls:
    - secretName: tls-classic
      hosts:
        - www.redacted.com
  rules:
    - host: www.redacted.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 8080
          - path: /rpc
            backend:
              serviceName: parity
              servicePort: 8545
Issue
I've redacted hostnames and such, but this is my basic configuration. I've also run a hello-app container from this documentation here for debugging: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
Which is what the endpoint for ingress on / points to on port 8080 for the hello-app service. That works fine and isn't the issue, but just mentioned here for clarification.
So, the issue here is that, after creating my cluster with GKE and my ingress LoadBalancer on Google Cloud (the cluster-1 global static ip name in the Ingress file), and then creating the Kubernetes configuration in the files above, the Health-Check fails for the /rpc endpoint on Google Cloud when I go to Google Compute Engine -> Health Check -> Specific Health-Check for the /rpc endpoint.
When I edit that Health-Check to not use HTTP Protocol and instead use TCP Protocol, health-checks pass for the /rpc endpoint and I can curl it just fine after and it returns me the correct response.
The issue is that a few minutes after that, the same Health-Check goes back to HTTP protocol even though I edited it to be TCP, and then the health-checks fail and I get a 502 response when I curl it again.
I am not sure if there's a way to attach the Google Cloud Health Check configuration to my Kubernetes Ingress prior to creating the Ingress in kubernetes. Also not sure why it's being reset, can't tell if it's a bug on Google Cloud or something I'm doing wrong in Kubernetes. If you notice on my statefulset deployment, I have specified livenessProbe and readinessProbe to use TCP to check the port 8545.
The delay of 650 seconds was due to this ticket issue here which was solved by increasing the delay to greater than 600 seconds (to avoid mentioned race conditions): https://github.com/kubernetes/ingress-gce/issues/34
I really am not sure why the Google Cloud health-check is resetting back to HTTP after I've specified it to be TCP. Any help would be appreciated.
I found a solution: I added a new container to my StatefulSet that serves a /healthz endpoint, and configured the ingress health check to hit that endpoint on the 8080 port assigned by Kubernetes as an HTTP health check, which made it work.
It's not immediately obvious why the check resets to HTTP after being set to TCP; most likely the GCE ingress controller periodically reconciles the health checks it created, so edits made by hand in the console get overwritten.
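A rough sketch of that workaround (the sidecar image and its /healthz handler are assumptions; the point is that the GCE ingress controller derives its HTTP health check from the Pod's readinessProbe, so giving it an HTTP endpoint keeps it from reverting):

# extra container added to the StatefulSet pod spec
- name: healthz
  image: <some-image-serving-healthz-on-8080>
  ports:
    - containerPort: 8080
      name: healthz
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 10

The Service and Ingress then point the health-checked backend at that port instead of the raw RPC port.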

Kubernetes node port can't expose successfully

I installed a Kubernetes cluster on my 3 VirtualBox VMs. All 3 VMs run Ubuntu 14.04 with ufw disabled. The Kubernetes version is 1.6. Here are my config files for creating the pod and service.
Pod pod.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      imagePullSecrets:
        - name: regsecret
      containers:
        - name: frontend
          image: hub.allinmoney.com/kubeguide/guestbook-php-frontend
          env:
            - name: GET_HOSTS_FROM
              value: env
          ports:
            - containerPort: 80
Service service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 31000
      nodePort: 31000
  selector:
    name: frontend
I create the service with type NodePort. When I run kubectl create -f service.yaml, it outputs the message below, but I can't find the exposed port 31000 on any of the kube nodes:
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:31000) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
Could anyone tell me how to solve this or give me any tips?
As it says in the error message you need to set up firewall rules for your nodes to accept traffic on the node ports (default: 30000-32767).
Firewall rule example
Name: [firewall-rule-name]
Targets: [node-target-name, node-target2-name]
Source filters: IP ranges: 0.0.0.0/0
Protocols / ports: tcp:80,443,30000-32767
Action: Allow
Priority: 1000
Network: default
Your targetPort is also incorrect; it needs to point to the corresponding port in the Pod (port 80).
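In other words, the Service would look roughly like this (same NodePort, but targetPort now matching the container's port 80):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 31000
  selector:
    name: frontend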