Can't resolve 'kubernetes' by skydns service in Kubernetes

core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes
Server:    10.100.0.10
Address 1: 10.100.0.10
nslookup: can't resolve 'kubernetes'
core@core-1-94 ~ $ kubectl get svc --namespace=kube-system
NAME       LABELS                                                                            SELECTOR           IP(S)            PORT(S)
kube-dns   k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS   k8s-app=kube-dns   10.100.0.10      53/UDP
                                                                                                                                53/TCP
kube-ui    k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI     k8s-app=kube-ui    10.100.110.236   80/TCP
core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.100.0.10:53
Server:    10.100.0.10
Address 1: 10.100.0.10
nslookup: can't resolve 'kubernetes'
core@core-1-94 ~ $ kubectl get endpoints --namespace=kube-system
NAME       ENDPOINTS
kube-dns   10.244.31.16:53,10.244.31.16:53
kube-ui    10.244.3.2:8080
core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.244.31.16:53
Server:    10.244.31.16
Address 1: 10.244.31.16
Name:      kubernetes
Address 1: 10.100.0.1
I think the kube-dns service is not available.
The skydns-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Who can help?

For DNS to work, the kubelet needs to be passed the flags --cluster_dns= and --cluster_domain=cluster.local at startup. These flags aren't included in the set of flags currently passed to the kubelet, so the kubelet won't try to contact the DNS pod you've created for name resolution. To fix this, modify the startup script to add these two flags to the kubelet. Then, when you create the DNS service, make sure the portalIP field of the service spec is set to the same IP address you passed to --cluster_dns.
For any other information, you can look it up in the documentation.
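A sketch of that wiring, using the clusterIP from the skydns-svc.yaml above; the exact kubelet invocation varies by setup, so treat this as illustrative only:

```shell
# Illustrative kubelet flags -- the DNS IP here must match the Service's
# clusterIP (portalIP in older API versions), 10.100.0.10 in this question
kubelet --cluster_dns=10.100.0.10 --cluster_domain=cluster.local ...
```

If the two IPs disagree, pods get a resolver address in /etc/resolv.conf that no DNS service is actually listening on, which produces exactly the "can't resolve" symptom shown above.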

Related

How to get apiserver endpoint URL in helm template?

As per the Helm docs, the lookup function can be used to look up resources in a running cluster.
Is there any way to get the api server endpoint URL using that function?
So far I was able to get the endpoint in two ways.
kubectl describe svc kubernetes -n default
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.0.1
IPs: 10.43.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 10.0.50.111:6443
Session Affinity: None
Events: <none>
kubectl config view -o jsonpath="{.clusters[?(@.name==\"joseph-rancher-cluster-2\")].cluster.server}"
https://api.joseph-rancher-cluster-2.rancher.aveshalabs.io
But I'm having trouble using them with lookup. Thanks in advance.
Update:
I tried to extract the ip & https port from the Endpoints resource on the running cluster.
kubectl get ep -n default -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Endpoints
  metadata:
    creationTimestamp: "2023-01-04T05:08:43Z"
    labels:
      endpointslice.kubernetes.io/skip-mirror: "true"
    name: kubernetes
    namespace: default
    resourceVersion: "208"
    uid: db5e0476-9169-41cf-bd00-f6f52162c0ef
  subsets:
  - addresses:
    - ip: 10.0.50.111
    ports:
    - name: https
      port: 6443
      protocol: TCP
kind: List
metadata:
  resourceVersion: ""
But the problem is, it returns a private IP, which is unusable in the case of cloud clusters. What I need is this:
kubectl cluster-info
Kubernetes control plane is running at https://api.joseph-rancher-cluster-2.rancher.aveshalabs.io
CoreDNS is running at https://api.joseph-rancher-cluster-2.rancher.aveshalabs.io/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
This can be extracted from the kubeconfig file as well. So is there any template function I can use to get the API server endpoint?
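For reference, a hedged sketch of the lookup-based approach the question describes; note that it can only yield the in-cluster Endpoints address (the same private IP shown above), not the public URL from the kubeconfig:

```yaml
# Illustrative Helm template snippet, not a complete chart
{{- $ep := lookup "v1" "Endpoints" "default" "kubernetes" }}
{{- if $ep }}
{{- $subset := index $ep.subsets 0 }}
apiServerInternal: "https://{{ (first $subset.addresses).ip }}:{{ (first $subset.ports).port }}"
{{- end }}
```

The lookup function also returns an empty map during helm template and helm install --dry-run, so the if guard is needed.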

Kubernetes: not able to communicate between two services (different pods, same namespace)

I am not able to communicate between two services.
post-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector
        tier: backend
    spec:
      containers:
      - name: python-web-pod
        image: sakshiarora2012/python-backend:v10
        ports:
        - containerPort: 5000
post-deployment2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment2
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector2
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector2
        tier: backend
    spec:
      containers:
      - name: python-web-pod2
        image: sakshiarora2012/python-backend:v8
        ports:
        - containerPort: 5000
post-service.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  selector:
    app: python-web-selector
    tier: backend
  ports:
  - port: 5000
    nodePort: 30400
  type: NodePort
post-service2.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service2
spec:
  selector:
    app: python-web-selector2
    tier: backend
  ports:
  - port: 5000
  type: ClusterIP
When I try to ping from one container to another, it fails:
root@python-data-deployment-7bd65dc685-htxmj:/project# ping python-data-service.default.svc.cluster.local
PING python-data-service.default.svc.cluster.local (10.107.11.236) 56(84) bytes of data.
^C
--- python-data-service.default.svc.cluster.local ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 139ms
If I look at the DNS entries, they show:
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service.default.svc.cluster.local
Address: 10.107.11.236
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service2
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service2.default.svc.cluster.local
Address: 10.103.97.40
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dnsutils 1/1 Running 0 5m54s 172.17.0.9 minikube <none> <none>
python-data-deployment-7bd65dc685-htxmj 1/1 Running 0 47m 172.17.0.6 minikube <none> <none>
python-data-deployment2-764744b97d-mc9gm 1/1 Running 0 43m 172.17.0.8 minikube <none> <none>
python-db-deployment-d54f6b657-rfs2b 1/1 Running 0 44h 172.17.0.7 minikube <none> <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service
Name: python-data-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service","namespace":"default"},"spec":{"ports":[{"no...
Selector: app=python-web-selector,tier=backend
Type: NodePort
IP: 10.107.11.236
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30400/TCP
Endpoints: 172.17.0.6:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service2
Name: python-data-service2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service2","namespace":"default"},"spec":{"ports":[{"p...
Selector: app=python-web-selector2,tier=backend
Type: ClusterIP
IP: 10.103.97.40
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
Endpoints: 172.17.0.8:5000
Session Affinity: None
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
I think it would work if the DNS entry showed an IP in the 172.17.0.x range, but I'm not sure why that isn't what shows up in the DNS entry. Any pointers?
If you want to access python-data-service from outside the cluster using the NodePort, and you are using minikube, you should be able to do so with curl $(minikube service python-data-service --url) from anywhere outside the cluster, i.e. from your system.
If you want to communicate between two microservices within the cluster, simply use a ClusterIP-type Service instead of a NodePort-type one.
To identify whether it's a service issue or a pod issue, use the pod IP directly in the curl command. From the output of kubectl describe svc python-data-service, the pod IP behind python-data-service is 172.17.0.6, so try curl 172.17.0.6:5000/getdata
In order to start debugging your services I would suggest the following steps:
Check that your service 1 is accessible as a Pod:
kubectl run test1 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://172.17.0.6:5000
Check that your service 2 is accessible as a Pod:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 172.17.0.8:5000
Then, check that your service 1 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.107.11.236:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service:5000
Then, check that your service 2 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.103.97.40:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service2:5000
Then, if needed, check that your service 1 is accessible through its node port (you would need to know the IP address of the node where the service has been exposed; for instance, on minikube something like this should work:)
wget -O - http://192.168.99.101:30400
From your Service manifests, I can recommend as a good practice to specify both port and targetPort, as you can see at
https://canterafonseca.eu/kubernetes/certification/application/developer/cncf/k8s/cloud/native/computing/ckad/deployments/services/preparation-k8s-ckad-exam-part4-services.html#-services
On the other hand if you only need to expose to the outside world one of the services you can create a headless service (see also my blog post above).
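As an illustration of that practice, here is the first Service with both fields spelled out (values taken from the manifests in the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  selector:
    app: python-web-selector
    tier: backend
  ports:
  - port: 5000        # port the Service listens on
    targetPort: 5000  # containerPort of the backing pods
    nodePort: 30400
  type: NodePort
```

When targetPort is omitted it defaults to the value of port, which happens to be correct here, but spelling it out makes the intent explicit and survives container port changes.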
Ping doesn't work on a Service's ClusterIP address because it is a virtual address, created by iptables rules that redirect packets to the endpoints (pods).
You should be able to ping a pod, but not a service.
Use curl or wget instead, for example:
wget -qO- POD_IP:80
wget -qO- http://your-service-name:port/yourpath
curl POD_IP:port_number
Are you able to connect to your pods at all? Try kubectl port-forward to see if you can connect, and then check the connectivity between the two pods.
Finally, check whether a default-deny network policy is set there; maybe you have some restrictions at the network level:
kubectl get networkpolicy -n <namespace>
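For context, a default-deny policy of roughly this shape (illustrative, not taken from the question) in the default namespace would block pod-to-pod traffic and produce exactly these symptoms:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress         # no ingress rules listed, so all inbound traffic is denied
```

If kubectl get networkpolicy returns anything like this, the fix is either to delete it or to add an allow rule for the backend pods.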
Try to look into the logs using kubectl logs PODNAME so that you know what's happening. At first sight, I think you need to expose the ports of both services: kubectl port-forward yourService PORT:PORT.

Google Kubernetes Engine Ingress doesn't work

I created an Ingress on GKE following the 'Kubernetes in Action' book, but the Ingress doesn't work: it can't be accessed from the Ingress's public IP address.
Create the ReplicaSet to create the pods.
Create the Service (following the NodePort method from 'Kubernetes in Action').
Create the Ingress.
The ReplicaSet, Service, and Ingress are created successfully, the NodePort can be accessed from the public IP address, and there is no UNHEALTHY backend in the Ingress.
replicaset:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: sonyfaye/kubia
Service:
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
The nodeport itself can be accessed from public IP addresses.
C:\kube>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 8d
kubia-nodeport NodePort 10.59.253.10 <none> 80:30123/TCP 20h
C:\kube>kubectl get node
NAME STATUS ROLES AGE VERSION
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6
C:\kube>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6 10.140.0.17 35.201.224.238 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6 10.140.0.18 35.229.152.12 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6 10.140.0.16 34.80.225.64 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
C:\kube>curl http://34.80.225.64:30123
You've hit kubia-j2lnr
But the ingress can't be accessed from outside.
hosts file:
34.98.92.110 kubia.example.com
C:\kube>kubectl describe ingress
Name: kubia
Namespace: default
Address: 34.98.92.110
Default backend: default-http-backend:80 (10.56.0.7:8080)
Rules:
Host Path Backends
---- ---- --------
kubia.example.com
/ kubia-nodeport:80 (10.56.0.14:8080,10.56.1.6:8080,10.56.3.4:8080)
Annotations:
ingress.kubernetes.io/backends: {"k8s-be-30123--c4addd497b1e0a6d":"HEALTHY","k8s-be-30594--c4addd497b1e0a6d":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/target-proxy: k8s-tp-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/url-map: k8s-um-default-kubia--c4addd497b1e0a6d
Events:
<none>
C:\kube>curl http://kubia.example.com
curl: (7) Failed to connect to kubia.example.com port 80: Timed out
C:\kube>telnet kubia.example.com 80
Connecting To kubia.example.com...
C:\kube>telnet 34.98.92.110 80
Connecting To 34.98.92.110...Could not open connection to the host, on port 80: Connect failed
Tried from the intranet: curl 34.98.92.110 gets some result, and port 80 of 34.98.92.110 is accessible from the intranet.
C:\kube>kubectl exec -it kubia-lrt9x bash
root@kubia-lrt9x:/# curl http://kubia.example.com
curl: (6) Could not resolve host: kubia.example.com
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://10.56.0.7:8080
default backend - 404
Does anybody know how to debug this?
The NodePort has been added to the firewall; otherwise the NodePort would not be accessible. The Ingress IP doesn't seem to need to be added to the firewall.
Try to expose replicaset to be able to connect from the outside:
$ kubectl expose rs hello-world --type=NodePort --name=my-service
Remember to first delete the kubia-nodeport Service, remove the selector and the service section from the Ingress configuration file, and then apply the changes using the kubectl apply command.
You can find more information here: exposing-externalip.
Useful doc: kubectl-expose.

Kubernetes.default nslookup not able to resolve from different namespaces

I'm facing a problem resolving kubernetes.default.svc.cluster.local from outside the default namespace.
I'm running busybox:1.30 pods in each namespace, and the name resolves successfully from the default namespace only.
[admin@devsvr3 ~]$ kubectl exec -n default -ti busybox -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[admin@devsvr3 ~]$ kubectl exec -n namespace-dev -ti busybox -- nslookup kubernetes
Server: 10.96.0.10
Address: 10.96.0.10:53
** server can't find kubernetes.namespace-dev.svc.cluster.local: NXDOMAIN
*** Can't find kubernetes.svc.cluster.local: No answer
*** Can't find kubernetes.cluster.local: No answer
*** Can't find kubernetes.namespace-dev.svc.cluster.local: No answer
*** Can't find kubernetes.svc.cluster.local: No answer
*** Can't find kubernetes.cluster.local: No answer
[admin@devsvr3 ~]$
I'm running a CentOS 7 Kubernetes cluster in an air-gapped environment with the Weave Net CNI add-on, and this is my CoreDNS config:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-01-28T10:59:25Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "1177652"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: c6b5ddae-22eb-11e9-8689-0017a4770068
Following your steps, I indeed hit the same issue. But if you create the pod using this YAML, it works correctly. Changing the busybox image seems to lead to the error you describe; I'll try to find out why, but for now this is the solution.
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: namespace-dev
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
and then:
kubectl exec -ti -n=namespace-dev busybox -- nslookup kubernetes.default
it works as intended, as explained here.
/ # nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'kubernetes'
/ # nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
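This pattern is what the pod's DNS search path produces; inside a pod in namespace-dev, /etc/resolv.conf typically looks roughly like this (a sketch based on the cluster DNS IP shown above):

```
nameserver 10.96.0.10
search namespace-dev.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

A bare kubernetes is first tried as kubernetes.namespace-dev.svc.cluster.local (NXDOMAIN, since the Service lives in default), while kubernetes.default succeeds via the svc.cluster.local search suffix, which is why only the qualified name resolves outside the default namespace.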

Kubernetes ingress (hostNetwork=true), can't reach service by node IP - GCP

I am trying to expose a deployment using Ingress, where the DaemonSet has hostNetwork=true, which would allow me to skip the additional LoadBalancer layer and expose my service directly on the Kubernetes node's external IP. Unfortunately, I can't reach the Ingress controller from the external network.
I am running Kubernetes version 1.11.16-gke.2 on GCP.
I setup my fresh cluster like this:
gcloud container clusters get-credentials gcp-cluster
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
I run the deployment:
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
EOF
Then I create service:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-node
EOF
and ingress resource:
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-node-single-ingress
spec:
  backend:
    serviceName: hello-node
    servicePort: 80
EOF
I get the node external IP:
12:50 $ kubectl get nodes -o json | jq '.items[] | .status .addresses[] | select(.type=="ExternalIP") | .address'
"35.197.204.75"
Check if ingress is running:
12:50 $ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
hello-node-single-ingress * 35.197.204.75 80 8m
12:50 $ kubectl get pods --namespace ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-ingress-controller-7kqgz 1/1 Running 0 23m
ingress-nginx-ingress-default-backend-677b99f864-tg6db 1/1 Running 0 23m
12:50 $ kubectl get svc --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-ingress-controller ClusterIP 10.43.250.102 <none> 80/TCP,443/TCP 24m
ingress-nginx-ingress-default-backend ClusterIP 10.43.255.43 <none> 80/TCP 24m
Then trying to connect from the external network:
curl 35.197.204.75
Unfortunately, it times out.
On Kubernetes Github there is a page regarding ingress-nginx (host-netork: true) setup:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
which mentions:
"This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it."
I've tried to follow that and delete ingress-nginx services:
kubectl delete svc --namespace ingress-nginx ingress-nginx-ingress-controller ingress-nginx-ingress-default-backend
but this doesn't help.
Any ideas how to set up the Ingress on the node's external IP? What am I doing wrong? The amount of confusion around running Ingress reliably without an LB overwhelms me. Any help much appreciated!
EDIT:
When I create another service exposing my deployment via NodePort:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node2
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 2m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 6s
I still can't access my service e.g. using: curl 35.197.204.75:31151.
However when I create 3rd service with LoadBalancer type:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node3
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 7m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 4m
hello-node3 LoadBalancer 10.47.250.47 35.189.106.111 80:31367/TCP 56s
I can access my service using the external LB: 35.189.106.111 IP.
The problem was missing firewall rules on GCP.
Found the answer: https://stackoverflow.com/a/42040506/2263395
Running:
gcloud compute firewall-rules create myservice --allow tcp:80,tcp:30301
Where 80 is the ingress port and 30301 is the NodePort. In production you would probably use just the ingress port.