What does the default helm create chart do? - kubernetes-helm

Does the default helm chart actually run and do something I can observe?
I've tried running it (the default chart) and it does run; so, what does it do?
To recreate the problem I'm asking about, do the following:
helm create helm-it # Create a helm chart (the default)
helm install helm-it ./helm-it # Run it
helm list # See it running
helm get manifest helm-it # See the manifest (YAML) that is running (I think)
By examining the manifest (using helm get manifest helm-it), I can see how it's configured. The important bit is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helm-it
  labels:
    helm.sh/chart: helm-it-0.1.0
    app.kubernetes.io/name: helm-it
    app.kubernetes.io/instance: helm-it
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: helm-it
      app.kubernetes.io/instance: helm-it
  template:
    metadata:
      labels:
        app.kubernetes.io/name: helm-it
        app.kubernetes.io/instance: helm-it
    spec:
      serviceAccountName: helm-it
      securityContext:
        {}
      containers:
        - name: helm-it
          securityContext:
            {}
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
It seems to be running nginx on port 80, but when I try to access it using curl, I get an error (see output below).
curl http://localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
I looked for a solution
When looking for a solution, I started with the helm create documentation at https://helm.sh/docs/helm/helm_create/.
Searches for "does helm create run" brought up several links on how to create a helm chart in 5 minutes (too many to review), but the answer might be in one of those pages.
https://phoenixnap.com/kb/create-helm-chart was one of the results, but it did not answer my question.
Why I care
The reason I'm asking is that I'm trying to convert a k8s YAML file into a Helm chart, and I want to know what I'm starting with so I know what I can delete and what I need to add. I found this link:
How to convert k8s yaml to helm chart - and it said I could
just drop that file under templates/ and add a Chart.yaml.
I tried that, but it didn't work.
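For reference, a minimal Chart.yaml sketch looks like this (apiVersion: v2 is for Helm 3; the name, description, and version values below are placeholders):
apiVersion: v2
name: my-app
description: A chart wrapping an existing Kubernetes manifest
type: application
version: 0.1.0       # chart version
appVersion: "1.0.0"  # version of the application being deployed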

The templates created by the helm create command run NGINX as a stateless application. I found this in the book Learning Helm by Matt Butcher, Matt Farina, and Josh Dolitsky, on page 67 (available in O'Reilly online books and Google Books).
To access the NGINX application, you might need to forward data from your host to the K8S cluster.
When performing the helm install, it gives this output:
helm install myapp anvil
NAME: myapp
LAST DEPLOYED: Fri Apr 23 12:23:46 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=anvil,app.kubernetes.io/instance=myapp" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
The complicated-looking commands simply get the name of the pod and the port being used by NGINX, in order to build a command that forwards traffic from your localhost to the Kubernetes cluster. That command is:
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
Once port-forwarding is running, you can access http://localhost:8080/ and get a page saying NGINX is running. You'll know it's working if the port-forward command prints additional log lines. Mine displayed the following:
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
Handling connection for 8080
Each time you hit the URL, an additional logging line Handling connection for 8080 is displayed (and the web page displays the Welcome to NGINX page).
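As an alternative sketch (assuming the Service created by the chart is named helm-it, as in the example at the top, and exposes port 80), you can port-forward the Service directly instead of looking up the pod:
kubectl --namespace default port-forward svc/helm-it 8080:80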

Related

How to create ClusterPodMonitoring in GCP?

I'm trying to follow their docs and create this pod monitoring resource.
I apply it and see nothing in the metrics.
What am I doing wrong?
apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
  name: monitoring
spec:
  selector:
    matchLabels:
      app: blah
  namespaceSelector:
    any: true
  endpoints:
  - port: metrics
    interval: 30s
As mentioned in the official documentation:
The following manifest defines a PodMonitoring resource, prom-example, in the NAMESPACE_NAME namespace. The resource uses a Kubernetes label selector to find all pods in the namespace that have the label app with the value prom-example. The matching pods are scraped on a port named metrics, every 30 seconds, on the /metrics HTTP path.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: prom-example
spec:
  selector:
    matchLabels:
      app: prom-example
  endpoints:
  - port: metrics
    interval: 30s
To apply this resource, run the following command:
kubectl -n NAMESPACE_NAME apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.5.0/examples/pod-monitoring.yaml
Also check the document on Observing your GKE clusters.
UPDATE:
After applying the manifests, the managed collection will be running but no metrics will be generated. You must deploy a PodMonitoring resource that scrapes a valid metrics endpoint to see any data in the Query UI.
Check the logs by running the below commands:
kubectl logs -f -ngmp-system -lapp.kubernetes.io/part-of=gmp
kubectl logs -f -ngmp-system -lapp.kubernetes.io/name=collector -c prometheus
If you see any errors, follow this link to troubleshoot.
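For completeness, here is a sketch of what the target workload needs to look like for the selector and port to match (the image and port number are placeholders, and the container must actually serve Prometheus metrics on /metrics):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blah
spec:
  selector:
    matchLabels:
      app: blah
  template:
    metadata:
      labels:
        app: blah                       # must match the (Cluster)PodMonitoring selector
    spec:
      containers:
      - name: app
        image: example.com/app:latest   # placeholder; must expose Prometheus metrics
        ports:
        - name: metrics                 # the port name referenced by endpoints[].port
          containerPort: 8080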

Nginx Ingress Controller - Failed Calling Webhook

I set up a k8s cluster using kubeadm (v1.18) on an Ubuntu virtual machine.
Now I need to add an Ingress Controller. I decided on nginx (but I'm open to other solutions). I installed it according to the docs, section "bare-metal":
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.31.1/deploy/static/provider/baremetal/deploy.yaml
The installation seems fine to me:
kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-b8smg 0/1 Completed 0 8m21s
pod/ingress-nginx-admission-patch-6nbjb 0/1 Completed 1 8m21s
pod/ingress-nginx-controller-78f6c57f64-m89n8 1/1 Running 0 8m31s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.107.152.204 <none> 80:32367/TCP,443:31480/TCP 8m31s
service/ingress-nginx-controller-admission ClusterIP 10.110.191.169 <none> 443/TCP 8m31s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 8m31s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-78f6c57f64 1 1 1 8m31s
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 2s 8m31s
job.batch/ingress-nginx-admission-patch 1/1 3s 8m31s
However, when trying to apply a custom Ingress, I get the following error:
Error from server (InternalError): error when creating "yaml/xxx/xxx-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: Temporary Redirect
Any idea what could be wrong?
I suspected DNS, but other NodePort services are working as expected and DNS works within the cluster.
The only thing I can see is that I don't have a default-http-backend which is mentioned in the docs here. However, this seems normal in my case, according to this thread.
Last but not least, I also tried the installation with manifests (after removing the ingress-nginx namespace from the previous installation) and the installation via the Helm chart. It has the same result.
I'm pretty much a beginner on k8s and this is my playground-cluster. So I'm open to alternative solutions as well, as long as I don't need to set up the whole cluster from scratch.
Update:
With "applying custom Ingress", I mean:
kubectl apply -f <myIngress.yaml>
Content of myIngress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /someroute/fittingmyneeds
        pathType: Prefix
        backend:
          serviceName: some-service
          servicePort: 5000
Another option you have is to remove the Validating Webhook entirely:
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
I found I had to do that on another issue, but the workaround/solution works here as well.
This isn't the best answer; the best answer is to figure out why this doesn't work. But at some point, you live with workarounds.
I'm installing on Docker for Mac, so I used the cloud rather than baremetal version:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml
In my case I'd mixed the installations up.
I resolved the issue by executing the following steps:
$ kubectl get validatingwebhookconfigurations
I iterated through the list of configurations received from the above step and deleted each configuration using:
$ kubectl delete validatingwebhookconfigurations [configuration-name]
In my case I didn't need to delete the ValidatingWebhookConfiguration. The issue was that I was using a private cluster on GCP, version 1.17.14-gke.1600. If I understood it correctly, on a default Kubernetes installation the validating webhook API (which of course runs on the master node) is exposed on port 443. But on GCP they changed the port to 8443 for security reasons, because in order to bind port 443 the service would need root access on the node. Since they didn't want that, they changed it to 8443. Now, since a private cluster only has ports 80/443 externally allowed for ingress on the nodes (that is, all the nodes will only accept requests to these ports), when Kubernetes tries to validate your Ingress against validatingwebhook-address:8443 it fails - it would not fail if it ran on 443. This thread contains more detailed information.
So the current workaround for that, as recommended by Google itself (but very poorly documented) is adding a Firewall rule on GCP, that will allow inbound (Ingress) TCP requests to your master node at port 8443, so that the other nodes within the cluster can reach the master for validatingwebhook API running on it with that very port.
As to how to create the rule, this is how I did it:
Went to Firewall Rules and added a new one.
At the field Network I selected the VPC from which my cluster is.
Direction of traffic I set as Ingress
Action on match to Allow
Targets to Specified target tags
The Target tags can be found on the master node details in a property called Network tags. To find it, I opened a new window, went to my cluster node pools, found the master node pool. Then entered one of the nodes to look for the Virtual Machine details. There I found Network Tags. Copied its value and went back to the Firewall Rule form.
Pasted the copied network tag to the tag field
At Protocols and ports, checked the Specified protocols and ports
Then checked TCP and placed 8443
Saved the rule and applied the manifest again.
NOTE: Most threads out there will say it's the port 9443. It may work. But I first attempted 8443 since it was reported to work on this thread. It worked for me so I didn't even try 9443.
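If you prefer the CLI over the console, a sketch of the same rule with gcloud (the network, source range, and target tag are placeholders you look up as described above):
gcloud compute firewall-rules create allow-master-to-webhook \
  --network=<VPC_NAME> \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8443 \
  --source-ranges=<MASTER_CIDR> \
  --target-tags=<NODE_NETWORK_TAG>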
Might be because of a previous nginx-ingress-controller configuration.
You can try to run the following command -
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
I've solved this issue. The problem was that you are using Kubernetes version 1.18, but the ValidatingWebhookConfiguration in the current ingress-nginx manifests uses an older API version; see the doc:
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
Ensure that the Kubernetes cluster is at least as new as v1.16 (to use admissionregistration.k8s.io/v1), or v1.9 (to use admissionregistration.k8s.io/v1beta1).
And in the current YAML:
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1beta1
and in the rules:
apiVersions:
- v1beta1
So you need to change it to v1:
apiVersion: admissionregistration.k8s.io/v1
and add the rule v1:
apiVersions:
- v1beta1
- v1
After you change it and redeploy, your custom Ingress will deploy successfully.
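Note that admissionregistration.k8s.io/v1 also makes sideEffects and admissionReviewVersions required fields, so a sketch of the relevant part of the edited webhook (other fields unchanged from the original manifest) might look like this:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-nginx-admission
webhooks:
- name: validate.nginx.ingress.kubernetes.io
  admissionReviewVersions: ["v1", "v1beta1"]   # required in v1
  sideEffects: None                            # required in v1
  rules:
  - apiGroups: ["extensions", "networking.k8s.io"]
    apiVersions: ["v1beta1", "v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["ingresses"]
  failurePolicy: Fail
  clientConfig:
    service:
      namespace: ingress-nginx
      name: ingress-nginx-controller-admission
      path: /extensions/v1beta1/ingresses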
Finally, I managed to run Ingress Nginx properly by changing the way of installation. I still don't understand why the previous installation didn't work, but I'll share nevertheless the solution along with some more insights into the original problem.
Solution
Uninstall ingress-nginx: delete the ingress-nginx namespace. This does not remove the validating webhook configuration - delete that one manually. Then install MetalLB and install ingress-nginx again. I now used the version from the Helm stable repo. Now everything works as expected. Thanks to Long on the Kubernetes Slack channel!
Some more insights into the original problem
The YAMLs provided by the installation guide contain a ValidatingWebhookConfiguration:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
  namespace: ingress-nginx
webhooks:
- name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - extensions
    - networking.k8s.io
    apiVersions:
    - v1beta1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  failurePolicy: Fail
  clientConfig:
    service:
      namespace: ingress-nginx
      name: ingress-nginx-controller-admission
      path: /extensions/v1beta1/ingresses
Validation is performed whenever I create or update an Ingress (the content of my ingress.yaml doesn't matter). The validation fails because, when the service is called, the response is a Temporary Redirect. I don't know why.
The corresponding service is:
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
  - name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
The pod matching the selector comes from this deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=ingress-nginx/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          runAsUser: 101
          allowPrivilegeEscalation: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        - name: webhook
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
Something in this validation chain goes wrong. It would be interesting to know what and why, but I can continue working with my MetalLB solution. Note that this solution does not contain a validating webhook at all.
I am not sure if this helps this late, but could it be that your cluster was behind a proxy? In that case you have to have no_proxy configured correctly. Specifically, it has to include .svc,.cluster.local; otherwise validation webhook requests such as https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s will be routed via the proxy server (note the .svc in the URL).
I had exactly this issue, and adding .svc to the no_proxy variable helped. You can try this out quickly by modifying the /etc/kubernetes/manifests/kube-apiserver.yaml file, which will in turn automatically recreate your Kubernetes API server pod.
This is not just the case for ingress validation, but also for other things that might reference a URL in your cluster ending with .svc or .namespace.svc.cluster.local (e.g., see this bug).
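For illustration, this is roughly what the proxy-related environment of the API server container in /etc/kubernetes/manifests/kube-apiserver.yaml might look like after the change (a sketch; the proxy address and CIDRs are placeholders for your environment):
spec:
  containers:
  - name: kube-apiserver
    env:
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"   # placeholder proxy address
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"
    - name: NO_PROXY
      # .svc and .cluster.local keep in-cluster URLs (such as the admission webhook) off the proxy
      value: "127.0.0.1,localhost,.svc,.cluster.local,10.96.0.0/12"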
On a baremetal cluster, I disabled the admissionWebhooks during the Helm3 install:
kubectl create ns ingress-nginx
helm install [RELEASE_NAME] ingress-nginx/ingress-nginx -n ingress-nginx --set controller.admissionWebhooks.enabled=false
In my case, it was the AWS EKS module, which now comes with a hardened security group. But nginx-ingress requires the cluster to communicate with the ingress controller, so I had to whitelist the port below in the node security group:
node_security_group_additional_rules = {
  cluster_to_node = {
    description                   = "Cluster to ingress-nginx webhook"
    protocol                      = "-1"
    from_port                     = 8443
    to_port                       = 8443
    type                          = "ingress"
    source_cluster_security_group = true
  }
}
input_node_security_group_additional_rules
I had this error. Basically, I have a script that installs the nginx controller with Helm; the script then immediately installs an application that uses Ingress, also with Helm. That app install failed, just the Ingress part.
The solution was to wait 60s after the install of nginx, to give the admission webhook time to come up and be ready.
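Instead of a fixed sleep, a sketch of an explicit wait between the two installs (assuming the controller Deployment is named ingress-nginx-controller in the ingress-nginx namespace; the application release and chart names are placeholders):
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace
# block until the controller (and with it the admission webhook) is available
kubectl -n ingress-nginx wait --for=condition=Available deployment/ingress-nginx-controller --timeout=120s
helm install my-app ./my-app-chart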
If using Terraform and Helm, disable the validating webhook:
resource "helm_release" "nginx_ingress" {
...
set {
name = "controller.admissionWebhooks.enabled"
value = "false"
}
...
}
What worked for me was to increase the timeout while waiting for the ingress controller to come up.
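For example, a sketch of doing that with Helm (assuming the chart that creates the Ingress is installed via Helm; the release and chart names are placeholders):
helm install my-app ./my-app-chart --wait --timeout 5m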
I was bringing up a cluster with a known-good configuration, and another had been created just last week in essentially the same way. My error message was a little more specific about what failed in the webhook:
│ Error: Failed to create Ingress
'auth-system/alertmanager-oauth2-proxy'
because: Internal error occurred: failed calling webhook
"validate.nginx.ingress.kubernetes.io": Post
"https://nginx-nginx-ingress-controller-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s":
x509: certificate signed by unknown authority
It turns out that, of my many configs, one had a typo in the DNS names input to the nginx creation. So nginx thought it had one domain name, but it got a certificate for a slightly different DNS name, which caused the validating webhook to fail.
The solution was not to delete the hook, but to address the underlying config problem in nginx dns so that it matched its X.509 certificate domain.
Just use v1 instead of v1beta1 in deploy.yaml.
This is a solution for those using GKE cluster.
I tested two ways to fix this issue.
Terraform
GCP Console
Terraform
resource "google_compute_firewall" "validate-nginx" {
project = "${YOUR_PROJECT_ID}"
name = "access-master-to-validatenginx"
network = "${YOUR_NETWORK}"
allow {
protocol = "tcp"
ports = ["8443"]
}
target_tags = ["${NODE_NETWORK_TAG}"]
source_ranges = ["${CONTROL_PLANE_ADDRESS_RANGE}"]
}
GCP Console
To add a Terraform example for GCP, extending #mauricio's answer:
resource "google_container_cluster" "primary" {
...
}
resource "google_compute_firewall" "validate_nginx" {
project = local.project
name = "validate-nginx"
network = google_compute_network.vpc.name
allow {
protocol = "tcp"
ports = ["8443"]
}
direction = "INGRESS"
source_ranges = [google_container_cluster.primary.private_cluster_config[0].master_ipv4_cidr_block]
}

Docker Desktop + k8s plus https proxy multiple external ports to pods on http in deployment?

I'm trying to do a straightforward thing that I would think is simple. I need https://localhost:44301, https://localhost:5002, and https://localhost:5003 to be listened to in my k8s environment in Docker Desktop, be proxied using a pfx file/password that I specify, and have the traffic forwarded, by port, to pods listening on specific addresses (could be port 80, doesn't matter).
The documentation is mind-numbingly complex for what looks like it should be straightforward. I can get the pods running, I can use kubectl port-forward and they work fine, but I can't figure out how to get ingress working with HAProxy or nginx or anything else in a way that makes any sense.
Can someone do an ELI5 telling me how to turn this on? I'm on Windows 10 2004 with WSL2 and Docker experimental, so I should have access to the ingress stuff they reference in the docs and make clear as mud.
Thanks!
As discussed in the comments this is a community wiki answer:
I have managed to create Ingress resource in Kubernetes on Docker in Windows.
Steps to reproduce:
Enable Hyper-V
Install Docker for Windows and enable Kubernetes
Connect kubectl
Enable Ingress
Create deployment
Create service
Create ingress resource
Add host into local hosts file
Test
Enable Hyper-V
From Powershell with administrator access run below command:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
System could ask you to reboot your machine.
Install Docker for Windows and enable Kubernetes
Install Docker application with all the default options and enable Kubernetes
Connect kubectl
Install kubectl.
Enable Ingress
Run these commands:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
Edit: Make sure no other service is using port 80
Restart your machine. From a cmd prompt running as admin, do:
net stop http
Stop the listed services using services.msc
Use: netstat -a -n -o -b and check for other processes listening on port 80.
Create deployment
Below is a simple deployment with pods that will reply to requests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 2.0.0
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        version: 2.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
Apply it by running command:
$ kubectl apply -f file_name.yaml
Create service
For you to be able to communicate with the pods, you need to create a service.
Example below:
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 2.0.0
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 50001
Apply this service definition by running command:
$ kubectl apply -f file_name.yaml
Create Ingress resource
Below is a simple Ingress resource using the service created above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - host: kubernetes.docker.internal
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: http
Take a look at:
spec:
  rules:
  - host: hello-test.internal
hello-test.internal will be used as the hostname to connect to your pods.
Apply your Ingress resource by invoking command:
$ kubectl apply -f file_name.yaml
Add host into local hosts file
I found this Github link that will allow you to connect to your Ingress resource by hostname.
To achieve that add a line 127.0.0.1 hello-test.internal to your C:\Windows\System32\drivers\etc\hosts file and save it.
You will need Administrator privileges to do that.
Edit: The newest version of Docker Desktop for Windows already adds a hosts file entry:
127.0.0.1 kubernetes.docker.internal
Test
Display the information about Ingress resources by invoking command:
kubectl get ingress
It should show:
NAME HOSTS ADDRESS PORTS AGE
hello-ingress hello-test.internal localhost 80 6m2s
Now you can access your Ingress resource by opening your web browser and typing
http://kubernetes.docker.internal/
The browser should output:
Hello, world!
Version: 2.0.0
Hostname: hello-84d554cbdf-2lr76
Hostname: hello-84d554cbdf-2lr76 is the name of the pod that replied.
If this solution is not working, please check (with Administrator privileges) whether something else is using port 80, using the command:
netstat -a -n -o

Google Kubernetes Ingress health check always failing

I have configured a web application pod exposed via Apache on port 80. I'm unable to configure a Service + Ingress for accessing it from the internet. The issue is that the backend services always report as UNHEALTHY.
Pod Config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
        - containerPort: 80
          name: http-server
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 80
Ingress Config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000
This results in backend services reporting as UNHEALTHY.
The health check settings:
Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE
Additional information: I've tried a different approach of exposing the deployment as a load balancer with external IP and that works perfectly. When trying to use a NodePort + Ingress, this issue persists.
With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.
Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.
First, make sure your pod is serving traffic properly;
kubectl exec [pod_name] -- wget localhost:80
If the application has curl built in, you can use that instead of wget.
If the application has neither wget nor curl, skip to the next step.
Get the following output and keep track of it:
kubectl get po -l name=webapp -o wide
kubectl get svc webapp-service
You need to keep the service and pod clusterIPs
SSH to a node in your cluster and run sudo toolbox bash
Install curl:
apt-get install curl
Test the pods to make sure they are serving traffic within the cluster:
curl -I [pod_clusterIP]:80
This needs to return a 200 response
Test the service:
curl -I [service_clusterIP]:80
If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.
If the pod is working but the service is not, there is an issue with the routes in your iptables, which are managed by kube-proxy; that would be an issue with the cluster.
Finally, if both the pod and the service are working, there is an issue with the load balancer health checks, which is something Google needs to investigate.
As Patrick mentioned, the checks will be created automatically by GCP.
By default, GKE will use readinessProbe.httpGet.path for the health check.
But if there is no readinessProbe configured, then it will just use the root path /, which must return an HTTP 200 (OK) response (and that's not always the case, for example, if the app redirects to another path, then the GCP health check will fail).
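A quick fix, then, is to give the container a readinessProbe that points at a path that really returns 200, so the generated health check uses it. A minimal sketch (assuming /healthz is a placeholder for a path your Apache app actually serves with HTTP 200; adjust the path and port to your app):
readinessProbe:
  httpGet:
    path: /healthz   # placeholder; must return HTTP 200
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10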

Http 400 from envoy on BAN request

Istio Newbie here,
I’m doing my first tests with Istio (on version 1.3.0). Most things run nice without much effort.
What I'm having an issue with is a service that talks to Varnish to clean up the cache. This service makes an HTTP request to every pod behind a headless service, and it's failing with an HTTP 400 (Bad Request) error. The request uses the HTTP method "BAN", which I believe is the source of the problem, since other request methods reach Varnish without problems.
As a temporary workaround, I changed the port name from http to varnish and everything started working again.
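For context, Istio (in this version) infers the application protocol from the Service port name prefix, so a sketch of that workaround is just renaming the port so Envoy treats the traffic as raw TCP instead of HTTP (only the name changes; the rest of the Service stays the same):
ports:
- name: varnish        # was "http"; without an http prefix the traffic is not parsed as HTTP, so the BAN method passes through
  protocol: TCP
  port: 80
  targetPort: 80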
I installed istio using the helm chart for 1.3.0:
helm install istio install/kubernetes/helm/istio --set kiali.enabled=true --set global.proxy.accessLogFile="/dev/stdout" --namespace istio-system --version 1.3.0
Running on GKE 1.13.9-gke.3 and Varnish is version 6.2
I was able to get it working using Istio without mTLS using the following definitions:
ConfigMap
Just allowing the pod and service CIDRs for BAN requests and expecting them to come from the Varnish service FQDN.
apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish-configuration
data:
  default.vcl: |
    vcl 4.0;
    import std;

    backend default {
      .host = "varnish-service";
      .port = "80";
    }

    acl purge {
      "localhost";
      "10.x.0.0"/14; # Pod CIDR
      "10.x.0.0"/16; # Service CIDR
    }

    sub vcl_recv {
      # this code below allows PURGE from localhost and x.x.x.x
      if (req.method == "BAN") {
        if (!client.ip ~ purge) {
          return (synth(405, "Not allowed."));
        }
        return (purge);
      }
    }
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: varnish
spec:
  replicas: 1
  selector:
    matchLabels:
      app: varnish
  template:
    metadata:
      labels:
        app: varnish
    spec:
      containers:
      - name: varnish
        image: varnish:6.3
        ports:
        - containerPort: 80
          name: varnish-port
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: varnish-conf
          mountPath: /etc/varnish
      volumes:
      - name: varnish-conf
        configMap:
          name: varnish-configuration
Service
apiVersion: v1
kind: Service
metadata:
  name: varnish-service
  labels:
    workload: varnish
spec:
  selector:
    app: varnish
  ports:
  - name: varnish-port
    protocol: TCP
    port: 80
    targetPort: 80
After deploying these, you can run a curl-enabled pod:
kubectl run bb-$RANDOM --rm -i --image=yauritux/busybox-curl --restart=Never --tty -- /bin/sh
And then, from the tty try curling it:
curl -v -X BAN http://varnish-service
From here, either you'll get 200 purged or 405 Not allowed. Either way, you've hit the Varnish pod across the mesh.
Your issue might be related to mTLS in your cluster. You can check if it's enabled by issuing this command*:
istioctl authn tls-check $(kubectl get pod -l app=varnish -o jsonpath={.items..metadata.name}) varnish-service.default.svc.cluster.local
*The command assumes that you're using the definitions shared in this post. If not, you can adjust accordingly.
I tested this running GKE twice: One with open source Istio via Helm install and another using the Google managed Istio installation (in permissive mode).