Istio Newbie here,
I'm doing my first tests with Istio (version 1.3.0). Most things run nicely without much effort.
What I'm having an issue with is a service that talks to Varnish to clean up the cache. This service makes an HTTP request to every pod behind a headless service, and it fails with an HTTP 400 (Bad Request) error. The request uses the HTTP method "BAN", which I believe is the source of the problem, since requests with other methods reach Varnish without problems.
As a temporary workaround I changed the port name from http to varnish and everything started working again.
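Concretely, the only change was the Service's port name; the relevant ports block looked roughly like this (a sketch; Istio 1.3 picks the protocol from the port-name prefix, so a name that doesn't start with http makes the sidecar treat the traffic as opaque TCP):
ports:
- name: varnish   # was "http"; an unrecognized prefix means plain TCP to Istio
  protocol: TCP
  port: 80
  targetPort: 80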
I installed Istio using the Helm chart for 1.3.0:
helm install istio install/kubernetes/helm/istio --set kiali.enabled=true --set global.proxy.accessLogFile="/dev/stdout" --namespace istio-system --version 1.3.0
Running on GKE 1.13.9-gke.3 and Varnish is version 6.2
I was able to get it working using Istio without mTLS using the following definitions:
ConfigMap
Just allowing the pod and service CIDRs for BAN requests and expecting them to come from the Varnish service FQDN.
apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish-configuration
data:
  default.vcl: |
    vcl 4.0;
    import std;
    backend default {
      .host = "varnish-service";
      .port = "80";
    }
    acl purge {
      "localhost";
      "10.x.0.0"/14; #Pod CIDR
      "10.x.0.0"/16; #Service CIDR
    }
    sub vcl_recv {
      # this code below allows PURGE from localhost and x.x.x.x
      if (req.method == "BAN") {
        if (!client.ip ~ purge) {
          return(synth(405,"Not allowed."));
        }
        return (purge);
      }
    }
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: varnish
spec:
  replicas: 1
  selector:
    matchLabels:
      app: varnish
  template:
    metadata:
      labels:
        app: varnish
    spec:
      containers:
      - name: varnish
        image: varnish:6.3
        ports:
        - containerPort: 80
          name: varnish-port
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: varnish-conf
          mountPath: /etc/varnish
      volumes:
      - name: varnish-conf
        configMap:
          name: varnish-configuration
Service
apiVersion: v1
kind: Service
metadata:
  name: varnish-service
  labels:
    workload: varnish
spec:
  selector:
    app: varnish
  ports:
  - name: varnish-port
    protocol: TCP
    port: 80
    targetPort: 80
After deploying these, you can run a cURL-enabled pod:
kubectl run bb-$RANDOM --rm -i --image=yauritux/busybox-curl --restart=Never --tty -- /bin/sh
And then, from the tty try curling it:
curl -v -X BAN http://varnish-service
From here, you'll get either a 200 purged or a 405 Not allowed response. Either way, you've reached the Varnish pod across the mesh.
Your issue might be related to mTLS in your cluster. You can check if it's enabled by issuing this command*:
istioctl authn tls-check $(kubectl get pod -l app=varnish -o jsonpath={.items..metadata.name}) varnish-service.default.svc.cluster.local
*The command assumes that you're using the definitions shared in this post. If not, adjust accordingly.
I tested this on GKE twice: once with open-source Istio installed via Helm and once with the Google-managed Istio installation (in permissive mode).
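If the tls-check above reports a conflict, one option is a DestinationRule that disables mTLS for just the Varnish host. This is only a sketch, not part of the original setup, and the resource name here is made up:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: varnish-service-no-mtls   # hypothetical name
spec:
  host: varnish-service.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE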
Related
I am trying to autoscale a deployment and a statefulset by running, respectively, these two commands:
kubectl autoscale statefulset mysql --cpu-percent=50 --min=1 --max=10
kubectl expose deployment frontend --type=LoadBalancer --name=frontend
Sadly, on the minikube dashboard, this error appears under both services:
failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Searching online, I read that it might be a DNS error, so I checked, but CoreDNS seems to be running fine.
Both workloads are nothing special; this is the 'frontend' deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: hubuser/repo
        ports:
        - containerPort: 3000
Has anyone got any suggestions?
First of all, could you please verify if the API is working fine? To do so, please run kubectl get --raw /apis/metrics.k8s.io/v1beta1.
If you get an error similar to:
“Error from server (NotFound):”
Please follow these steps:
1. Remove all the proxy environment variables from the kube-apiserver manifest.
2. In the kube-controller-manager-amd64 manifest, set --horizontal-pod-autoscaler-use-rest-clients=false.
3. The last scenario is that your metrics-server add-on is disabled (it is disabled by default). You can verify it by using:
$ minikube addons list
If it is disabled, you will see something like metrics-server: disabled.
You can enable it by using:
$ minikube addons enable metrics-server
When it is done, delete and recreate your HPA.
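If you prefer a manifest over kubectl autoscale, a minimal HorizontalPodAutoscaler sketch for the frontend Deployment from the question (autoscaling/v1, the same 50% CPU target, 1 to 10 replicas) would be:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
Note that the target pods also need CPU resource requests set; without them the HPA cannot compute utilization even when metrics-server is running.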
You can use the following thread as a reference.
Does the default helm chart actually run and do something I can observe?
I've tried running it (the default helm chart) and it does run. So, what does it do?
To recreate the problem I'm asking about, do the following:
helm create helm-it # Create a helm chart (the default)
helm install helm-it ./helm-it # Run it
helm list # See it running
helm get manifest helm-it # See the manifest (YAML) that is running (I think)
By examining the manifest (using helm get manifest helm-it), I can see how it's configured. The important bit is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helm-it
  labels:
    helm.sh/chart: helm-it-0.1.0
    app.kubernetes.io/name: helm-it
    app.kubernetes.io/instance: helm-it
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: helm-it
      app.kubernetes.io/instance: helm-it
  template:
    metadata:
      labels:
        app.kubernetes.io/name: helm-it
        app.kubernetes.io/instance: helm-it
    spec:
      serviceAccountName: helm-it
      securityContext:
        {}
      containers:
      - name: helm-it
        securityContext:
          {}
        image: "nginx:1.16.0"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
        resources:
          {}
It seems to be running nginx on port 80, but when I tried to access it using curl, I got an error (see output below).
curl http://localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
I looked for a solution
When looking for a solution, I started with the helm create documentation at https://helm.sh/docs/helm/helm_create/
Searches for "does helm create run" brought up several links on how to create a Helm chart in 5 minutes (too many to review), but the answer might be in one of these web pages.
https://phoenixnap.com/kb/create-helm-chart was one of the results, but it did not answer my question.
Why I care
The reason I'm asking is that I'm trying to convert a k8s YAML file into a Helm chart and want to know what I'm starting with, so I know what I can delete and what I need to add. I found this link:
How to convert k8s yaml to helm chart - and it said I could just
just drop that file under templates/ and add a Chart.yml.
which I tried but it didn't work.
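(For reference, the bare-minimum Chart.yaml that suggestion seems to assume is something like the sketch below; the name, description, and version values here are placeholders.)
apiVersion: v2
name: my-converted-chart
description: A chart wrapping an existing Kubernetes manifest
type: application
version: 0.1.0
appVersion: "1.0.0"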
The templates created by the helm create command run Nginx as a stateless application. I found this in the book Learning Helm by Matt Butcher, Matt Farina, and Josh Dolitsky, on page 67. It is available in O'Reilly online books and on Google Books.
To access the NGINX application, you might need to forward data from your host to the K8S cluster.
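That is because the default chart only exposes nginx inside the cluster: helm create also renders a Service, which (from memory, so treat this as a sketch rather than the exact generated file) looks roughly like this ClusterIP definition, meaning nothing listens on your host's localhost:80:
apiVersion: v1
kind: Service
metadata:
  name: helm-it
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: helm-it
    app.kubernetes.io/instance: helm-it
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http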
When performing the helm install it gives this output:
helm install myapp anvil
NAME: myapp
LAST DEPLOYED: Fri Apr 23 12:23:46 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=anvil,app.kubernetes.io/instance=myapp" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
The complicated-looking commands simply get the name of the pod and the port NGINX is using, in order to build a command that forwards traffic from your localhost to the Kubernetes cluster. That command is:
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
Once port-forwarding is running, you can access http://localhost:8080/ and get the page that says NGINX is running. You'll know it's working when the port-forward command prints additional log lines. Mine displayed the following:
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
Handling connection for 8080
Each time you hit the URL, an additional logging line Handling connection for 8080 is displayed (and the web page displays the Welcome to NGINX page).
My question is quite simple and relates to the awesome Kubernetes tool KinD (Kubernetes in Docker). I am using KinD for my Flask application:
import os
import requests
from flask import Flask
from jaeger_client import Config
from flask_opentracing import FlaskTracing

app = Flask(__name__)

config = Config(
    config={
        'sampler': {'type': 'const', 'param': 1},
        'logging': True,
        'reporter_batch_size': 1,
    },
    service_name="service")
jaeger_tracer = config.initialize_tracer()
tracing = FlaskTracing(jaeger_tracer, True, app)


def get_counter(counter_endpoint):
    counter_response = requests.get(counter_endpoint)
    return counter_response.text


def increase_counter(counter_endpoint):
    counter_response = requests.post(counter_endpoint)
    return counter_response.text


@app.route('/')
def hello_world():
    counter_service = os.environ.get('COUNTER_ENDPOINT', default="https://localhost:5000")
    counter_endpoint = f'{counter_service}/api/counter'
    counter = get_counter(counter_endpoint)
    increase_counter(counter_endpoint)
    return f"""Hello, World!
You're visitor number {counter} in here!\n\n"""


# Entry point assumed from the Dockerfile's CMD ["python", "/app/main.py"];
# bind to 0.0.0.0 so the app is reachable from outside the container.
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Dockerfile:
FROM python:3.7-alpine
RUN mkdir /app
RUN apk add --no-cache py3-pip python3 && \
    pip3 install flask Flask-Opentracing jaeger-client
WORKDIR /app
ADD ./app /app/
ADD ./requirements.txt /app/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/app/main.py"]
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask-deployment
spec:
  selector:
    matchLabels:
      app: my-flask-pod
  replicas: 2
  template:
    metadata:
      labels:
        app: my-flask-pod
    spec:
      containers:
      - name: my-flask-container
        image: yusufkaratoprak/awsflaskeks:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 5000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-flask-service
spec:
  selector:
    app: my-flask-pod
  ports:
  - port: 6000
    targetPort: 5000
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10
Configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105 #Update this with your Nodes IP range
Results:
Everything looks good. But when I open http://172.42.42.101:6000/ in my browser, the page fails to load (screenshot in the original post).
I also want to add my Service events; everything looks good there as well.
Also, following @Kaan Mersin's advice, I added 80 as the default port.
Actually, for a bare-metal server you should use a solution other than LoadBalancer to expose your application, such as NodePort, Ingress, and so on, because the LoadBalancer service type is handled natively only by cloud providers like Google, Azure, AWS, etc.
Check the official documentation for LoadBalancer.
The steps below assume you use NodePort instead of LoadBalancer; the NodePort service type also load-balances traffic between pods, if the balancing feature is all you are after.
You need to use http://172.42.42.101:30160 from outside the host that is running the containers. Port 6000 is only reachable inside the cluster via the internal IP (10.96.244.204 in your case). Whenever you expose your deployment, one of the NodePorts (by default between 30000 and 32767) is automatically assigned to the service for external requests (you can also define it manually).
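For clarity, here is a sketch of the Service from the question rewritten as NodePort; the explicit nodePort line is optional (if you omit it, Kubernetes picks one from the 30000-32767 range, such as the 30160 mentioned above):
apiVersion: v1
kind: Service
metadata:
  name: my-flask-service
spec:
  type: NodePort
  selector:
    app: my-flask-pod
  ports:
  - port: 6000
    targetPort: 5000
    nodePort: 30160   # optional; must fall in the 30000-32767 range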
For the service details, run the command below. Its output will give you the NodePort and other details.
kubectl describe services my-service
Please check the related Kubernetes documentation.
The error Chrome is throwing is ERR_UNSAFE_PORT. Change your Service port to 80 and then hit http://172.42.42.101/. Alternatively, you can choose any other port you like, as long as Chrome doesn't consider it an unsafe port. See this answer on Super User for a list of unsafe ports.
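For example, the ports section of the Service from the question would become something like this sketch (keeping targetPort 5000 for the Flask container):
ports:
- port: 80
  targetPort: 5000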
I have configured a web application pod exposed via Apache on port 80. I'm unable to configure a Service + Ingress for accessing it from the internet. The issue is that the backend services always report as UNHEALTHY.
Deployment Config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
        - containerPort: 80
          name: http-server
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 80
Ingress Config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000
This results in backend services reporting as UNHEALTHY.
The health check settings:
Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE
Additional information: I've tried a different approach of exposing the deployment as a LoadBalancer with an external IP, and that works perfectly. When using NodePort + Ingress, the issue persists.
With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.
Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.
First, make sure your pod is serving traffic properly;
kubectl exec [pod_name] -- wget localhost:80
If the application has curl built in, you can use that instead of wget.
If the application has neither wget nor curl, skip to the next step.
Get the following output and keep track of it:
kubectl get po -l name=webapp -o wide
kubectl get svc webapp-service
You need to keep the Service and Pod cluster IPs.
SSH to a node in your cluster and run sudo toolbox bash.
Install curl:
apt-get install curl
Test the pods to make sure they are serving traffic within the cluster:
curl -I [pod_clusterIP]:80
This needs to return a 200 response
Test the service:
curl -I [service_clusterIP]:80
If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.
If the pod is working but the service is not, there is an issue with the routes in your iptables, which are managed by kube-proxy; that would be a problem with the cluster.
Finally, if both the pod and the service are working, there is an issue with the load balancer health checks, which Google would need to investigate.
As Patrick mentioned, the checks will be created automatically by GCP.
By default, GKE will use readinessProbe.httpGet.path for the health check.
But if there is no readinessProbe configured, then it will just use the root path /, which must return an HTTP 200 (OK) response (and that's not always the case; for example, if the app redirects to another path, the GCP health check will fail).
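For completeness, a minimal readinessProbe sketch for the webapp container from the question (assuming / on port 80 returns 200; the delay and period values are just placeholders):
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10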
I use minikube to create a local Kubernetes cluster.
I create a ReplicationController via the webapp-rc.yaml file.
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp
spec:
  replicas: 2
  template:
    metadata:
      name: webapp
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: tomcat
        ports:
        - containerPort: 8080
Then I print the pods' IPs to stdout:
kubectl get pods -l app=webapp -o yaml | grep podIP
podIP: 172.17.0.18
podIP: 172.17.0.1
And I want to access a pod using curl:
curl 172.17.0.18:8080
But stdout gives me: curl: (52) Empty reply from server
I know I can access my application in the Docker container inside the pod via a Service.
I found this command in a book, but the book does not give the context for executing it.
Using minikube, how can I access a pod via its pod IP with curl from the host machine?
update 1
I found a way using kubectl proxy:
➜ ~ kubectl proxy
Starting to serve on 127.0.0.1:8001
and then I can access pod via curl like this:
curl http://localhost:8001/api/v1/namespaces/default/pods/webapp-jkdwz/proxy/
The pod name webapp-jkdwz can be found with kubectl get pods -l app=webapp.
update 2
minikube ssh - log into the minikube VM
and then I can use curl <podIP>:<podPort>, which in my case is curl 172.17.0.18:8080
First of all, the tomcat image exposes port 8080, not 80, so the correct YAML would be:
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp
spec:
  replicas: 2
  template:
    metadata:
      name: webapp
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: tomcat
        ports:
        - containerPort: 8080
minikube is executed inside a virtual machine, so the curl 172.17.0.18:8080 would only work from inside that virtual machine.
You can always create a service to expose your apps:
kubectl expose rc webapp --type=NodePort
And use the following command to get the URL:
minikube service webapp --url
If you need to query a specific pod, use port forwarding:
kubectl port-forward <POD NAME> 8080
Or just ssh into minikube's virtual machine and query from there.
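For reference, kubectl expose rc webapp --type=NodePort creates a Service roughly equivalent to this sketch (the nodePort itself is auto-assigned unless you set it explicitly):
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP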
That command is correct, but it only works from a machine that has access to the overlay network. (In the case of minikube, the host machine does not have that access by default.)
You can set up a proxy to your pod with:
kubectl port-forward [name of your pod] [pod port]
Afterwards you can (from another shell):
curl 127.0.0.1:[pod port]/path
See also: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod