I have a simple single-page Golang web application that I'm trying to migrate to Istio.
My prod setup (via nginx ingress):
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: goapp
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
spec:
tls:
- hosts:
- mycustomapp.mycustomapp.com
secretName: go-tls
rules:
- host: mycustomapp.mycustomapp.com
http:
paths:
- path: /
backend:
serviceName: mycustomapp
servicePort: 80
And I'm trying to build at least an HTTP configuration for Istio:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: goapp
annotations:
kubernetes.io/ingress.class: istio
spec:
rules:
- host: mycustomapp.mycustomapp.com
http:
paths:
- path: /
backend:
serviceName: mycustomapp
servicePort: 80
But I always get a 404 from the Istio load balancer on a clean cluster with only Istio 0.7.1 installed. Samples like bookinfo and httpbin work well.
Application yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: mycustomapp
name: mycustomapp
spec:
replicas: 1
selector:
matchLabels:
k8s-app: mycustomapp
template:
metadata:
labels:
k8s-app: mycustomapp
spec:
containers:
- name: mycustomapp
image: xxxx.azurecr.io/mycustomapp:999
ports:
- containerPort: 80
protocol: TCP
imagePullSecrets:
- name: xxxx
serviceAccountName: mycustomapp
---
kind: Service
apiVersion: v1
metadata:
annotations:
prometheus.io/scrape: 'true'
labels:
k8s-app: mycustomapp
name: mycustomapp
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
selector:
k8s-app: mycustomapp
To get rid of the 404 error in your case, it should be enough to add the correct port name to the service and deployment YAML files and add the Istio sidecar to the deployment. Then redeploy all changed files.
You may also need to add the label app: mycustomapp to the service and deployment, but I'm not sure whether it is required or optional.
Here is an example of the service.yaml file with the correct port name (you can read more about port names here):
kind: Service
apiVersion: v1
metadata:
annotations:
prometheus.io/scrape: 'true'
labels:
app: mycustomapp
k8s-app: mycustomapp
name: mycustomapp
spec:
type: ClusterIP
ports:
- name: http-80
port: 80
targetPort: 80
selector:
k8s-app: mycustomapp
Ensure you also have the correct port name in your deployment file.
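For example, the container's ports section in the deployment with the matching name (a minimal sketch; the name mirrors the one used in the service):
ports:
- name: http-80
  containerPort: 80
  protocol: TCP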
You can add the Istio sidecar to the deployment manually by following these steps:
Download and unpack the latest Istio release suitable for your OS from https://github.com/istio/istio/releases
Change directory to the Istio package. For example, if the package is istio-0.7:
cd istio-0.7
Create inject config:
kubectl create -f install/kubernetes/istio-sidecar-injector-configmap-release.yaml --dry-run -o=jsonpath='{.data.config}' > inject-config.yaml
Create mesh config:
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
Add the Istio sidecar container to your deployment:
bin/istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--filename path/to/original/deployment.yaml \
--output deployment-injected.yaml
Deploy the new deployment:
kubectl apply -f deployment-injected.yaml
If you want to have automatic sidecar injection, follow this manual.
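For reference, once the sidecar injector from that manual is installed, automatic injection is switched on per namespace with a label (shown here for the default namespace; adjust to yours):
kubectl label namespace default istio-injection=enabled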
You can check if the sidecar has been injected into the deployment:
$ kubectl get deployment mycustomapp -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
mycustomapp 1 1 1 1 3h mycustomapp,istio-proxy nginx:1.7.9,docker.io/istio/proxy:0.7.1 k8s-app=mycustomapp
Related
I have deployed the nginx-ingress controller in a Kubernetes cluster hosted on AWS using kOps (Kubernetes Operations).
As part of the ingress controller testing, I installed an nginx-ingress controller.
To test it, I created one deployment, service, and ingress. Below is the YAML code that I used.
I am able to see the backends in the ingress.
When I run a different pod on the same cluster and try to connect to the service from it, I am able to reach the deployment.
When I expose the same deployment with a NodePort, I am also able to reach the deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
Service
---
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
Ingress
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- host: nginx.gopinallani.xyz
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deployment
port:
number: 80
ingress-controller --> https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/deploy.yaml
Kubernetes version:
Client Version: v1.25.0
Kustomize Version: v4.5.7
Server Version: v1.24.4
I have added the DNS entry in the AWS Route 53 zone.
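To verify the path from DNS to the controller (resource names per the controller-v1.3.0 manifest linked above), you can check the controller's LoadBalancer hostname and confirm the Route 53 record resolves to it:
kubectl get svc -n ingress-nginx ingress-nginx-controller
dig +short nginx.gopinallani.xyz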
I have set up MinIO in Kubernetes (k3s), a one-node implementation.
Services
apiVersion: v1
kind: Service
metadata:
name: minio
namespace: minio
labels:
app: minio
spec:
clusterIP: None
selector:
app: minio
ports:
- port: 9011
name: minio
---
apiVersion: v1
kind: Service
metadata:
name: minio-service
namespace: minio
labels:
app: minio
spec:
type: LoadBalancer
selector:
app: minio
ports:
- port: 9012
targetPort: 9011
protocol: TCP
StatefulSet
[. . .]
containers:
- name: ches
image: minio/minio
args:
- server
- /data
[. . .]
- containerPort: 9000
hostPort: 9011
[. . .]
The command kubectl logs minio-0 -n minio returns the following:
API: http://10.42.0.14:9000 http://127.0.0.1:9000
Console: http://10.42.0.14:41989 http://127.0.0.1:41989
I am trying to set up Ingress. The steps that I followed are:
Set up an Ingress Controller
From here: https://kubernetes.github.io/ingress-nginx/deploy/
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
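A quick way to confirm the controller is up before creating the internal service and ingress (names follow the default chart values from the command above):
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx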
Create an Internal Service
apiVersion: v1
kind: Service
metadata:
name: minio-service-ingress
namespace: minio
labels:
app: minio
spec:
selector:
app: minio
ports:
- port: 9011
name: minio
Create Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio-ingress
namespace: minio
spec:
rules:
- host: minio.com
http:
paths:
- backend:
service:
name: minio-service-ingress
port:
number: 9011
path: /
pathType: Prefix
When executing kubectl get ing -n minio:
NAME CLASS HOSTS ADDRESS PORTS AGE
minio-ingress <none> minio.com 192.168.1.14 80 43m
In /etc/hosts I added the entry:
192.168.1.14 minio.com
However, when I try to open http://minio.com/ in a browser, I get:
Bad Gateway
Am I missing something here?
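One thing worth double-checking (an observation from the manifests above, not a confirmed fix): hostPort publishes 9011 on the node, not on the pod IP, and minio-service-ingress sets no targetPort, so it defaults to 9011 while the container listens on 9000. Inspecting the endpoints should show where traffic actually goes:
kubectl get endpoints minio-service-ingress -n minio
kubectl describe svc minio-service-ingress -n minio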
Hello. When I use a NodePort to expose my Redis service, it works fine; I am able to access it.
But if I switch to the Ingress Nginx controller, it refuses to connect. Other apps work fine with ingress.
Here is my service:
apiVersion: v1
kind: Service
metadata:
name: redis-svc
spec:
# type: NodePort
ports:
- name: http
port: 6379
targetPort: 6379
protocol: TCP
# nodePort: 30007
selector:
app: redis
And here is ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: redis-ing
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
# nginx.ingress.kubernetes.io/enable-cors: "true"
# nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
# nginx.ingress.kubernetes.io/cors-allow-origin: "https://test.hefest.io"
# nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
tls:
- secretName: letsencrypt-prod
hosts:
- redis-dev.domain.com
rules:
- host: redis-dev.domain.com
http:
paths:
- path: /
backend:
serviceName: redis-svc
servicePort: 6379
Any idea what the issue could be?
I am using this ingress controller: https://github.com/nginxinc/kubernetes-ingress
Redis works on 6379, which is not an HTTP port (80, 443), so you need to enable TCP/UDP support in the nginx ingress controller. The minikube docs here show how to do it for Redis.
Update the TCP and/or UDP services configmaps
Borrowing from the tutorial on configuring TCP and UDP services with the ingress nginx controller, we will need to edit the ConfigMap that is installed by default when enabling the minikube ingress addon.
There are two ConfigMaps, one for TCP services and one for UDP services. By default they look like this:
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
name: udp-services
namespace: ingress-nginx
Since these configmaps are centralized and may contain configurations, it is best if we only patch them rather than completely overwrite them.
Let’s use this redis deployment as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-deployment
namespace: default
labels:
app: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
imagePullPolicy: Always
name: redis
ports:
- containerPort: 6379
protocol: TCP
Create a file redis-deployment.yaml and paste the contents above. Then install the redis deployment with the following command:
kubectl apply -f redis-deployment.yaml
Next we need to create a service that can route traffic to our pods:
apiVersion: v1
kind: Service
metadata:
name: redis-service
namespace: default
spec:
selector:
app: redis
type: ClusterIP
ports:
- name: tcp-port
port: 6379
targetPort: 6379
protocol: TCP
Create a file redis-service.yaml and paste the contents above. Then install the redis service with the following command:
kubectl apply -f redis-service.yaml
To add a TCP service to the nginx ingress controller you can run the following command:
kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
Where:
6379 : the port your service should listen to from outside the minikube virtual machine
default : the namespace that your service is installed in
redis-service : the name of the service
We can verify that our resource was patched with the following command:
kubectl get configmap tcp-services -n kube-system -o yaml
We should see something like this:
apiVersion: v1
data:
"6379": default/redis-service:6379
kind: ConfigMap
metadata:
creationTimestamp: "2019-10-01T16:19:57Z"
labels:
addonmanager.kubernetes.io/mode: EnsureExists
name: tcp-services
namespace: kube-system
resourceVersion: "2857"
selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
uid: 4f7fac22-e467-11e9-b543-080027057910
The only value you need to validate is that there is a value under the data property that looks like this:
"6379": default/redis-service:6379
Patch the ingress-nginx-controller
There is one final step that must be done in order to obtain connectivity from outside the cluster. We need to patch our nginx controller so that it listens on port 6379 and can route traffic to your service. To do this we need to create a patch file.
spec:
template:
spec:
containers:
- name: ingress-nginx-controller
ports:
- containerPort: 6379
hostPort: 6379
Create a file called ingress-nginx-controller-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n kube-system
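After the patch rolls out, you can test connectivity from outside the cluster (assuming minikube and a local redis-cli):
redis-cli -h $(minikube ip) -p 6379 ping
A PONG reply means traffic is reaching the Redis pod through the ingress controller.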
The way I made it work was by enabling ssl-passthrough on the nginx-ingress controller.
Once my nginx-ingress controller was patched, I was able to connect.
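For reference, ssl-passthrough is disabled by default in the community kubernetes/ingress-nginx controller and is enabled with a controller flag; a sketch of the relevant args in the controller deployment (keep whatever other flags are already set):
args:
- /nginx-ingress-controller
- --enable-ssl-passthrough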
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: redis-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
spec:
tls:
- hosts:
- <host_address>
secretName: <k8s_secret_name>
rules:
- host: <host_address>
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: redis-service
port:
number: 6380
Python snippet to connect:
import redis
r = redis.StrictRedis(host='<host_address>',
port=443, db=0, ssl=True,
ssl_ca_certs='server.pem')
print(r.ping())
I do use Redis with TLS in my case.
I get this error message when I try to access it with the public IP:
"{"message":"failure to get a peer from the ring-balancer"}"
Looks like Kong is unable to reach the upstream services.
I am using the voting app.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: telehealth-ingress
namespace: kong
annotations:
kubernetes.io/ingress.class: "kong"
spec:
rules:
- http:
paths:
- backend:
serviceName: voting-service
servicePort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
name: voting-service
labels:
name: voting-service
app: voting-app
spec:
ports:
- targetPort: 80
port: 80
selector:
name: voting-app-pod
app: voting-app
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: voting-app-pod
labels:
name: voting-app-pod
app: voting-app
spec:
template:
metadata:
labels:
name: voting-app-pod
app: voting-app
spec:
containers:
- name: voting-app
image: dockersamples/examplevotingapp_vote
ports:
- containerPort: 80
replicas: 2
selector:
matchLabels:
app: voting-app
There could be one of many things wrong here, but essentially your ingress cannot reach your backend.
Is your backend up and running?
Check backend pods are "Running"
kubectl get pods
Check backend deployment has all replicas up
kubectl get deploy
Connect to the app pod and run a localhost:80 request
kubectl exec -it <pod-name> sh
# curl http://localhost
Connect to the ingress pod and see if you can reach the service from there
kubectl exec -it <ingress-pod-name> sh
# dig voting-service (can you DNS resolve it)
# telnet voting-service 80
# curl http://voting-service
This issue might shed some light on why you can't reach the backend service. What HTTP error code are you seeing?
The problem was resolved after deploying the services and deployments in the kong namespace instead of the default namespace. Now I can access the application with the Kong ingress public IP.
It looks like the Kong ingress is not able to resolve the DNS for a headless service; we need to mention the FQDN in the ingress YAML.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: telehealth-ingress
namespace: kong
annotations:
kubernetes.io/ingress.class: "kong"
spec:
rules:
- http:
paths:
- backend:
serviceName: voting-service
servicePort: 80
Try this; I think it will work.
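To confirm the fix, curl the Kong proxy's public IP (placeholder address; substitute your own):
curl -i http://<kong-public-ip>/
A 200 response from the voting app means Kong can reach the upstream service.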
I am trying to set up microservices on Kubernetes on Google Cloud Platform. I've created deployment, ClusterIP service, and ingress configuration files.
First, after creating a cluster, I run this command to install the nginx ingress:
helm install my-nginx stable/nginx-ingress --set rbac.create=true
I use helm v3.
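To confirm the controller installed correctly before applying the app manifests (a quick sanity check with the release name from the command above):
helm ls
kubectl get pods,svc | grep nginx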
Then I apply the deployment and ClusterIP configurations.
Deployment and ClusterIP configurations:
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-production-deployment
spec:
replicas: 2
selector:
matchLabels:
component: app-production
template:
metadata:
labels:
component: app-production
spec:
containers:
- name: app-production
image: eu.gcr.io/my-project/app:1.0
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: app-production-cluser-ip-service
spec:
type: ClusterIP
selector:
component: app-production
ports:
- port: 80
targetPort: 80
protocol: TCP
My ingress config is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: app.example.com
http:
paths:
- path: /
backend:
serviceName: app-production-cluster-ip-service
servicePort: 80
I get this error from the Google Cloud Platform logs within the ingress controller:
Error obtaining Endpoints for Service "default/app-production-cluster-ip-service": no object matching key "default/app-production-cluster-ip-service" in local store
But when I run the kubectl get endpoints command, the output is this:
NAME ENDPOINTS AGE
app-production-cluser-ip-service 10.60.0.12:80,10.60.1.13:80 17m
I am really not sure what I'm doing wrong.
The service name mentioned in the ingress does not match: the ingress references app-production-cluster-ip-service, but the service is actually named app-production-cluser-ip-service (note the typo). Please recreate the service with the correct name and check:
apiVersion: v1
kind: Service
metadata:
name: app-production-cluster-ip-service
spec:
type: ClusterIP
selector:
component: app-production
ports:
- port: 80
targetPort: 80
protocol: TCP
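After recreating the service, the endpoints should appear under the name the ingress expects:
kubectl get endpoints app-production-cluster-ip-service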