Kong ingress controller has no effect on Ingress resource - Kubernetes

I have a Kubernetes cluster v1.10 on CentOS 7, installed the hard way.
I installed the Kong ingress controller using Helm:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/kong
and this is the output:
NOTES:
1. Kong Admin can be accessed inside the cluster using:
   DNS=guiding-wombat-kong-admin.default.svc.cluster.local
   PORT=8444
   To connect from outside the K8s cluster:
   HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}')
   PORT=$(kubectl get svc --namespace default guiding-wombat-kong-admin -o jsonpath='{.spec.ports[0].nodePort}')
2. Kong Proxy can be accessed inside the cluster using:
   DNS=guiding-wombat-kong-proxy.default.svc.cluster.local
   PORT=8443
   To connect from outside the K8s cluster:
   HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}')
   PORT=$(kubectl get svc --namespace default guiding-wombat-kong-proxy -o jsonpath='{.spec.ports[0].nodePort}')
and I deployed this dummy app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: http-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: gcr.io/google_containers/echoserver:1.8
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc
---
and I deployed ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
and when I run:
kubectl get ing
NAME      HOSTS     ADDRESS   PORTS     AGE
foo-bar   foo.bar             80        1m
and when I browse https://node-IP:controller-admin I get:
{"next":null,"data":[]}
How can I troubleshoot this issue and find the solution?
Thank you :D

I recommend installing it using this guide, only not using Minikube.
It worked for me on AWS:
$ curl -H 'Host: foo.bar' http://35.162.32.30
Hostname: http-svc-66ffffc458-jkxsl

Pod Information:
    node name:      ip-x-x-x-x.us-west-2.compute.internal
    pod name:       http-svc-66ffffc458-jkxsl
    pod namespace:  default
    pod IP:         192.168.x.x

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=192.168.x.x
    method=GET
    real path=/
    query=
    request_version=1.1
    request_uri=http://192.168.x.x:8080/

Request Headers:
    accept=*/*
    connection=keep-alive
    host=192.168.x.x:8080
    user-agent=curl/7.58.0
    x-forwarded-for=172.x.x.x
    x-forwarded-host=foo.bar
    x-forwarded-port=8000
    x-forwarded-proto=http
    x-real-ip=172.x.x.x

Request Body:
    -no body in request-
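If the admin API still returns an empty data array, it usually means the controller never synced the Ingress into Kong. A few checks worth running first (my suggestions; HOST/PORT are the admin values from the Helm NOTES above, and the pod/container names in angle brackets are placeholders to substitute from your own cluster):

# 1. List what Kong actually has configured; an empty "data" array means
#    the ingress controller never synced the Ingress into Kong.
curl -k "https://$HOST:$PORT/routes"    # use /apis on older Kong versions

# 2. Check the controller's logs for sync errors.
kubectl logs <kong-controller-pod> -c <controller-container> | grep -i error

# 3. Verify the backend Service resolves to pod endpoints.
kubectl get endpoints http-svc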

Related

Get service IP:PORT as environment variable in pod

I have a service running and attached to a pod. In the pod, I need to define an env variable that points to the service itself. If I run locally, I would set the path to localhost:8080 and it works. How can I set the env variable to point to the service?
user#user % kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)             AGE
my-service   LoadBalancer   10.96.116.26   129.153.28.245   8080:31495/TCP      21h
kubernetes   ClusterIP      10.96.0.1      <none>           443/TCP,12250/TCP   5d18h
If the configuration is:
spec:
  containers:
  - name: myapp
    image: path/to/imageregistry/image:v1.0.0-amd64
    env:
    - name: BASE_PATH
      value: "129.153.28.245:8080"
The app is working, in the sense that if I open 129.153.28.245:8080/app/pages in a browser, the website loads. If I replace <EXTERNAL-IP> with <CLUSTER-IP> it does not load.
How can I retrieve the <EXTERNAL-IP> from the service and put it into an env variable, something like:
env:
- name: BASE_PATH
  value: "<EXTERNAL-IP-FROM-SERVICE-NAME>:8080"
or is there another, better approach to do that?
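For traffic that originates inside the cluster, one common alternative (my note, not from the original post) is to reference the Service by its DNS name instead of any IP; a minimal sketch, assuming the default namespace:

env:
- name: BASE_PATH
  # kube-dns/CoreDNS resolves <service>.<namespace>.svc.cluster.local
  # to the Service's cluster IP; 8080 is the Service port.
  value: "my-service.default.svc.cluster.local:8080"

Note this only helps for in-cluster callers; a browser outside the cluster still needs the external IP or a DNS record pointing at it.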
Here's the full Deployment and Service yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: xxx.ocir.io/xxxxxx/myrepo/myimage:v1.0.0-amd64
        env:
        - name: BASE_PATH
          value: "129.153.28.245:8080"
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: ocirsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp
Exposing Pod and Cluster Vars to Containers
Let's say you need some data about the Pod or K8s environment in your application, for example to add Pod information as metadata to logs, such as:
Pod IP
Pod Namespace
Service Account of Pod
But how do you access this information? All Pod information can be made available in the config file.
There are two ways to expose Pod fields to a running container:
Environment Variables
Volume Files
Environment Variables
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-env
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      - name: log-sider
        image: busybox
        command: [ 'sh', '-c' ]
        args:
        - while true; do
            echo sync app logs;
            printenv POD_NAME POD_IP POD_SERVICE_ACCOUNT;
            sleep 20;
          done;
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
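Volume Files
The second method projects fields as files via a downwardAPI volume; a minimal sketch of my own, following the pattern above (note that volume fieldRefs support the metadata fields, while node/pod IPs and the service account name are only available through env vars):

apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo
  labels:
    app: nginx
spec:
  containers:
  - name: app
    image: busybox
    # Each projected field shows up as a file under the mount path.
    command: [ 'sh', '-c', 'cat /etc/podinfo/labels /etc/podinfo/namespace; sleep 3600' ]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
      - path: namespace
        fieldRef:
          fieldPath: metadata.namespace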

How to deploy Ingress to expose MinIO cluster outside

I have set up MinIO in Kubernetes (k3s), a one-node implementation.
Services
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: minio
  labels:
    app: minio
spec:
  clusterIP: None
  selector:
    app: minio
  ports:
  - port: 9011
    name: minio
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: minio
  labels:
    app: minio
spec:
  type: LoadBalancer
  selector:
    app: minio
  ports:
  - port: 9012
    targetPort: 9011
    protocol: TCP
StatefulSet
[. . .]
containers:
- name: ches
  image: minio/minio
  args:
  - server
  - /data
  [. . .]
  - containerPort: 9000
    hostPort: 9011
[. . .]
Command kubectl logs minio-0 -n minio returns the following:
API: http://10.42.0.14:9000 http://127.0.0.1:9000
Console: http://10.42.0.14:41989 http://127.0.0.1:41989
I am trying to set up Ingress. The steps I followed are:
Set up an Ingress Controller
From here: https://kubernetes.github.io/ingress-nginx/deploy/
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
Create an Internal Service
apiVersion: v1
kind: Service
metadata:
  name: minio-service-ingress
  namespace: minio
  labels:
    app: minio
spec:
  selector:
    app: minio
  ports:
  - port: 9011
    name: minio
Create Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-ingress
  namespace: minio
spec:
  rules:
  - host: minio.com
    http:
      paths:
      - backend:
          service:
            name: minio-service-ingress
            port:
              number: 9011
        path: /
        pathType: Prefix
When executing kubectl get ing -n minio:
NAME            CLASS    HOSTS       ADDRESS        PORTS   AGE
minio-ingress   <none>   minio.com   192.168.1.14   80      43m
In /etc/hosts I added the entry:
192.168.1.14 minio.com
However, when I try to open http://minio.com/ in a browser, I get:
Bad Gateway
Am I missing something here?
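One way to narrow down a Bad Gateway like this (my suggestion, not part of the original post): confirm the Service behind the Ingress resolves to endpoints, and that its target port matches the port MinIO actually listens on — the logs above show the API on 9000, while minio-service-ingress defaults targetPort to 9011:

# Does the Service resolve to any pod endpoints at all?
kubectl get endpoints minio-service-ingress -n minio

# Inspect the Service's TargetPort; compare it with MinIO's listen port (9000).
kubectl describe svc minio-service-ingress -n minio

# Bypass the Ingress and test the Service from inside the cluster.
kubectl run tmp --rm -it --restart=Never --image=busybox -n minio \
  -- wget -qO- http://minio-service-ingress:9011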

Kubernetes get nodeport mappings in a pod

I'm migrating an application to Docker/Kubernetes. This application has 20+ well-known ports it needs to be accessed on, from outside the Kubernetes cluster. For this, the application writes its publicly accessible IP to a database so the outside service knows how to reach it. The IP is taken from the downward API (status.hostIP).
One solution is defining the well-known ports as (static) nodePorts in the service, but I don't want that, because it would limit the usability of the node: if another service had incidentally taken one of the well-known ports, the application would not be able to start. Also, because Kubernetes opens the ports on all nodes in the cluster, I could only run one instance of the application per cluster.
Now I want to make the application aware of the port mappings done by the NodePort service. How can this be done, given that I don't see a hard link between the Service and the StatefulSet objects in Kubernetes?
Here is my (simplified) Kubernetes config:
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app
spec:
  ports:
  - port: 6000
    targetPort: 6000
    protocol: TCP
    name: debug-port
  - port: 6789
    targetPort: 6789
    protocol: TCP
    name: traffic-port-1
  selector:
    app: my-app
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app-sf
spec:
  serviceName: my-app-svc
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-repo/myapp/my-app:latest
        imagePullPolicy: Always
        env:
        - name: K8S_ServiceAccountName
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: K8S_ServerIP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: serverName
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - name: debug
          containerPort: 6000
        - name: traffic1
          containerPort: 6789
This can be done with an initContainer.
You can define an initContainer that fetches the nodePort and saves it into a directory shared with the main container; the container can then read the nodePort from that directory later. A simple demo:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: busybox
    command: ["sh", "-c", "cat /data/port; while true; do sleep 3600; done"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: tutum/curl
    command: ["sh", "-c", "TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`; curl -kD - -H \"Authorization: Bearer $TOKEN\" https://kubernetes.default:443/api/v1/namespaces/test/services/app 2>/dev/null | grep nodePort | awk '{print $2}' > /data/port"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
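One caveat worth flagging (my addition): with RBAC enabled, the pod's service account needs permission to read Services, or the API call in the initContainer will get a 403. A minimal sketch, assuming the namespace test from the demo above and the default service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader
  namespace: test
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-reader-binding
  namespace: test
subjects:
- kind: ServiceAccount
  name: default     # the service account the pod runs as
  namespace: test
roleRef:
  kind: Role
  name: service-reader
  apiGroup: rbac.authorization.k8s.io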

I can't connect my ingress with my service

I have a problem with my ingress and my service: when I connect to the IP of my server, I can't get it to redirect to the service I associated with port 80, which is my website. Here are the configuration files and the description of the ingress:
apiVersion: v1
kind: Namespace
metadata:
  name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_DATABASE
          value: bookstack
        - name: MYSQL_PASS
          value: pass
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_USER
          value: user
        image: mysql:5.7
        name: mysql
        ports:
        - containerPort: 3306
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: "3306"
    port: 3306
    targetPort: 3306
  selector:
    service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      containers:
      - env:
        - name: namespace
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: podname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: nodename
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: DB_DATABASE
          value: bookstack
        - name: DB_HOST
          value: mysql
        - name: DB_PASSWORD
          value: root
        - name: DB_USERNAME
          value: root
        image: solidnerd/bookstack:latest
        name: bookstack
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: http-port
    port: 80
    protocol: TCP
  selector:
    app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http
  namespace: bookstack
spec:
  backend:
    serviceName: bookstack
    servicePort: http-port
This is what appears on my ingress:
Name:             http
Namespace:        bookstack
Address:
Default backend:  bookstack:http-port (10.36.0.22:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     bookstack:http-port (10.36.0.22:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events:  <none>
It doesn't return any external IP to connect to. Why could that be? I want to avoid using LoadBalancer as the service type.
The main problem was that I hadn't activated the load balancer that Google Kubernetes Engine offers by default; without it active, no external IP could be generated because there was no balancer. There are two solutions: either activate GKE's default load balancer or create a Service of type LoadBalancer.
It is also important to define the readinessProbe and livenessProbe within the deployment.
An example:
readinessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
There wouldn't be an external IP, specifically because NodePort exposes the service on every node in your cluster at that specific port. So, essentially, you would have to point an external load balancer or the traffic source at each of the nodes in your cluster on that specific NodePort.
Note that if you are using ExternalTrafficPolicy=Local, only the nodes that have pods for your service will reply.
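In practice that means looking up the allocated NodePort and hitting any node's address on it; a sketch using the names from the manifests above (the jsonpath pattern mirrors the Helm NOTES earlier in this thread):

# Find the NodePort Kubernetes allocated for the bookstack Service.
NODEPORT=$(kubectl get svc bookstack -n bookstack -o jsonpath='{.spec.ports[0].nodePort}')

# Take any node's address; with ExternalTrafficPolicy=Cluster every node answers.
NODEIP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')

curl "http://$NODEIP:$NODEPORT"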

Service not exposing in kubernetes

I have a deployment and a service in GKE. I exposed the deployment as a LoadBalancer but I cannot access it through the service (curl or browser). I get:
curl: (7) Failed to connect to <my-Ip-Address> port 443: Connection refused
I can port forward directly to the pod and it works fine:
kubectl --namespace=redfalcon port-forward web-service-rf-76967f9c68-2zbhm 9999:443 >> /dev/null
curl -k -v --request POST --url https://localhost:9999/auth/login/ --header 'content-type: application/json' --header 'x-profile-key: ' --data '{"email":"<testusername>","password":"<testpassword>"}'
I have most likely misconfigured my service but cannot see how. Any help would be very much appreciated.
Service Yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: red-falcon-lb
  namespace: redfalcon
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    protocol: TCP
  selector:
    app: web-service-rf
Deployment YAML
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: web-service-rf
spec:
  selector:
    matchLabels:
      app: web-service-rf
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: web-service-rf
    spec:
      initContainers:
      - name: certificate-init-container
        image: proofpoint/certificate-init-container:0.2.0
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - "-namespace=$(NAMESPACE)"
        - "-pod-name=$(POD_NAME)"
        - "-query-k8s"
        volumeMounts:
        - name: tls
          mountPath: /etc/tls
      containers:
      - name: web-service-rf
        image: gcr.io/redfalcon-186521/redfalcon-webserver-minimal:latest
        # image: gcr.io/redfalcon-186521/redfalcon-webserver-full:latest
        command:
        - "./server"
        - "--port=443"
        imagePullPolicy: Always
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 443
        resources:
          limits:
            memory: "500Mi"
            cpu: "100m"
        volumeMounts:
        - mountPath: /etc/tls
          name: tls
        - mountPath: /var/secrets/google
          name: google-cloud-key
      volumes:
      - name: tls
        emptyDir: {}
      - name: google-cloud-key
        secret:
          secretName: pubsub-key
Output of kubectl describe svc red-falcon-lb:
Name:                     red-falcon-lb
Namespace:                redfalcon
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"red-falcon-lb","namespace":"redfalcon"},"spec":{"ports":[{"name":"https","port...
Selector:                 app=web-service-rf
Type:                     LoadBalancer
IP:                       10.43.245.9
LoadBalancer Ingress:     <EXTERNAL IP REDACTED>
Port:                     https 443/TCP
TargetPort:               443/TCP
NodePort:                 https 31524/TCP
Endpoints:                10.40.0.201:443,10.40.0.202:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age  From                Message
  ----    ------                ---  ----                -------
  Normal  EnsuringLoadBalancer  39m  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   38m  service-controller  Ensured load balancer
I figured out what it was...
My golang app was listening on localhost instead of 0.0.0.0. This meant that kubectl port-forwarding worked, but any service exposure didn't.
I had to add "--host 0.0.0.0" to the container command in my k8s manifest, and it then listened to requests from outside localhost.
My command ended up being:
"./server --port 8080 --host 0.0.0.0"