Kubernetes Nginx-Ingress rule not working - kubernetes

I installed the ingress controller using the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
And the result of kubectl get pods --namespace=ingress-nginx is:
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-x4mss 0/1 Completed 0 28m
ingress-nginx-admission-patch-jn9cz 0/1 Completed 1 28m
ingress-nginx-controller-8574b6d7c9-k4jbj 1/1 Running 0 28m
For kubectl get service ingress-nginx-controller --namespace=ingress-nginx I get:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.106.134.128 localhost 80:32294/TCP,443:30997/TCP 30m
As for my deployment and service I have the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app
  name: app
  namespace: namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - image: image
          name: app
          ports:
            - containerPort: 5000
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: namespace
spec:
  type: ClusterIP
  selector:
    app: app
  ports:
    - name: app-service
      port: 5000
      targetPort: 5000
My Ingress is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: com.host.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 5000
My pod and service are both running fine.
The result of running kubectl describe pod command is:
Name: app-6b9f7fc47b-sh6nc
Namespace: namespace
Priority: 0
Service Account: default
Node: docker-desktop/192.168.65.4
Start Time: Wed, 30 Nov 2022 16:22:04 -0500
Labels: app=app
pod-template-hash=6b9f7fc47b
Status: Running
IP: 10.1.0.237
IPs:
IP: 10.1.0.237
Controlled By: ReplicaSet/app-6b9f7fc47b
Containers:
app:
Container ID: docker://ba77235d044c24b0f1391c56a2e8653a598a5c130ea4d15ff3b41cd96659fd4a
Image: image
Image ID: docker://sha256:912cb58ab1c3f2dd628c0b7db4d7f9ac6df4efbe4fcb86979b6a84614db8a675
Port: 5000/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 30 Nov 2022 16:22:05 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8pmjz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-8pmjz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29m default-scheduler Successfully assigned namespace/app-6b9f7fc47b-sh6nc to docker-desktop
Normal Pulled 29m kubelet Container image "image" already present on machine
Normal Created 29m kubelet Created container app
Normal Started 29m kubelet Started container app
Running the following command kubectl get ingress --all-namespaces yields:
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
namespace ingress nginx com.host.com 80 7s
I have tried using different ports, changing the controller, and using a LoadBalancer service type instead of ClusterIP, yet nothing makes the ingress rule work. I have also mapped the ingress controller's external IP to com.host.com in my hosts file. Furthermore, I am using docker-desktop as my node; however, I'm having the same issue on minikube as well. Any help is appreciated.

Your container's containerPort is 3000 while your service's targetPort is 5000. Make sure that your service's targetPort, your container's containerPort and, last but not least, the listening port of your app are all the same.
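For example, assuming the app actually listens on 5000, the aligned settings would look like this (a sketch using the names from your manifests):
# Deployment container: must match the port the app listens on
ports:
  - containerPort: 5000
---
# Service: targetPort must equal the containerPort / listen port
ports:
  - name: app-service
    port: 5000        # port the Service exposes inside the cluster
    targetPort: 5000  # port on the pod that traffic is forwarded to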
Update:
So it turned out that the problem is with name resolution:
com.host.com is not resolvable on your machine.
Using localhost or 127.0.0.1 did not match the host in the ingress rule, which is why the default 404 handler was chosen.
So either fix name resolution or remove the host from the ingress rule.
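A minimal sketch of the two options (the hosts entry assumes the controller is reachable on localhost, as shown in your service listing):
# Option 1: make the name resolvable locally
# (add to /etc/hosts, or C:\Windows\System32\drivers\etc\hosts on Windows)
127.0.0.1 com.host.com

# Option 2: drop the host so the rule matches any Host header
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 5000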

Bear in mind that the ingress controller will only update the IP of the ingress if the mapped service is up and ready. Your ingress shows that it is using app-service on port 5000 as its backend service, but your question does not show a listing of the pods in the namespace namespace, where your application pods appear to be. Could you please add that to the question? It is possible that either your pods aren't coming up successfully, or they're listening on a different port than 5000.
UPDATE
Please also try the following:
You're specifying that the container uses port 5000. But is it actually listening on 5000? Try:
kubectl exec -it -n namespace app-6b9f7fc47b-sh6nc -- bash (or sh, depending on what the default shell is for your app)
and then try curl localhost:5000 to see if it responds.
Check the ingress controller logs:
kubectl logs -f -n ingress-nginx <ingress-nginx-controller-pod-name>
and see if there are any log messages that might help you identify what's going on.
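Another quick check (a sketch, reusing the Service and namespace names from your manifests) is to bypass the ingress entirely and reach the Service through a port-forward; if this works but the ingress path doesn't, the problem is in the ingress layer rather than the app:
# forward local port 5000 to the Service
kubectl port-forward -n namespace svc/app-service 5000:5000

# in another terminal
curl -v http://localhost:5000/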

Related

Why can't load balancer connect to service in GKE?

I am deploying an application on a GKE cluster and trying to deploy a load balancer so that clients are able to call this application.
My application spec is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: api
  template:
    metadata:
      labels:
        name: api
    spec:
      serviceAccountName: docker-sa
      containers:
        - name: api
          image: zhaoyi0113/rancher-go-api
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    name: api
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
The application listens on port 8080, and the Service exposes port 80 and uses targetPort 8080 to connect to the application.
And I have an Ingress spec:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sidecar
  namespace: default
spec:
  defaultBackend:
    service:
      name: api
      port:
        number: 80
After deploying, I am able to see the IP address from kubectl get ingress. But when I send a request to that IP, I get a 502 error.
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
sidecar <none> * 107.178.245.193 80 28m
$ kubectl describe ingress sidecar
Name: sidecar
Labels: <none>
Namespace: default
Address: 107.178.245.193
Default backend: api:80 (10.0.1.14:8080)
Rules:
Host Path Backends
---- ---- --------
* * api:80 (10.0.1.14:8080)
Annotations: ingress.kubernetes.io/backends: {"k8s1-5ae02eec-default-api-80-28d7bbec":"Unknown"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-krllp0c9-default-sidecar-9a9n4r5m
ingress.kubernetes.io/target-proxy: k8s2-tp-krllp0c9-default-sidecar-9a9n4r5m
ingress.kubernetes.io/url-map: k8s2-um-krllp0c9-default-sidecar-9a9n4r5m
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 29m loadbalancer-controller UrlMap "k8s2-um-krllp0c9-default-sidecar-9a9n4r5m" created
Normal Sync 28m loadbalancer-controller TargetProxy "k8s2-tp-krllp0c9-default-sidecar-9a9n4r5m" created
Normal Sync 28m loadbalancer-controller ForwardingRule "k8s2-fr-krllp0c9-default-sidecar-9a9n4r5m" created
Normal IPChanged 28m loadbalancer-controller IP is now 107.178.245.193
Normal Sync 3m51s (x7 over 29m) loadbalancer-controller Scheduled for sync
Below is the curl error response:
$ curl -i http://107.178.245.193/health
HTTP/1.1 502 Bad Gateway
Content-Type: text/html; charset=UTF-8
Referrer-Policy: no-referrer
Content-Length: 332
Date: Tue, 16 Aug 2022 10:40:31 GMT
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
When I describe the service api, I got below error:
$ kubectl describe service api
Name: api
Namespace: default
Labels: <none>
Annotations: cloud.google.com/neg: {"ingress": true}
cloud.google.com/neg-status: {"network_endpoint_groups":{"80":"k8s1-29362abf-default-api-80-f2f1248a"},"zones":["australia-southeast2-a"]}
field.cattle.io/publicEndpoints: [{"port":30084,"protocol":"TCP","serviceName":"default:api","allNodes":true}]
Selector: name=api
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.3.253.54
IPs: 10.3.253.54
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30084/TCP
Endpoints: 10.0.1.17:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning AttachFailed 7s neg-controller Failed to Attach 2 network endpoint(s) (NEG "k8s1-29362abf-default-api-80-f2f1248a" in zone "australia-southeast2-a"): googleapi: Error 400: Invalid value for field 'resource.ipAddress': '10.0.1.18'. Specified IP address 10.0.1.18 doesn't belong to the (sub)network default or to the instance gke-gcp-cqrs-gcp-cqrs-node-pool-6b30ca5c-41q8., invalid
Warning RetryFailed 7s neg-controller Failed to retry NEG sync for "default/api-k8s1-29362abf-default-api-80-f2f1248a--/80-8080-GCE_VM_IP_PORT-L7": maximum retry exceeded
Does anyone know what could be the root cause?
I created a new GKE cluster and tried setting up the same resources you are configuring. However, I used the following image for the container: gcr.io/google-samples/hello-app:1.0. Everything else remains the same; I'm leaving the gcp-setup.yaml file I used below for reference.
gcp-setup.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: api
  template:
    metadata:
      labels:
        name: api
    spec:
      containers:
        - name: api
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    name: api
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sidecar
  namespace: default
spec:
  defaultBackend:
    service:
      name: api
      port:
        number: 80
There is also a small thing I had to change in your configuration, which is the annotations block: when I first tried to apply your configuration, I got the error below. Hence, I had to adjust the annotation entry to be annotations.
> kubectl apply -f gcp-setup.yaml
deployment.apps/api created
error: error validating "gcp-setup.yaml": error validating data: ValidationError(Service.metadata): unknown field "annotation" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
Afterwards, I was able to successfully provision all of the resources, and your configuration worked perfectly fine. It took around 3 minutes I believe for the Ingress resource to get an IP address assigned (masked as XX.XXX.XXX.XX below).
> kubectl get pods
NAME READY STATUS RESTARTS AGE
api-7d6fdd9845-8dwqc 1/1 Running 0 7m13s
> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api NodePort 10.36.4.150 <none> 80:30142/TCP 7m1s
kubernetes ClusterIP 10.36.0.1 <none> 443/TCP 12m
> kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
sidecar <none> * XX.XXX.XXX.XX 80 7m18s
> kubectl describe ingress
Name: sidecar
Namespace: default
Address: XX.XXX.XXX.XX
Default backend: api:80 (10.32.0.10:8080)
Rules:
Host Path Backends
---- ---- --------
* * api:80 (10.32.0.10:8080)
Annotations: ingress.kubernetes.io/backends: {"k8s1-05f3ce8b-default-api-80-82dd4d72":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-9k4w4ytx-default-sidecar-9m5g4dex
ingress.kubernetes.io/target-proxy: k8s2-tp-9k4w4ytx-default-sidecar-9m5g4dex
ingress.kubernetes.io/url-map: k8s2-um-9k4w4ytx-default-sidecar-9m5g4dex
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 7m13s loadbalancer-controller UrlMap "k8s2-um-9k4w4ytx-default-sidecar-9m5g4dex" created
Normal Sync 7m10s loadbalancer-controller TargetProxy "k8s2-tp-9k4w4ytx-default-sidecar-9m5g4dex" created
Normal Sync 6m59s loadbalancer-controller ForwardingRule "k8s2-fr-9k4w4ytx-default-sidecar-9m5g4dex" created
Normal IPChanged 6m59s loadbalancer-controller IP is now XX.XXX.XXX.XX
Normal Sync 28s (x6 over 8m3s) loadbalancer-controller Scheduled for sync
After the Ingress resource became healthy, I was able to navigate in my browser to the assigned IP address XX.XXX.XXX.XX and got a successful response back from the workload I deployed (gcr.io/google-samples/hello-app:1.0).
Browser Output
Hello, world!
Version: 1.0.0
Hostname: api-7d6fdd9845-8dwqc
As a conclusion, make sure to update your Service definition from metadata.annotation to metadata.annotations. It was the only change I had to make to get your configuration working. Furthermore, I recommend keeping resource definition validation turned on to make sure that you catch such errors when defining new resources.
If the error still persists, I would recommend running kubectl describe ingress sidecar and analyzing the output, assuming the issue is related to the Ingress resource definition.
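As a sketch of both suggestions (standard kubectl commands; gcp-setup.yaml refers to the manifest above):
# keep schema validation on when applying (avoid --validate=false)
kubectl apply --validate=true -f gcp-setup.yaml

# or surface schema errors without creating anything (kubectl 1.18+)
kubectl apply --dry-run=server -f gcp-setup.yaml

# inspect the Ingress, its backends and their health annotations
kubectl describe ingress sidecar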
EDIT1
To make sure that this is not a zone-related issue, I provisioned a VPC-native, Public cluster in the same zone that you are using (australia-southeast2-a). I then applied the same configuration, and it was successful, thus ruling out the zone-related topic.
Based on the additional information you included in the post, my best guess for some potential root causes for the Service error you're getting when running kubectl describe service would be:
Your GKE cluster is not VPC-native - I see this is a core requirement to be able to leverage NEG
Your GKE cluster has been provisioned as a Private cluster, and as a consequence, NEG tries to assign an IP address from the available Private subnet ranges. This would explain the 10.0.1.18 IP address that NEG tries to assign to the resource definition
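If it helps, a quick way to check both possibilities from the CLI (a sketch; CLUSTER_NAME is a placeholder for your cluster's name):
# VPC-native (alias IP) clusters report True here
gcloud container clusters describe CLUSTER_NAME \
    --zone australia-southeast2-a \
    --format="value(ipAllocationPolicy.useIpAliases)"

# Private clusters report True here
gcloud container clusters describe CLUSTER_NAME \
    --zone australia-southeast2-a \
    --format="value(privateClusterConfig.enablePrivateNodes)"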

Kubernetes (on-premises) Metallb LoadBalancer and sticky sessions

I installed one Kubernetes master and two Kubernetes workers on-premises.
Then I installed MetalLB as a LoadBalancer using the commands below:
$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 10.100.170.200-10.100.170.220
kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
I created my YAML files as below:
myapp-tst-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
        - name: myapp-tst
          image: myapp-tomcat
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
myapp-tst-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
    - name: myapp-tst-port
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None
myapp-tst-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: myapp-tst-service
              servicePort: myapp-tst-port
I run kubectl apply -f for all three files, and this is my result:
kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk 1/1 Running 0 4m53s 10.36.0.1 bcc-tst-docker02 <none> <none>
pod/myapp-tst-deployment-54474cd74-pwlr8 1/1 Running 0 4m53s 10.44.0.2 bca-tst-docker01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/myapp-tst-service LoadBalancer 10.110.184.237 10.100.170.15 80:30080/TCP 4m48s app=myapp-tst,tier=backend
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d22h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/myapp-tst-deployment 2/2 2 2 4m53s myapp-tst mferraramiki/myapp-test app=myapp-tst
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74 2 2 2 4m53s myapp-tst myapp/myapp-test app=myapp-tst,pod-template-hash=54474cd74
But when I try to connect using the LB external IP (10.100.170.15), the request is routed to one pod, and if I refresh or open a new tab (on the same URL in the same browser), the request is routed to another pod.
When a user enters the URL in the browser, they must stay connected to one specific pod for the whole session, not be switched to other pods.
How can I solve this problem, if it is possible?
On my VMs I resolved this with sticky sessions; how can I enable that on the LB or in the Kubernetes components?
In the myapp-tst-service.yaml file, "sessionAffinity" is set to "None".
You should try setting it to "ClientIP".
From page https://kubernetes.io/docs/concepts/services-networking/service/ :
"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."

Google Kubernetes Engine Ingress doesn't work

I created an Ingress on GKE following the guide in the 'Kubernetes in Action' book, but the Ingress doesn't work: it can't be accessed from the public IP address of the Ingress.
Create the ReplicaSet to create the pods.
Create the Service (following the NodePort method in 'Kubernetes in Action').
Create the Ingress.
The ReplicaSet, Service, and Ingress are created successfully, the NodePort can be accessed from the public IP addresses, and there is no UNHEALTHY backend in the Ingress.
replicaset:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
        - name: kubia
          image: sonyfaye/kubia
Service:
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30123
  selector:
    app: kubia
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
    - host: kubia.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: kubia-nodeport
              servicePort: 80
The nodeport itself can be accessed from public IP addresses.
C:\kube>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 8d
kubia-nodeport NodePort 10.59.253.10 <none> 80:30123/TCP 20h
C:\kube>kubectl get node
NAME STATUS ROLES AGE VERSION
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6
C:\kube>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6 10.140.0.17 35.201.224.238 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6 10.140.0.18 35.229.152.12 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6 10.140.0.16 34.80.225.64 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
C:\kube>curl http://34.80.225.64:30123
You've hit kubia-j2lnr
But the ingress can't be accessed from outside.
hosts file:
34.98.92.110 kubia.example.com
C:\kube>kubectl describe ingress
Name: kubia
Namespace: default
Address: 34.98.92.110
Default backend: default-http-backend:80 (10.56.0.7:8080)
Rules:
Host Path Backends
---- ---- --------
kubia.example.com
/ kubia-nodeport:80 (10.56.0.14:8080,10.56.1.6:8080,10.56.3.4:8080)
Annotations:
ingress.kubernetes.io/backends: {"k8s-be-30123--c4addd497b1e0a6d":"HEALTHY","k8s-be-30594--c4addd497b1e0a6d":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/target-proxy: k8s-tp-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/url-map: k8s-um-default-kubia--c4addd497b1e0a6d
Events:
<none>
C:\kube>curl http://kubia.example.com
curl: (7) Failed to connect to kubia.example.com port 80: Timed out
C:\kube>telnet kubia.example.com 80
Connecting To kubia.example.com...
C:\kube>telnet 34.98.92.110 80
Connecting To 34.98.92.110...Could not open connection to the host, on port 80: Connect failed
Tried from the intranet:
curl 34.98.92.110 gets some result, and port 80 of 34.98.92.110 is accessible from the intranet.
C:\kube>kubectl exec -it kubia-lrt9x bash
root@kubia-lrt9x:/# curl http://kubia.example.com
curl: (6) Could not resolve host: kubia.example.com
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://10.56.0.7:8080
default backend - 404
Does anybody know how to debug this?
The NodePort has been added to the firewall, otherwise the NodePort would not be accessible. The Ingress IP does not seem to need to be added to the firewall.
Try exposing the ReplicaSet to be able to connect from the outside:
$ kubectl expose rs hello-world --type=NodePort --name=my-service
Remember to first delete the kubia-nodeport service, update the service section of the Ingress configuration file accordingly (see the sketch below), and then apply the changes using the kubectl apply command.
More information can be found here: exposing-externalip.
Useful doc: kubectl-expose.
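As a sketch (hypothetical, assuming the ReplicaSet from the question, kubia, is exposed as my-service and that kubectl expose picked up the container port 8080), the Ingress backend would then point at the new service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
    - host: kubia.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service   # service created by kubectl expose
              servicePort: 8080         # adjust if --port was specified on expose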

K8S - Not able to see alerts via alertmanager

I have the Prometheus operator, which is working as expected:
https://github.com/coreos/prometheus-operator
Now I want to set up Alertmanager from scratch.
After reading the docs, I came up with the YAMLs below.
But the problem is that when I open the UI at
http://localhost:9090/alerts
nothing is shown. Any idea what I'm missing here?
I use port forwarding...
These are all the config files I've applied to my k8s cluster.
I just want to do a simple test to see that it is working, and then extend it to our needs...
alertmanger_main.yml
---
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: main
  labels:
    alertmanager: main
spec:
  replicas: 3
  version: v0.14.0
alertmanger_service.yml
apiVersion: v1
kind: Service
metadata:
  name: alertmanager-main
spec:
  type: LoadBalancer
  ports:
    - name: web
      port: 9093
      protocol: TCP
      targetPort: web
  selector:
    alertmanager: main
testalert.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: prometheus-example-rules
  labels:
    role: prometheus-rulefiles
    prometheus: prometheus
data:
  example.rules.yaml: |+
    groups:
      - name: ./example.rules
        rules:
          - alert: ExampleAlert
            expr: vector(1)
alertmanager.yml
global:
  resolve_timeout: 5m
route:
  group_by: ['job']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: 'webhook'
receivers:
  - name: 'webhook'
    webhook_configs:
      - url: 'http://alertmanagerwh:30500/'
and to create secret I use
kubectl create secret generic alertmanager-main --from-file=alertmanager.yaml
What I need is some basic alerts in K8s. I followed the documentation but didn't find any good step-by-step tutorial.
To check my system, here is the monitoring namespace:
~  kubectl get pods -n monitoring 13.4m  Sun Feb 17 18:48:16 2019
NAME READY STATUS RESTARTS AGE
kube-state-metrics-593czc6b4-mrtkb 2/2 Running 0 12h
monitoring-grafana-771155cbbb-scqvx 1/1 Running 0 12h
prometheus-operator-79f345dc67-nw5zc 1/1 Running 0 12h
prometheus-prometheus-0 3/3 Running 1 12h
~  kubectl get svc -n monitoring 536ms  Sun Feb 17 21:04:51 2019
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-main NodePort 100.22.170.666 <none> 9093:30904/TCP 4m53s
kube-state-metrics ClusterIP 100.34.212.596 <none> 8080/TCP 4d7h
monitoring-grafana ClusterIP 100.67.230.884 <none> 80/TCP 4d7h
prometheus-operated ClusterIP None <none> 9090/TCP 4d7h
I've also now changed the service to LoadBalancer, and I try to access it like this:
~  kubectl get svc -n monitoring 507ms  Sun Feb 17 21:23:56 2019
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-main LoadBalancer 100.22.170.666 38.482.152.331 9093:30904/TCP 23m
when I hit the browser with
38.482.152.331:9093
38.482.152.331:30904
nothing happens...
When you consider using Alertmanager, besides the general configuration and applying alert rules, Alertmanager needs to be integrated with a Prometheus server. The Prometheus instance can then evaluate the incoming series of events, and once it detects that a rule's condition is met, it fires an alert to the configured Alertmanager.
In order to enable alerting, it might be necessary to append the following config to the Prometheus instance:
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - 'alertmanagerIP:9093'
Specifically, for the Alertmanager implementation in CoreOS, you can follow the steps described in the official Alerting documentation; however, below you can find an example of the Prometheus alerting configuration taken from the mentioned guideline:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  replicas: 2
  alerting:
    alertmanagers:
      - namespace: default
        name: alertmanager-example
        port: web
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      role: prometheus-rulefiles
      prometheus: example
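Once the Prometheus object references the Alertmanager, a quick way to verify the wiring (a sketch; the service names below are taken from the listings in your question, so adjust if yours differ) is to port-forward both UIs and check that ExampleAlert shows up and fires:
# Prometheus UI: ExampleAlert should be listed (and firing) under /alerts
kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090
# then browse to http://localhost:9090/alerts

# Alertmanager UI: the fired alert should appear here
kubectl port-forward -n monitoring svc/alertmanager-main 9093:9093
# then browse to http://localhost:9093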

kube-proxy Couldn't find an endpoint for default/tomcat:http: missing service entry

I use CentOS 7.
My Pod:
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
spec:
  containers:
    - image: ec2-73-99-254-8.eu-central-1.compute.amazonaws.com:5000/tom
      name: tomcat
      command: ["sh", "-c", "/opt/tomcat/bin/deploy-and-run.sh"]
      volumeMounts:
        - mountPath: /maven
          name: app-volume
      ports:
        - containerPort: 8080
  volumes:
    - name: app-volume
      hostPath:
        path: /maven
My Service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    name: tomcat
The Services look like:
# kubectl get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.254.0.1 <none> 443/TCP <none> 14h
tomcat 10.254.206.26 <none> 80/TCP name=tomcat 13h
And Pods:
# kubectl get pod
NAME READY STATUS RESTARTS AGE
tomcat 1/1 Running 0 13h
And when I run Curl:
curl 10.254.206.26
curl: (56) Recv failure: Connection reset by peer
Kube-proxy logs at that moment show something like this:
kube-proxy[22273]: Couldn't find an endpoint for default/tomcat:http: missing service entry
kube-proxy[22273]: Failed to connect to balancer: missing service entry
But when I run curl directly to the pod ip address and port 8080 - it works fine.
When I run command kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 195.234.109.11:6443 14h
tomcat <none> 14h
The ENDPOINTS field showing <none> for tomcat in this output looks strange.
What's wrong?
Services work by matching labels. You are attempting to match based on the name of your pod. Try changing the metadata for your pod to
metadata:
  name: tomcat
  labels:
    name: tomcat
and see if that helps.
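After re-creating the Pod with that label, a quick sanity check (a sketch using standard kubectl commands) is to confirm the Service has picked up an endpoint before retrying curl against the ClusterIP:
# the Service selector (name=tomcat) should now match the Pod
kubectl get pods -l name=tomcat -o wide

# ENDPOINTS should now show the pod IP on port 8080 instead of <none>
kubectl get endpoints tomcat

# then retry via the Service's ClusterIP
curl 10.254.206.26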