K8S - not able to see alerts via Alertmanager - Kubernetes

I've got the Prometheus Operator, which is working as expected:
https://github.com/coreos/prometheus-operator
Now I want to set up Alertmanager from scratch.
After reading the docs I came up with the YAMLs below,
but the problem is that when I open the UI,
nothing is shown. Any idea what I'm missing here?
http://localhost:9090/alerts
I use port forwarding ...
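The port-forward presumably looks something like this (the exact command isn't quoted above; the pod name is taken from the kubectl get pods listing further down):
~ kubectl port-forward -n monitoring prometheus-prometheus-0 9090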
These are all the config files I've applied to my K8s cluster.
I just want to run a simple test to see that it's working, and then extend it to our needs...
alertmanger_main.yml
---
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: main
  labels:
    alertmanager: main
spec:
  replicas: 3
  version: v0.14.0
alertmanger_service.yml
apiVersion: v1
kind: Service
metadata:
  name: alertmanager-main
spec:
  type: LoadBalancer
  ports:
  - name: web
    port: 9093
    protocol: TCP
    targetPort: web
  selector:
    alertmanager: main
testalert.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: prometheus-example-rules
  labels:
    role: prometheus-rulefiles
    prometheus: prometheus
data:
  example.rules.yaml: |+
    groups:
    - name: ./example.rules
      rules:
      - alert: ExampleAlert
        expr: vector(1)
alertmanager.yml
global:
  resolve_timeout: 5m
route:
  group_by: ['job']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: 'webhook'
receivers:
- name: 'webhook'
  webhook_configs:
  - url: 'http://alertmanagerwh:30500/'
and to create the secret I use:
kubectl create secret generic alertmanager-main --from-file=alertmanager.yaml
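One thing worth double-checking here: the Prometheus Operator looks for a Secret named alertmanager-<name> containing a key called alertmanager.yaml, and the Secret has to live in the same namespace as the Alertmanager resource. The file name passed to --from-file becomes the key, so a hedged variant of the command that pins both explicitly (assuming the monitoring namespace used elsewhere in this post) would be:
kubectl create secret generic alertmanager-main -n monitoring \
  --from-file=alertmanager.yaml=alertmanager.yaml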
What I need is some basic alerts in K8s. I followed the documentation but didn't find any good step-by-step tutorial.
To check my system, in the monitoring namespace:
~ kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
kube-state-metrics-593czc6b4-mrtkb     2/2     Running   0          12h
monitoring-grafana-771155cbbb-scqvx    1/1     Running   0          12h
prometheus-operator-79f345dc67-nw5zc   1/1     Running   0          12h
prometheus-prometheus-0                3/3     Running   1          12h

~ kubectl get svc -n monitoring
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
alertmanager-main     NodePort    100.22.170.666   <none>        9093:30904/TCP   4m53s
kube-state-metrics    ClusterIP   100.34.212.596   <none>        8080/TCP         4d7h
monitoring-grafana    ClusterIP   100.67.230.884   <none>        80/TCP           4d7h
prometheus-operated   ClusterIP   None             <none>        9090/TCP         4d7h
I've also now changed the service type to LoadBalancer, and I try to access it like this:
~ kubectl get svc -n monitoring
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
alertmanager-main   LoadBalancer   100.22.170.666   38.482.152.331   9093:30904/TCP   23m
but when I hit the browser with
38.482.152.331:9093
38.482.152.331:30904
nothing happens...
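A first sanity check would be to confirm that the operator actually created the Alertmanager pods, since none show up in the kubectl get pods listing above (the label selector is an assumption based on how the operator labels the pods it creates):
kubectl get pods -n monitoring -l alertmanager=main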

Besides the general configuration and the alert rules, Alertmanager has to be integrated with a Prometheus server. The Prometheus instance evaluates the alerting rules against the incoming series and, whenever a rule fires, sends an alert to the configured Alertmanager.
In order to enable alerting, it might be necessary to append the following config to the Prometheus instance:
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - 'alertmanagerIP:9093'
Specifically, for an Alertmanager managed by the CoreOS Prometheus Operator, you can follow the steps described in the official Alerting documentation; below is an example Prometheus alerting configuration taken from that guide:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  replicas: 2
  alerting:
    alertmanagers:
    - namespace: default
      name: alertmanager-example
      port: web
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      role: prometheus-rulefiles
      prometheus: example
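Once Prometheus is pointed at Alertmanager, one way to verify the wiring (assuming the same port-forward to Prometheus on localhost:9090 as in the question) is to ask Prometheus which Alertmanagers it has discovered:
# should list the active Alertmanager endpoints
curl http://localhost:9090/api/v1/alertmanagers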

Related

Kubernetes Nginx-Ingress rule not working

I installed the ingress controller using the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
And the result of kubectl get pods --namespace=ingress-nginx is:
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-x4mss        0/1     Completed   0          28m
ingress-nginx-admission-patch-jn9cz         0/1     Completed   1          28m
ingress-nginx-controller-8574b6d7c9-k4jbj   1/1     Running     0          28m
For kubectl get service ingress-nginx-controller --namespace=ingress-nginx I get:
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.106.134.128   localhost     80:32294/TCP,443:30997/TCP   30m
As for my deployment and service I have the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app
  name: app
  namespace: namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - image: image
        name: app
        ports:
        - containerPort: 5000
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: namespace
spec:
  type: ClusterIP
  selector:
    app: app
  ports:
  - name: app-service
    port: 5000
    targetPort: 5000
My Ingress is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: com.host.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 5000
My pod and service are both running fine.
The result of running the kubectl describe pod command is:
Name:             app-6b9f7fc47b-sh6nc
Namespace:        namespace
Priority:         0
Service Account:  default
Node:             docker-desktop/192.168.65.4
Start Time:       Wed, 30 Nov 2022 16:22:04 -0500
Labels:           app=app
                  pod-template-hash=6b9f7fc47b
Status:           Running
IP:               10.1.0.237
IPs:
  IP:           10.1.0.237
Controlled By:  ReplicaSet/app-6b9f7fc47b
Containers:
  app:
    Container ID:   docker://ba77235d044c24b0f1391c56a2e8653a598a5c130ea4d15ff3b41cd96659fd4a
    Image:          image
    Image ID:       docker://sha256:912cb58ab1c3f2dd628c0b7db4d7f9ac6df4efbe4fcb86979b6a84614db8a675
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 30 Nov 2022 16:22:05 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8pmjz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-8pmjz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  29m   default-scheduler  Successfully assigned namespace/app-6b9f7fc47b-sh6nc to docker-desktop
  Normal  Pulled     29m   kubelet            Container image "image" already present on machine
  Normal  Created    29m   kubelet            Created container app
  Normal  Started    29m   kubelet            Started container app
Running kubectl get ingress --all-namespaces yields:
NAMESPACE   NAME      CLASS   HOSTS          ADDRESS   PORTS   AGE
namespace   ingress   nginx   com.host.com             80      7s
I have tried using different ports, changing the controller, and using a LoadBalancer type instead of ClusterIP, yet nothing works when it comes to making the ingress rule work. I have set the ingress controller's external IP as com.host.com in my hosts file as well. Furthermore, I am using docker-desktop as my node; however, I'm having this issue on minikube as well. Any help is appreciated.
Your container's containerPort is 3000 while your service's targetPort is 5000. Make sure that your service's targetPort, your container's containerPort, and, last but not least, the listening port of your app are all the same.
Update:
So I guess it turned out that the problem is with name resolution:
com.host.com is not resolvable on your machine.
Using localhost or 127.0.0.1 didn't match the host in the ingress rule, which is why the default 404 handler was chosen.
So either fix name resolution or remove the host from the ingress rule.
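If you go the name-resolution route, a hosts entry along these lines should do it (assuming docker-desktop, where the controller's external IP shows up as localhost):
# /etc/hosts
127.0.0.1 com.host.com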
Bear in mind that the ingress controller will only update the IP of the ingress if the mapped service is up and ready. Your ingress shows that it's using app-service on port 5000 as its backend service, but your question does not show a listing of the pods in the namespace namespace, where your application pods appear to be. Please could you add that to the question -- it is possible that either your pods aren't coming up successfully, or they're listening on a different port than 5000.
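A quick way to check whether the service has any ready backends at all (names taken from the question):
kubectl get endpoints app-service -n namespace
If the ENDPOINTS column is empty, the selector doesn't match any ready pod and the ingress has nothing to route to.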
UPDATE
Please also try the following:
You're specifying that the container uses port 5000, but is it actually using 5000? Try:
kubectl exec -it -n namespace app-6b9f7fc47b-sh6nc -- bash (or sh, depending on what the default shell is for your app)
and then try curl localhost:5000 to see if it responds.
Check the ingress controller logs:
kubectl logs -f -n ingress-nginx and see if there are any log messages that might help you identify what's going on.

Kubernetes (on-premises) Metallb LoadBalancer and sticky sessions

I installed one Kubernetes master and two Kubernetes workers on-premises.
After that, I installed MetalLB as a LoadBalancer using the commands below:
$ kubectl edit configmap -n kube-system kube-proxy

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220
kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
I created my YAML files as below:
myapp-tst-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
      - name: myapp-tst
        image: myapp-tomcat
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
myapp-tst-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None
myapp-tst-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: myapp-tst-service
          servicePort: myapp-tst-port
I ran kubectl apply -f for all three files, and this is my result:
kubectl get all -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP          NODE               NOMINATED NODE   READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk   1/1     Running   0          4m53s   10.36.0.1   bcc-tst-docker02   <none>           <none>
pod/myapp-tst-deployment-54474cd74-pwlr8   1/1     Running   0          4m53s   10.44.0.2   bca-tst-docker01   <none>           <none>

NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE     SELECTOR
service/myapp-tst-service   LoadBalancer   10.110.184.237   10.100.170.15   80:30080/TCP   4m48s   app=myapp-tst,tier=backend
service/kubernetes          ClusterIP      10.96.0.1        <none>          443/TCP        6d22h   <none>

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                    SELECTOR
deployment.apps/myapp-tst-deployment   2/2     2            2           4m53s   myapp-tst    mferraramiki/myapp-test   app=myapp-tst

NAME                                             DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES             SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74   2         2         2       4m53s   myapp-tst    myapp/myapp-test   app=myapp-tst,pod-template-hash=54474cd74
But when I try to connect using the LB external IP (10.100.170.15), the system directs the browser request to one pod; if I refresh or open a new tab (on the same URL), the request is redirected to another pod.
I need a user who enters the URL in the browser to stay connected to one specific pod for the whole session, not to switch to other pods.
How can I solve this problem, if it is possible?
On my VMs I resolved this issue using sticky sessions; how can I enable them on the LB or in the Kubernetes components?
In the myapp-tst-service.yaml file, sessionAffinity is set to None.
You should try setting it to ClientIP.
From https://kubernetes.io/docs/concepts/services-networking/service/:
"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."

Kubernetes Ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io"

Playing around with K8s and Ingress in a local minikube setup. Creating an Ingress from a YAML file with API version networking.k8s.io/v1 fails; see the output below.
Executing
> kubectl apply -f ingress.yaml
returns
Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
in local minikube environment with hyperkit as vm driver.
Here is the ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mongodb-express-ingress
  namespace: hello-world
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: mongodb-express.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mongodb-express-service-internal
            port:
              number: 8081
Here is the mongodb-express deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-express
  namespace: hello-world
  labels:
    app: mongodb-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb-express
  template:
    metadata:
      labels:
        app: mongodb-express
    spec:
      containers:
      - name: mongodb-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongodb-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongodb-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap
              key: mongodb_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-express-service-external
  namespace: hello-world
spec:
  selector:
    app: mongodb-express
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-express-service-internal
  namespace: hello-world
spec:
  selector:
    app: mongodb-express
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
Some more information:
> kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
> minikube version
minikube version: v1.19.0
commit: 15cede53bdc5fe242228853e737333b09d4336b5
> kubectl get all -n hello-world
NAME                                   READY   STATUS    RESTARTS   AGE
pod/mongodb-68d675ddd7-p4fh7           1/1     Running   0          3h29m
pod/mongodb-express-6586846c4c-5nfg7   1/1     Running   6          3h29m

NAME                                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/mongodb-express-service-external    LoadBalancer   10.106.185.132   <pending>     8081:30000/TCP   3h29m
service/mongodb-express-service-internal    ClusterIP      10.103.122.120   <none>        8081/TCP         3h3m
service/mongodb-service                     ClusterIP      10.96.197.136    <none>        27017/TCP        3h29m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb           1/1     1            1           3h29m
deployment.apps/mongodb-express   1/1     1            1           3h29m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-68d675ddd7           1         1         1       3h29m
replicaset.apps/mongodb-express-6586846c4c   1         1         1       3h29m
> minikube addons enable ingress
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
> kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-2bn8h        0/1     Completed   0          4h4m
pod/ingress-nginx-admission-patch-vsdqn         0/1     Completed   0          4h4m
pod/ingress-nginx-controller-5d88495688-n6f67   1/1     Running     0          4h4m

NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.111.176.223   <none>        80:32740/TCP,443:30636/TCP   4h4m
service/ingress-nginx-controller-admission   ClusterIP   10.97.107.77     <none>        443/TCP                      4h4m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           4h4m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-5d88495688   1         1         1       4h4m

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           7s         4h4m
job.batch/ingress-nginx-admission-patch    1/1           9s         4h4m
However, it works with the beta API version, i.e.:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mongodb-express-ingress-deprecated
  namespace: hello-world
spec:
  rules:
  - host: mongodb-express.local
    http:
      paths:
      - path: /
        backend:
          serviceName: mongodb-express-service-internal
          servicePort: 8081
Any help very much appreciated.
I had the same issue. I successfully fixed it using:
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
and then applying the YAML files:
kubectl apply -f ingress_file.yaml
I had the same problem as you; see this issue: https://github.com/kubernetes/minikube/issues/11121.
Two ways you can try:
1. Upgrade to the new version, or go back to an old version.
2. Do a strange thing like what balnbibarbi said (see below).
2. The Strange Thing
# Run without --addons=ingress
sudo minikube start --vm-driver=none #--addons=ingress
# install external ingress-nginx
sudo helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
sudo helm repo update
sudo helm install ingress-nginx ingress-nginx/ingress-nginx
# expose your services
And then you will find your Ingress lacks endpoints. Then run:
sudo minikube addons enable ingress
After a few minutes, the endpoints appear.
Problem
If you search Google for examples using the ingress addon, you will see that the listing below is missing the ingress pods.
root@ubuntu:~# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-xnmx2          1/1     Running   1          4h40m
etcd-ubuntu                      1/1     Running   1          4h40m
kube-apiserver-ubuntu            1/1     Running   1          4h40m
kube-controller-manager-ubuntu   1/1     Running   1          4h40m
kube-proxy-k9lnl                 1/1     Running   1          4h40m
kube-scheduler-ubuntu            1/1     Running   2          4h40m
storage-provisioner              1/1     Running   3          4h40m
Ref: Expecting apiVersion - networking.k8s.io/v1 instead of extensions/v1beta1
TL;DR
kubectl explain predates a lot of the generic resource parsing logic, so it has a dedicated --api-version flag. This should do what you want:
kubectl explain ingresses --api-version=networking.k8s.io/v1
In my case, it was a previous deployment of NGINX. Check with:
kubectl get ValidatingWebhookConfiguration -A
If there is more than one NGINX webhook configuration, delete the older one.
You can also get this error on GKE private clusters as a firewall rule is not configured automatically.
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules
https://github.com/kubernetes/kubernetes/issues/79739
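For the GKE private-cluster case, the fix from those links boils down to a firewall rule that lets the control plane reach the admission webhook on the nodes over port 8443; roughly like this, where every value is a placeholder to substitute for your cluster:
gcloud compute firewall-rules create allow-master-to-ingress-webhook \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --source-ranges=MASTER_IPV4_CIDR \
    --target-tags=NODE_TAG \
    --allow=tcp:8443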

Kubernetes cluster mode: what is the ingress URL?

Before, I had one single VM (CentOS 7.4, hostname kube-2.novalocal, IP 172.50.10.10). I installed both the master and the kubelet on it, and I could access my ingress at 172.50.10.10/uaa/login. Inside the cluster I use ClusterIP services, and I deployed ingress-nginx as a NodePort service. Since it does redirects/rewrites, I changed the nodePort to 80 so the port can be omitted from URLs. The service URL is http://172.50.10.10/uaa/login, and it works fine.
Now I am adding two nodes (kube-1.novalocal/172.50.10.1 and kube-3.novalocal/172.50.10.4). I can see the ingress is deployed by Kubernetes on kube-3.novalocal, and it restarts frequently, almost every minute. I also don't know the ingress service URL: is it http://kube-2.novalocal/uaa/login or http://kube-3.novalocal/uaa/login? And why does it restart so frequently?
I've put all the related YAML files, log output, console command output, and dashboard information below.
[centos@kube-2 ingress]$ sudo kubectl get po
NAME                                     READY   STATUS    RESTARTS   AGE
gearbox-rack-api-gateway                 1/1     Running   0          15h
gearbox-rack-config-server               1/1     Running   0          15h
gearbox-rack-eureka-server               1/1     Running   0          15h
gearbox-rack-rabbitmq                    1/1     Running   0          15h
gearbox-rack-redis                       1/1     Running   0          15h
gearbox-rack-uaa-service                 1/1     Running   0          15h
gearbox-rack-zipkin-server               1/1     Running   0          15h
ingress-nginx-5c6d78668c-brlsv           1/1     Running   279        15h
nginx-default-backend-6647766887-nbwhl   1/1     Running   0          15h
Accessing the ingress URL on kube-3.novalocal (172.50.10.4):
[centos@kube-2 ingress]$ curl http://172.50.10.4/uaa/login
curl: (7) Failed connect to 172.50.10.4:80; Connection refused
ingress-nginx logs:
[centos@kube-2 ingress]$ sudo kubectl logs ingress-nginx-5c6d78668c-frb2r
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.15.0
  Build:      git-df61bd7
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------

W0703 02:16:35.966965       7 client_config.go:533] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0703 02:16:35.967483       7 main.go:158] Creating API client for https://10.96.0.1:443
(Dashboard screenshots omitted.)
ingress-nginx-res.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: gearbox-rack-api-gateway
servicePort: 5555
ingress-nginx-ctl.yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    nodePort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      serviceAccount: lb
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerExtraArgs:
  service-node-port-range: 80-32767
networking:
  podSubnet: 192.168.0.0/16
kubernetesVersion: v1.10.3
featureGates:
  CoreDNS: true
=================================================
Edit 2
The ingress-nginx controller was updated to 0.16.2 with the same deployment as before; ingress-nginx still restarts roughly every two minutes.
NAME                             READY   STATUS             RESTARTS   AGE   IP              NODE
ingress-nginx-59b74f9684-lgm2k   0/1     CrashLoopBackOff   9          20m   192.168.179.5   kube-3.novalocal
Usage of NodePort assumes that you are able to reach all your nodes, so you should be able to use both http://kube-2.novalocal/uaa/login and http://kube-3.novalocal/uaa/login.
You can find more information about NodePort here: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
"NodePort: Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>."
Regarding your frequent ingress-nginx restarts: try to upgrade your nginx controller to the latest version and come back with the results. You can find it here: https://github.com/kubernetes/ingress-nginx
Also, take a look at this issue describing a similar problem: https://github.com/kubernetes/ingress-nginx/issues/2450
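Before or while upgrading, it may also help to see why the container keeps getting killed; with restarts like this, the previous container's logs and the pod events are the usual starting point (pod name copied from the listing in the question):
kubectl logs --previous ingress-nginx-59b74f9684-lgm2k
kubectl describe pod ingress-nginx-59b74f9684-lgm2k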
The root cause could be the deployment hardware environment. With my VirtualBox setup there are no restarts; when I use a company VM based on OpenStack, the ingress-nginx controller always restarts.

Minikube unable to expose service with yaml

Trying to run a local registry. I have the following configuration:
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
    role: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:latest
        ports:
        - containerPort: 5000
        volumeMounts:
        - mountPath: '/registry'
          name: registry-volume
      volumes:
      - name: registry-volume
        hostPath:
          path: '/data'
          type: Directory
Service:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    role: registry
  type: NodePort
  ports:
  - name: registry
    nodePort: 31001
    port: 5000
    protocol: TCP
It all works well when I create the deployment/service; kubectl shows the status as Running for both the service and the deployment:
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/registry   1         1         1            1           30m

NAME                     DESIRED   CURRENT   READY   AGE
rs/registry-6549cbc974   1         1         1       30m
NAME                           READY   STATUS    RESTARTS   AGE
po/registry-6549cbc974-mmqpj   1/1     Running   0          30m

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          37m
svc/registry     NodePort    10.0.0.6     <none>        5000:31001/TCP   7m
However, when I try to get the external URL for the service using minikube service registry --url, it times out/fails with: Waiting, endpoint for service is not ready yet....
When I delete the service (keeping the deployment intact) and manually expose the deployment using kubectl expose deployment registry --type=NodePort, I am able to get it working.
Minikube log can be found here.
You need to specify the correct spec.selector in the registry service manifest:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    app: registry
  type: NodePort
  ports:
  - name: registry
    nodePort: 31001
    port: 5000
    protocol: TCP
Now the registry service correctly points to the registry pod:
$ kubectl get endpoints
NAME         ENDPOINTS         AGE
kubernetes   10.0.2.15:8443    14m
registry     172.17.0.4:5000   4s
And you can get the external URL as well:
$ minikube service registry --url
http://192.168.99.106:31001
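For completeness, the underlying mismatch: the original Service selected role: registry, but the pod template only carries the label app: registry, so the Service matched no pods and had no endpoints. A quick way to compare the two sides (the jsonpath expression is illustrative; output shapes vary by kubectl version):
kubectl get svc registry -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels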