Kubernetes nginx ingress 0.22 not respecting cookie affinity annotation? - kubernetes

We recently upgraded to nginx-ingress 0.22. Before this upgrade, my service was using the old annotation prefix (ingress.kubernetes.io/affinity: cookie) and everything was working as I expected. However, upon the upgrade to 0.22, affinity stopped being applied to my service (I don't see sticky anywhere in the nginx.conf).
I looked at the docs and changed the prefix to nginx.ingress.kubernetes.io as shown in this example, but it didn't help.
Is there some debug log I can look at that will show the configuration parsing/building process? My guess is that some other setting is preventing this from working (I can't imagine the k8s team shipped a release with this feature completely broken), but I'm not sure what that could be.
My ingress config as shown by the k8s dashboard follows:
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "example-ingress",
"namespace": "master",
"selfLink": "/apis/extensions/v1beta1/namespaces/master/ingresses/example-ingress",
"uid": "01e81627-3b90-11e9-bb5a-f6bc944a4132",
"resourceVersion": "23345275",
"generation": 1,
"creationTimestamp": "2019-02-28T19:35:30Z",
"labels": {
},
"annotations": {
"ingress.kubernetes.io/backend-protocol": "HTTPS",
"ingress.kubernetes.io/limit-rps": "100",
"ingress.kubernetes.io/proxy-body-size": "100m",
"ingress.kubernetes.io/proxy-read-timeout": "60",
"ingress.kubernetes.io/proxy-send-timeout": "60",
"ingress.kubernetes.io/secure-backends": "true",
"ingress.kubernetes.io/secure-verify-ca-secret": "example-ingress-ssl",
"kubernetes.io/ingress.class": "nginx",
"nginx.ingress.kubernetes.io/affinity": "cookie",
"nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
"nginx.ingress.kubernetes.io/limit-rps": "100",
"nginx.ingress.kubernetes.io/proxy-body-size": "100m",
"nginx.ingress.kubernetes.io/proxy-buffer-size": "8k",
"nginx.ingress.kubernetes.io/proxy-read-timeout": "60",
"nginx.ingress.kubernetes.io/proxy-send-timeout": "60",
"nginx.ingress.kubernetes.io/secure-verify-ca-secret": "example-ingress-ssl",
"nginx.ingress.kubernetes.io/session-cookie-expires": "172800",
"nginx.ingress.kubernetes.io/session-cookie-max-age": "172800",
"nginx.ingress.kubernetes.io/session-cookie-name": "route",
"nginx.org/websocket-services": "example"
}
},
"spec": {
"tls": [
{
"hosts": [
"*.example.net"
],
"secretName": "example-ingress-ssl"
}
],
"rules": [
{
"host": "*.example.net",
"http": {
"paths": [
{
"path": "/",
"backend": {
"serviceName": "example",
"servicePort": 443
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{}
]
}
}
}

I tested sticky session affinity with NGINX Ingress version 0.22 and can confirm that it works just fine. While reproducing your configuration, I replaced the wildcard host host: "*.example.net" with a concrete one, e.g. host: "stickyingress.example.net", just to rule out the wildcard, and affinity worked again.
After some searching I found the explanation in this issue:
Wildcard hostnames are not supported by the Ingress spec (only SSL
wildcard certificates are)
even though that issue was opened for NGINX Ingress controller version 0.21.0.
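For reference, a minimal sketch of the Ingress with a concrete (non-wildcard) host and the cookie-affinity annotations; the host stickyingress.example.net is just a placeholder, and only the affinity-related annotations are shown:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: master
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  tls:
  - hosts:
    - "*.example.net"                  # a wildcard TLS certificate is still fine
    secretName: example-ingress-ssl
  rules:
  - host: stickyingress.example.net    # concrete host instead of "*.example.net"
    http:
      paths:
      - path: /
        backend:
          serviceName: example
          servicePort: 443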

Related

why kubernetes dashboard service return json

I am accessing my Kubernetes dashboard using this URL:
https://kubernetes.example.com/api/v1/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default
What confuses me is that the returned content is just a JSON string, not the login page. The JSON content is:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "kubernetes-dashboard",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard",
"uid": "884240d7-8f3f-41a4-a3a0-a89649545c82",
"resourceVersion": "133822",
"creationTimestamp": "2019-09-21T16:21:19Z",
"labels": {
"addonmanager.kubernetes.io/mode": "Reconcile",
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"kubernetes-dashboard\",\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"},\"type\":\"NodePort\"}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 443,
"targetPort": 8443,
"nodePort": 31085
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"clusterIP": "10.254.75.193",
"type": "NodePort",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
}
}
}
This is my nginx forwarding config:
upstream kubernetes{
server 172.19.104.231:8001;
}
This is my Kubernetes cluster proxy command:
kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='^*$'
You are accessing the Kubernetes API to get the kubernetes-dashboard Service resource manifest. This is the JSON that you get back.
If you want to access the Service, you need to access the Service itself, not the Kubernetes API. You can do this, for example, with port forwarding:
kubectl port-forward -n kube-system svc/kubernetes-dashboard 8443:443
And then access the Service (it serves HTTPS on that port with a self-signed certificate) with:
curl -k https://localhost:8443/
Kubernetes documentation has clearer instructions now on how to deploy and access their dashboard:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
I went to 127.0.0.1:8001 at first, and got the API JSON (as per the original question) before noticing the URL given under the kubectl proxy instruction:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
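For the kube-system deployment in the original question, the analogous service-proxy URL through kubectl proxy would presumably be (the https: prefix and trailing colon follow the API's /services/<scheme>:<name>:<port>/proxy/ convention, with the port left empty to use the default):
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/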

What is my Custom Resource Definition URL in Kubernetes

I am trying to hit my custom resource's endpoint in Kubernetes but cannot find an exact example of how Kubernetes exposes my custom resource definition in the Kubernetes API. If I hit the CustomResourceDefinitions API with this:
https://localhost:6443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions
I get back this response
"items": [
{
"metadata": {
"name": "accounts.stable.ibm.com",
"selfLink": "/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/accounts.stable.ibm.com",
"uid": "eda9d695-d3d4-11e9-900f-025000000001",
"resourceVersion": "167252",
"generation": 1,
"creationTimestamp": "2019-09-10T14:11:48Z",
"deletionTimestamp": "2019-09-12T22:26:20Z",
"finalizers": [
"customresourcecleanup.apiextensions.k8s.io"
]
},
"spec": {
"group": "stable.ibm.com",
"version": "v1",
"names": {
"plural": "accounts",
"singular": "account",
"shortNames": [
"acc"
],
"kind": "Account",
"listKind": "AccountList"
},
"scope": "Namespaced",
"versions": [
{
"name": "v1",
"served": true,
"storage": true
}
],
"conversion": {
"strategy": "None"
}
},
"status": {
"conditions": [
{
"type": "NamesAccepted",
"status": "True",
"lastTransitionTime": "2019-09-10T14:11:48Z",
"reason": "NoConflicts",
"message": "no conflicts found"
},
{
"type": "Established",
"status": "True",
"lastTransitionTime": null,
"reason": "InitialNamesAccepted",
"message": "the initial names have been accepted"
},
{
"type": "Terminating",
"status": "True",
"lastTransitionTime": "2019-09-12T22:26:20Z",
"reason": "InstanceDeletionCheck",
"message": "could not confirm zero CustomResources remaining: timed out waiting for the condition"
}
],
"acceptedNames": {
"plural": "accounts",
"singular": "account",
"shortNames": [
"acc"
],
"kind": "Account",
"listKind": "AccountList"
},
"storedVersions": [
"v1"
]
}
}
]
}
This leads me to believe I have correctly created the custom resource accounts. There are a number of examples that don't seem to be quite right, and I cannot find my resource in the Kubernetes REST API. I can work with my custom resource from kubectl, but I need to expose it via the REST API.
https://localhost:6443/apis/stable.example.com/v1/namespaces/default/accounts
returns
404 page not found
Whereas:
https://localhost:6443/apis/apiextensions.k8s.io/v1beta1/apis/stable.ibm.com/namespaces/default/accounts
returns
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {},
"code": 404
}
I have looked at https://docs.okd.io/latest/admin_guide/custom_resource_definitions.html and https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/
The exact URL would be appreciated.
A quite decent way to discover the REST API path of a K8s resource is to run a kubectl get command at a high verbosity level, as @Suresh Vishnoi mentioned in the comments:
kubectl get <api-resource> -v=8
As @Amit Kumar Gupta also confirmed, the correct URL for accessing your custom resource, based on your CRD JSON output, is the following:
https://<API_server>:port/apis/stable.ibm.com/v1/namespaces/default/accounts
Depending on your cluster's authentication method, you may use X509 client certs, a static token file, bearer tokens, or an HTTP API proxy to authenticate user requests against the Kubernetes API.
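For example, a minimal sketch of calling that endpoint (the bearer token in $TOKEN and the port 6443 are assumptions; kubectl get --raw reuses your kubeconfig credentials):
# Let kubectl handle authentication and hit the raw API path directly
kubectl get --raw /apis/stable.ibm.com/v1/namespaces/default/accounts

# Or call the API server yourself with a bearer token
curl -k -H "Authorization: Bearer $TOKEN" \
  https://localhost:6443/apis/stable.ibm.com/v1/namespaces/default/accounts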

Access pod localhost from Service

New to Kubernetes.
I have a private dockerhub image deployed on a Kubernetes instance. When I exec into the pod I can run the following so I know my docker image is running:
root@private-reg:/# curl 127.0.0.1:8085
Hello world!root@private-reg:/#
From the dashboard I can see my service has an external endpoint which ends with port 8085. When I try to load this I get 404. My service YAML is as below:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "test",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/test",
"uid": "a1a2ae23-339b-11e9-a3db-ae0f8069b739",
"resourceVersion": "3297377",
"creationTimestamp": "2019-02-18T16:38:33Z",
"labels": {
"k8s-app": "test"
}
},
"spec": {
"ports": [
{
"name": "tcp-8085-8085-7vzsb",
"protocol": "TCP",
"port": 8085,
"targetPort": 8085,
"nodePort": 31859
}
],
"selector": {
"k8s-app": "test"
},
"clusterIP": "******",
"type": "LoadBalancer",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "******"
}
]
}
}
}
Can anyone point me in the right direction?
What is the output of the command below?
curl <clusterIP>:8085
If you get the "Hello world" message, then the service is routing traffic correctly to the backend pod.
curl <HostIP>:<NodePort> should also work.
Most likely the service is not bound to the backend pod. Did you define the below label on the pod?
labels: {
"k8s-app": "test"
}
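For illustration, a minimal sketch of a Deployment whose pod template carries that label, so the Service selector k8s-app: test actually matches a pod (the names and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: test
  template:
    metadata:
      labels:
        k8s-app: test            # must match the Service's spec.selector
    spec:
      containers:
      - name: test
        image: registry.example.com/my-app:latest   # placeholder image
        ports:
        - containerPort: 8085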
You didn't mention what type of load balancer or cloud provider you are using, but if your load balancer provisioned correctly (which you should be able to confirm in your kube-controller-manager logs), then you should be able to access your service with what you see here:
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "******"
}
]
}
Then you could check by running:
$ curl <ip>:<whatever external port your lb is fronting>
If, as described in the other answers, the following commands work, then it's likely that the load balancer simply didn't provision:
$ curl <clusterIP for svc>:8085
and
$ curl <NodeIP>:31859 # NodePort
Take a look at the Service types in Kubernetes; there are a few:
https://kubernetes.io/docs/concepts/services-networking/service/
ClusterIP: exposes the service only inside the cluster.
NodePort: exposes the service on a given port on each node.
LoadBalancer: makes the service externally accessible through a load balancer.
I am assuming you are running on GKE. What kind of Service did you launch? You can confirm with the commands below.
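A sketch, assuming the Service and pods live in the default namespace:
# Show the Service type, cluster IP and any external IP
kubectl get svc test -o wide

# If ENDPOINTS is empty, the selector (k8s-app: test) matches no pods
kubectl get endpoints test

# List the pods that carry the expected label
kubectl get pods -l k8s-app=test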

Extract LoadBalancer name from kubectl output with go-template

I'm trying to write a go template that extracts the value of the load balancer. Using --go-template={{status.loadBalancer.ingress}} returns [map[hostname:GUID.us-west-2.elb.amazonaws.com]]% When I add .hostname to the template I get an error saying, "can't evaluate field hostname in type interface {}". I've tried using the range keyword, but I can't seem to get the syntax right.
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"creationTimestamp": "2018-07-30T17:22:12Z",
"labels": {
"run": "nginx"
},
"name": "nginx-http",
"namespace": "jx",
"resourceVersion": "495789",
"selfLink": "/api/v1/namespaces/jx/services/nginx-http",
"uid": "18aea6e2-941d-11e8-9c8a-0aae2cf24842"
},
"spec": {
"clusterIP": "10.100.92.49",
"externalTrafficPolicy": "Cluster",
"ports": [
{
"nodePort": 31032,
"port": 80,
"protocol": "TCP",
"targetPort": 8080
}
],
"selector": {
"run": "nginx"
},
"sessionAffinity": "None",
"type": "LoadBalancer"
},
"status": {
"loadBalancer": {
"ingress": [
{
"hostname": "GUID.us-west-2.elb.amazonaws.com"
}
]
}
}
}
As you can see from the JSON, the ingress element is an array. You can use the template function index to grab this array element.
Try:
kubectl get svc <name> -o=go-template --template='{{(index .status.loadBalancer.ingress 0 ).hostname}}'
This assumes, of course, that you're only provisioning a single load balancer; if you have multiple, you'll have to use range.
try this:
kubectl get svc <name> -o go-template='{{range .items}}{{range .status.loadBalancer.ingress}}{{.hostname}}{{printf "\n"}}{{end}}{{end}}'
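An alternative sketch using jsonpath instead of go-template (assuming the service name nginx-http and namespace jx from the JSON above):
kubectl get svc nginx-http -n jx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'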

Kubernetes 1.5.2 ping can't find ClusterIP of a Service from inside Cluster

I have set up a cluster with some deployments and services.
I can log into any of my pods and ping other pods on their pod-network IPs (172.x.x.x) successfully.
But when I try to ping the services' ClusterIP addresses from any of my pods, they never respond, so I can't access my services.
Below is my Kibana Service, and 10.254.77.135 is the IP I am trying to connect to from my other services. I also can't use the node port; it never responds.
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "kibana",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/kibana",
"uid": "21498caf-569c-11e7-a801-0050568fc023",
"resourceVersion": "3282683",
"creationTimestamp": "2017-06-21T16:10:23Z",
"labels": {
"component": "elk",
"role": "kibana"
}
},
"spec": {
"ports": [
{
"name": "http",
"protocol": "TCP",
"port": 5601,
"targetPort": 5601,
"nodePort": 31671
}
],
"selector": {
"k8s-app": "kibana"
},
"clusterIP": "10.254.77.135",
"type": "NodePort",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
}
Not sure if this is your problem, but ping doesn't work against a Service's ClusterIP because it is a virtual address implemented by iptables rules that only redirect packets on the service's ports to the endpoints (pods); nothing answers ICMP on that address.
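To check reachability, test the service's actual TCP port rather than ICMP, for example (a sketch, assuming the Kibana Service above lives in the default namespace):
# TCP checks against the ClusterIP and service port
curl -sv http://10.254.77.135:5601/
nc -zv 10.254.77.135 5601

# Verify the selector (k8s-app: kibana) actually matches running pods
kubectl get endpoints kibana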