K8s fails to assign floating ip to loadbalancer with octavia - kubernetes

I have:
an OpenStack cloud (Queens) with Octavia for LBaaS
a small (test) k8s cluster on top of it (3 nodes, 1 master), version 1.9.2
a deployment called hello which serves a simple web page saying 'hello world'; it works when accessed from within the cluster
I want to expose my deployment as a load balanced service with a floating IP.
I did kubectl expose deployment hello --type=LoadBalancer --name=my-service
It says (kubectl describe service my-service)
Error creating load balancer (will retry): failed to ensure load balancer for service default/my-service: error getting floating ip for port 9cc6442b-2b2f-4b6a-8f91-65dbc2ff13d0: Resource not found
If I manually do: openstack floating ip set --port 9cc6442b-2b2f-4b6a-8f91-65dbc2ff13d0 356c8ffa-7bc2-43a9-a8d3-29147ae01727
where:
| ID | Floating IP Address | Port | Floating Network |
| 356c8ffa-7bc2-43a9-a8d3-29147ae01727 | 172.27.81.241 | None | eb31cc74-96ba-4394-aef4-0e94bec46d85 |
and /etc/kubernetes/cloud_config has:
[LoadBalancer]
subnet-id=6a6cdc35-8dda-4982-850e-53c6ee5a5085
floating-network-id=eb31cc74-96ba-4394-aef4-0e94bec46d85
use-octavia=True
(so it is looking for floating IPs on the correct network, and that subnet is the k8s internal subnet)
then it all works.
So everything except "associate an IP" has worked. Why does this step fail? Where has k8s logged what it did and how it failed? I can only find docs for pod-level logging (and my pod is fine, and serving its test web page just great).
(I have lots of quota remaining for 'make more floating ips', and several unused ones hanging around)

I was able to find this No Ports Available when trying to associate a floating IP and this Failed to Associate Floating IP. Maybe those will point you in the right direction.
I would recommend that you check the OpenStack community page and look for more answers there, as I'm not an expert in OpenStack.
As for your question
Where has k8s logged what it did and how it failed?
You can use kubectl describe service <service_name>
Show details of a specific resource or group of resources
Print a detailed description of the selected resources, including related resources such as events or controllers. You may select a single object by name, all objects of that type, provide a name prefix, or label selector. For example:
$ kubectl describe TYPE NAME_PREFIX
For more debugging guidance, please check Debug Services.
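Beyond the service's event stream, the LoadBalancer logic for the OpenStack cloud provider typically runs in the kube-controller-manager, so its logs are another place to look. A rough sketch (pod names and namespaces depend on how the cluster was deployed, so treat these as assumptions):
# Events for the service, including the load balancer failures, show up here:
kubectl describe service my-service
# Find the controller-manager pod and grep its logs for the service or the floating IP:
kubectl -n kube-system get pods | grep controller-manager
kubectl -n kube-system logs <controller-manager-pod> | grep -i 'my-service\|floating'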

Related

Kubernetes, see what pod has some specific IP address

I've noticed some logs in my Zabbix telling me that some random IP from my private subnet is trying to log in as a guest user. I know the IP is 10.190.0.1, but there are currently no pods with that IP. Does anyone have any idea how to see which pod had it?
The first thing I thought of was looking at GCP Log Exporter, but we're not adding labels to our logs saying which pod they came from. I'm sure I should be able to see it from the terminal, so any suggestion would be nice.
Also, I know it won't be a reserved address, but I took a look either way:
gcloud compute addresses list | grep '10.190.0.1'
(no output)
and
kubectl get all -o wide -A | grep 10.190.0.1
(no output)
Hi, you are going about it the right way.
I mean:
kubectl get pods,svc -o wide
will effectively show you the pods and services and their IPs. If the output is empty, though, it is because there is no such IP among the services or pods in your cluster workloads. Two things to check:
maybe the IP has changed
maybe these logs come from an IP on the master node, i.e. something from the k8s control plane?
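To sweep the whole cluster for that IP in one pass, a sketch like this might help (the IP is the one from the question; the jsonpath just prints every pod IP with its namespace/name):
# Print podIP, namespace/name for every pod in every namespace, then grep:
kubectl get pods -A -o jsonpath='{range .items[*]}{.status.podIP}{"\t"}{.metadata.namespace}/{.metadata.name}{"\n"}{end}' | grep '10.190.0.1'
# Node and control-plane addresses are visible here:
kubectl get nodes -o wide | grep '10.190.0.1'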

gke and auto created domain for enabling http routing

I need a domain for my GKE cluster to access the ingress and the applications in the cluster, similar to the Azure AKS HTTP application routing add-on, which provides an automatically created domain (not a custom domain):
https://learn.microsoft.com/en-us/azure/aks/http-application-routing
Is there any similar solution on Google Cloud?
Our GKE create/delete process is part of our IaC tooling and we are automating cluster and app deployment for dev/test/staging. The automatically created domain and the binding of a managed DNS zone to the cluster resources would give us great flexibility. Otherwise we have to create a custom domain and managed DNS zone, which would be static and bring unnecessary complexity to the provisioning tooling.
GCP has not implemented a resource like that; however, this operation could be automated using one of the available Cloud DNS APIs [1], for example the ResourceRecordSets API [2], to configure A records in the ManagedZone you want to assign the host to, scripting this configuration after the Ingress controller creation.
For example, retrieve the IP address allocated to the ingress by issuing a command like kubectl describe ing <ingress-name> | grep "Address:" | awk '{print $2}', then use that IP to construct the API request body [3].
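As a rough sketch of what that scripting could look like with the gcloud CLI instead of a raw API call (the zone name my-zone and the record name app.xxx.test.dev. are placeholders, and <ingress-name> matches the command above):
# Grab the IP the ingress was allocated:
IP=$(kubectl get ingress <ingress-name> -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Add an A record pointing at it in the managed zone:
gcloud dns record-sets transaction start --zone="my-zone"
gcloud dns record-sets transaction add "$IP" --name="app.xxx.test.dev." --ttl=300 --type=A --zone="my-zone"
gcloud dns record-sets transaction execute --zone="my-zone"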
There is no generic domain option in GKE, so I had to purchase a domain and update its NS records to match the NS of the created managed DNS zone; the records are then kept in sync automatically by external-dns when I update an ingress in GKE.
I can say I solved this problem with these steps:
1- Create a managed zone for a domain name you own, and make sure the cluster has permission to access that DNS zone. That means granting access in the Google project where your DNS zone exists.
Note: when you create the cluster, be sure to give it read/write scopes for the managed DNS zone:
gcloud container clusters create "external-dns" \
--num-nodes 1 \
--scopes "https://www.googleapis.com/auth/ndev.clouddns.readwrite"
Create a DNS zone which will contain the managed DNS records.
$ gcloud dns managed-zones create "xxx.test-dev" \
--dns-name "xxx.test.dev." \
--description "Automatically managed zone by kubernetes.io/external-dns for the test.dev domain name"
2- Deploy the external-dns resources to GKE:
https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/gke.md#deploy-externaldns
And check the logs with
kubectl logs $(kubectl get pods --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep dns)
Or
kubectl logs $(kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep dns)
If you see something like the following, everything is going smoothly:
time="2021-01-20T11:37:46Z" level=info msg="Add records: xxx.test.dev. A [34.89.xx.xx] 300"
time="2021-01-20T11:37:46Z" level=info msg="Add records: xxx.test.dev. TXT [\"heritage=external-dns,external-dns/owner=my-identifier,external-dns/resource=ingress/default/ingress-test\"] 300"
time="2021-01-20T11:38:47Z" level=info msg="All records are already up to date"
Note the TXT record created alongside the A record. The TXT record signifies that the corresponding A record is managed by ExternalDNS. This makes ExternalDNS safe to run in environments where there are other records managed by other means.
Let’s check that we can resolve this DNS name. We’ll ask the nameservers assigned to your zone first.
$ dig +short @ns-cloud-e1.googledomains.com. xxx.test.dev.
104.155.xx.xx
And you can check whether the IP of the domain is correct or there is a problem:
host https://xxx.test.dev/
Host https://xxx.test.dev/ not found: 3(NXDOMAIN)
It may complain about a bad domain for a while (note that host expects a bare hostname, not a URL), but then you will get the correct response:
host xxx.test.dev
xxx.test.dev has address 35.197.xx.xx

Kubernetes: create service vs expose deployment

I am new to Kubernetes. I was going through some tutorials related to Kubernetes deployment. I came across two different commands which look like they do similar things.
The below command is from google code lab (URL: https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..index#7 )
$ kubectl create service loadbalancer hello-java --tcp=8080:8080
Another command is seen in a different place, as well as on the Kubernetes site (https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/)
$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
Now, as per my understanding, both commands create a LoadBalancer service from a deployment and expose it to the outside world.
I don't think there will be two separate commands for the same task. There should be some difference that I am not able to understand.
Would anyone please clarify this to me?
There are cases where the expose command is not sufficient and your only practical option is to use create service.
Overall there are 4 different types of Kubernetes services; for some it really doesn't matter whether you use expose or create, while for others it matters very much.
The types of Kubernetes services are:
ClusterIP
NodePort
LoadBalancer
ExternalName
So, for example, in the case of the NodePort type of service, let's say we wanted to set a node port with value 31888:
Example 1:
In the following command there is no argument for the node port value; the expose command assigns it automatically:
kubectl expose deployment demo --name=demo --type=NodePort --port=8080 --target-port=80
The only way to set the node port value is after creation, by using the edit command to update it: kubectl edit service demo
Example 2:
In this example the create service nodeport is dedicated to creating the NodePort type and has arguments to enable us to control the node port value:
kubectl create service nodeport demo --tcp=8080:80 --node-port=31888
In this Example 2 the node port value is set with the command line and there is no need to manually edit the value as in case of Example 1.
Important :
The create service [service-name] command does not have an option to set the service's selector, so the service won't automatically connect to existing pods.
To set the selector labels to target specific pods, you will need to follow up the create service [service-name] command with the set selector command:
kubectl set selector service [NAME] [key1]=[value1]
So for the Example 2 case above, if you want the service to work with a deployment whose pods are labeled myapp: hello, this is the follow-up command needed:
kubectl set selector service demo myapp=hello
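To confirm the selector took effect, you can check that the service now has endpoints backing it (using the demo service from the example above):
kubectl get endpoints demo
kubectl describe service demo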
The main differences can be seen from the docs.
1.- kubectl create command
Create a resource from a file or from stdin.
JSON and YAML formats are accepted.
2.- kubectl expose command
Expose a resource as a new Kubernetes service.
Looks up a deployment, service, replica set, replication controller or
pod by name and uses the selector for that resource as the selector
for a new service on the specified port. [...]
Even though both achieve the same thing in the examples you provided, the create command is a more general one: with it you can create any resource from the command line or from a yaml/json file. The expose command, however, will only create a service resource, and it's mainly used to expose other, already existing resources.
Source: K8s Docs
I hope this helps a little. The key here is to understand the difference between services and deployments. As per this link [1] you will notice that a deployment deals with the mortality of Pods automatically. However, if a Pod is terminated and another is spun up, how do the Pods continue to communicate when their IPs change? They use Services: "a Service is an abstraction which defines a logical set of Pods and a policy by which to access them". Additionally, it may be of interest to view this link [2], as it describes how the kubectl expose command creates a service which in turn creates an external IP and a Load Balancer. As a beginner it may also help to review the command language used with Kubernetes; this link [3] describes (as mentioned in another answer) that the kubectl create command is used to be more specific about the objects it creates, and with the create command you can create a larger variety of objects.
[1]:Service :https://kubernetes.io/docs/concepts/services-networking/service/
[2]:Deploying a containerized web application :https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#step_6_expose_your_application_to_the_internet
[3]:How to create objects: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/#how-to-create-objects
From my understanding, approach 1 (using create service) just creates the service object, and since a label selector is not specified it does not have any underlying target pods. But in approach 2 (using expose deployment), the service load-balances across all the pods created by the deployment, because the service is automatically given the required label selector.
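If it helps to see the difference concretely, a sketch (assuming the hello-world deployment from the question exists, and a kubectl new enough to support --dry-run=client; older versions use plain --dry-run) is to render both manifests without creating anything and compare their selector fields:
kubectl create service loadbalancer hello-java --tcp=8080:8080 --dry-run=client -o yaml
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --dry-run=client -o yaml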

(re)setting ip range for k8 loadBalancerIP assignments

When I created this k8s cluster I didn't specify anything for service-cluster-ip-range. Now when I create new LoadBalancer services, k8s assigns IPs that collide with existing IP addresses on the network.
Checking the allowed range via kubectl cluster-info dump | grep service-cluster-ip-range gives me:
"--service-cluster-ip-range=10.96.0.0/12"
which (oddly enough) isn't where the assigned values are coming from. New values seem to have started at 10.95.96.235 and incremented from there.
Attempts to preset a valid IP in a service descriptor via spec.loadBalancerIP give me errors from kubelet:
Failed to allocate IP for "newservice": "10.95.96.233" is not allowed in config
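For reference, a minimal sketch of the kind of descriptor referred to above (the name and IP come from the error; the selector and ports are placeholders):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: newservice
spec:
  type: LoadBalancer
  loadBalancerIP: 10.95.96.233
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF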
My questions are:
is it possible to change service-cluster-ip-range without rebuilding the entire cluster?
if not, do I have any other options for (pre)setting loadBalancerIP ?

How to update services in Kubernetes?

Background:
Let's say I have a replication controller with some pods. When these pods were first deployed they were configured to expose port 8080. A service (of type LoadBalancer) was also created to expose port 8080 publicly. Later we decide that we want to expose an additional port from the pods (port 8081). We change the pod definition and do a rolling-update with no downtime, great! But we want this port to be publicly accessible as well.
Question:
Is there a good way to update a service without downtime (for example by adding an additional port to expose)? If I just do:
kubectl replace -f my-service-with-an-additional-port.json
I get the following error message:
Replace failed: spec.clusterIP: invalid value '': field is immutable
If you name the ports in the pods, you can specify the target ports by name in the service rather than by number, and then the same service can direct traffic to pods that use different port numbers.
Or, as Yu-Ju suggested, you can do a read-modify-write of the live state of the service, such as via kubectl edit. The error message was due to not specifying the clusterIP that had already been set.
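A sketch of that read-modify-write flow from the command line (the service name is a placeholder):
# Export the live object, add the second port under spec.ports, keep spec.clusterIP as-is:
kubectl get service my-service -o yaml > my-service.yaml
kubectl replace -f my-service.yaml
# or edit the live object in place:
kubectl edit service my-service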
In such a case you can create a second service to expose the second port; it won't conflict with the other one and you'll have no downtime.
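For example (a sketch; the deployment and service names are placeholders):
kubectl expose deployment my-deployment --name=my-service-8081 \
  --type=LoadBalancer --port=8081 --target-port=8081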
If you have more than one pod running for the same service, you may use the Kubernetes Engine page within the Google Cloud Console as follows:
Under "Workloads", select your Replication Controller. Within that screen, click "EDIT" then update and save your replication controller details.
Under "Discover & Load Balancing", select your Service. Within that screen, click "EDIT" then update and save your service details. If you changed ports you should see those reflecting under the column "Endpoints" when you've finished editing the details.
Assuming you have at least two pods running on a machine (and a restart policy of Always), if you wanted to update the pods with the new configuration or container image:
Under "Workloads", select your Replication Controller. Within that screen, scroll down to "Managed pods". Select a pod, then in that screen click "KUBECTL" -> "Delete". Note, you can do the same with the command line: kubectl delete pod <podname>. This would delete and restart it with the newly downloaded configuration and container image. Delete each pod one at a time, making sure to wait until a pod has fully restarted and working (i.e. check logs, debug) etc, before deleting the next.