Failing to delete rc via the API? - kubernetes

Kubernetes version: 1.0.2
REST API:
DELETE /api/v1/namespaces/default/replicationcontrollers/test
Body:
{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "gracePeriodSeconds": 0
}
Failure response:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "converting to : type names don't match (ReplicationController, DeleteOptions), and no conversion 'func (v1.ReplicationController, api.DeleteOptions) error' registered.",
  "code": 500
}
If the body is empty, the delete succeeds, but the pods still exist:
kubectl get rc shows the rc is deleted
kubectl get pod shows the pods still exist
Why?
How can I delete an rc together with all of its pods via the API DELETE method?

API requests are designed to be fulfilled immediately; tasks like reaping or recursively deleting are typically handled by a client combining multiple API requests. In this case, you can do what kubectl does when running kubectl delete rc/test (which you can see by adding --v=8):
Set the spec.replicas of rc/test to 0
Watch until status.replicas of rc/test is also 0
Delete rc/test
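A rough sketch of those three steps with curl, assuming (as elsewhere in this thread) an unauthenticated API server on localhost:8080 and a server that accepts JSON merge patches; note that the final DELETE body is a DeleteOptions object, which is what the original request was missing:
# 1. Scale rc/test down to 0 replicas
curl -X PATCH -H "Content-Type: application/merge-patch+json" \
  -d '{"spec": {"replicas": 0}}' \
  http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/test

# 2. Poll (or watch) until status.replicas is 0
curl http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/test
# ...repeat until the response shows "status": {"replicas": 0}

# 3. Delete the now-empty rc; the body is a DeleteOptions object,
#    not a ReplicationController
curl -X DELETE -H "Content-Type: application/json" \
  -d '{"kind": "DeleteOptions", "apiVersion": "v1", "gracePeriodSeconds": 0}' \
  http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/test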

Related

KEDA Error: Got empty response for: external.metrics.k8s.io/v1beta1

I am getting the error below after installing KEDA in my k8s cluster and creating some scaled objects.
Whatever command I run, e.g. kubectl get pods, the response includes the error message below.
How do I get rid of this error message?
E0125 11:45:32.766448 316 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
This error comes from client-go when there are no resources available in external.metrics.k8s.io/v1beta1: here in client-go, it fetches all ServerGroups.
When KEDA is not installed, external.metrics.k8s.io/v1beta1 is not part of the ServerGroups, so it is never queried and there is no issue.
But when KEDA is installed, it creates an APIService:
$ kubectl get apiservice | grep keda-metrics
v1beta1.external.metrics.k8s.io   keda/keda-metrics-apiserver   True   20m
But it doesn't create any external.metrics.k8s.io resources
$ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq .
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "external.metrics.k8s.io/v1beta1",
"resources": []
}
Since there are no resources, client-go throws an error.
The workaround is registering a dummy resource in the empty resource group.
Refer to this GitHub link for more detailed information.
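One way to register such a resource is simply to create a ScaledObject, since KEDA exposes one external metric per ScaledObject. A minimal, hypothetical sketch using a cron trigger (the Deployment name, namespace, and schedule are placeholders, not taken from the question):
{
  "apiVersion": "keda.sh/v1alpha1",
  "kind": "ScaledObject",
  "metadata": {
    "name": "dummy-scaledobject",
    "namespace": "default"
  },
  "spec": {
    "scaleTargetRef": {
      "name": "some-existing-deployment"
    },
    "triggers": [
      {
        "type": "cron",
        "metadata": {
          "timezone": "Etc/UTC",
          "start": "0 8 * * *",
          "end": "0 17 * * *",
          "desiredReplicas": "1"
        }
      }
    ]
  }
}
After applying it, the same kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 call should show a non-empty resources list, and the client-go warning should disappear.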

How to display node information with a JSON request?

I know how to use the API to perform simple requests, such as displaying node information while selecting nodes by label value.
For example: curl http://localhost:8080/api/v1/nodes?labelSelector=kubernetes.io/role%3Dworker3
This displays information about the node whose role is worker3.
Is there a way to perform the same request using a JSON query?
I looked on the web for such an example but did not find one.
You can query by label with kubectl.
The roles of a node are just labels.
To return YAML format:
kubectl get nodes -l node-role.kubernetes.io/worker -o yaml
To return JSON format:
kubectl get nodes -l node-role.kubernetes.io/worker -o json
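For completeness, the same label query can be sent to the REST API directly, as in the question's curl example (again assuming an unauthenticated API server on localhost:8080):
# equivalent of: kubectl get nodes -l node-role.kubernetes.io/worker
curl "http://localhost:8080/api/v1/nodes?labelSelector=node-role.kubernetes.io%2Fworker"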
Update
Querying the API with JSON, you can do it like so:
curl http://localhost:8080/api/v1/nodes?{"node.kubernetes.io/worker01":"worker01"}
In my case this returns:
{
  "kind": "NodeList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "317238"
  },
  "items": [
    {
      "metadata": {
        "name": "worker01",
        "uid": "a2bec224-361f-49e9-8bba-b3b172816d6e",
        "resourceVersion": "316653",
        "creationTimestamp": "2022-12-24T11:04:43Z",
        "labels": {
          "beta.kubernetes.io/arch": "amd64",
          "beta.kubernetes.io/os": "linux",
          "kubernetes.io/arch": "amd64",
          "kubernetes.io/hostname": "worker01",
          "kubernetes.io/os": "linux",
          "microk8s.io/cluster": "true",
          "node.kubernetes.io/microk8s-worker": "microk8s-worker"
        },
............
As you can see it works, but in general you must check two things:
the API version (it can differ from v1, depending on the Kubernetes version)
the labels and property names
The example above comes from MicroK8s; here I don't even have roles defined:
kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
master     Ready    <none>   17d   v1.25.4
worker01   Ready    <none>   17d   v1.25.4
So I looked for a label that could extract the required data.
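Any label that is actually present on the nodes can serve the same purpose; for example, using the kubernetes.io/hostname label visible in the output above:
# select the node by its hostname label instead of a role label
kubectl get nodes -l kubernetes.io/hostname=worker01 -o json
# or directly against the REST API
curl "http://localhost:8080/api/v1/nodes?labelSelector=kubernetes.io/hostname%3Dworker01"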

Is it possible, and how, to limit a Kubernetes Job to create a maximum number of pods if it always fails?

As a QA in our company I am a daily user of Kubernetes, and we use Kubernetes Jobs to create performance-test pods. One advantage of a Job, according to the docs, is
to create one Job object in order to reliably run one Pod to completion
But in our tests this feature creates pods indefinitely if the previous ones fail, which occupies resources in our team's shared cluster, and deleting such pods takes a lot of time.
Currently the Job manifest is like this:
{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": {
    "name": "upgradeperf",
    "namespace": "ntg6-grpc26-tts"
  },
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "upgradeperfjob",
            "image": "mycompany.com:5000/ncs-cd-qa/upgradeperf:0.1.1",
            "command": [
              "python",
              "/jmeterwork/jmeter.py",
              "-gu",
              "git#gitlab-pri-eastus2.dev.mycompany.net:mobility-ncs-tools/tts-cdqa-tool.git",
              "-gb",
              "upgradeperf",
              "-t",
              "JMeter/testcases/ttssvc/JMeterTestPlan_ttssvc_cmpsize.jmx",
              "-JtestDataFile",
              "JMeter/testcases/ttssvc/testData/avaml_opus.csv",
              "-JthreadNum",
              "3",
              "-JthreadLoopCount",
              "1500",
              "-JresultsFile",
              "results_upgradeperf_cavaml_opus_t3_l1500.csv",
              "-Jhost",
              "mtl-blade32-03.mycompany.com",
              "-Jport",
              "28416"
            ]
          }
        ],
        "restartPolicy": "Never",
        "imagePullSecrets": [
          {
            "name": "docker-registry-secret"
          }
        ]
      }
    }
  }
}
In some cases, such as misconfigured IPs/ports, 'reliably run one Pod to completion' is impossible, and recreating pods is a waste of time and resources.
So is it possible, and how, to limit a Kubernetes Job to create a maximum number (say 3) of pods if it always fails?
Depending on your Kubernetes version, you can resolve this problem with these methods:
Set the option restartPolicy: OnFailure; the failed container is then restarted in the same Pod, so you will not get lots of failed Pods, but a single Pod with lots of restarts.
From Kubernetes 1.8 on, there is a parameter backoffLimit that controls how a failed Job is retried. It defines how many times the Job is retried before it is treated as failed (default 6). For this parameter to work you must set restartPolicy: Never (see the sketch below).
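For example, a minimal sketch of how the question's manifest could be capped at roughly 3 attempts (only the relevant spec fields are shown; the container section is abbreviated):
{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": {
    "name": "upgradeperf",
    "namespace": "ntg6-grpc26-tts"
  },
  "spec": {
    "backoffLimit": 3,
    "template": {
      "spec": {
        "containers": [
          {
            "name": "upgradeperfjob",
            "image": "mycompany.com:5000/ncs-cd-qa/upgradeperf:0.1.1",
            "command": ["python", "/jmeterwork/jmeter.py", "..."]
          }
        ],
        "restartPolicy": "Never",
        "imagePullSecrets": [{"name": "docker-registry-secret"}]
      }
    }
  }
}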
You probably didn't set restartPolicy: Never in your pod spec; add that and I would expect it to match your expected behaviour better.

Kubernetes REST API to check if a namespace is created and active

I call the REST API below with a POST body to create a namespace in Kubernetes:
POST http://kuberneteshost/api/v1/namespaces/
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "testnamespace"
  }
}
In response I get HTTP status 201 Created and the JSON response below:
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "testnamespace",
    "selfLink": "/api/v1/namespaces/testnamespace",
    "uid": "701ff75e-5781-11e6-a48a-74dbd1a0fb73",
    "resourceVersion": "57825525",
    "creationTimestamp": "2016-08-01T00:46:52Z"
  },
  "spec": {
    "finalizers": [
      "kubernetes"
    ]
  },
  "status": {
    "phase": "Active"
  }
}
Does the phase Active in the response status mean the namespace was successfully created and is active?
Is there any other REST API to check whether the namespace exists and is active?
The reason I would like to know whether the namespace is created is that I get an error message if I try to create a pod before the namespace is actually created:
Error from server: error when creating "./pod.json": pods "my-pod" is
forbidden: service account username/default was not found, retry after
the service account is created
The following works fine if I add a sleep of 5 seconds between the create-namespace and create-pod commands:
kubectl delete namespace testnamespace; kubectl create namespace testnamespace; sleep 5; kubectl create -f ./pod.json --namespace=testnamespace
If I don't add the 5-second sleep, I see the error message mentioned above.
Apparently your Pod has a hard dependency on the default ServiceAccount, so you probably want to check it's been created instead of looking only at the namespace state. The existence of the namespace doesn't guarantee the immediate availability of your default ServiceAccount.
Some API endpoints you might want to query (a small polling sketch follows the list):
GET /api/v1/namespaces/foo/serviceaccounts/default
returns 200 with the object description if the ServiceAccount default exists in the namespace foo, 404 otherwise
GET /api/v1/serviceaccounts?fieldSelector=metadata.namespace=foo,metadata.name=default
returns 200 and a list of all ServiceAccount items in the namespace foo with name default (empty list if no object matches)
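Putting this together, a rough sketch that waits for the default ServiceAccount before creating the Pod (the hostname, namespace, and lack of authentication are carried over from the question and are placeholders):
#!/bin/sh
# Poll until the default ServiceAccount exists in the namespace, then create the pod.
until curl -s -o /dev/null -w "%{http_code}" \
  http://kuberneteshost/api/v1/namespaces/testnamespace/serviceaccounts/default | grep -q 200; do
  sleep 1
done
kubectl create -f ./pod.json --namespace=testnamespace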
Yes, the namespace is persisted prior to being returned from the create API call. The returned object shows the status of the namespace as Active.
You can also do GET http://kuberneteshost/api/v1/namespaces/testnamespace to retrieve the same information.

Pod is not showing in ready state

I am trying to configure the PHP Phabricator example from Kubernetes, but after creating the replication controller the pod never shows in a ready state. It shows the state below:
NAME                           READY   STATUS             RESTARTS   AGE
phabricator-controller-z0nk3   0/1     CrashLoopBackOff   5          2m
Below is the controller manifest:
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "phabricator-controller",
    "labels": {
      "name": "phabricator"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "name": "phabricator"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "phabricator"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "phabricator",
            "image": "fgrzadkowski/example-php-phabricator",
            "ports": [
              {
                "name": "http-server",
                "containerPort": 80
              }
            ]
          }
        ]
      }
    }
  }
}
Can someone please suggest how to fix this?
This Pod is crash-looping. You can tell because the number of restarts is greater than zero.
kubectl describe pods <pod-name>
Should give further details to help debug. As will
kubectl logs <pod-name>
Tracking issues with kubectl describe pods <pod-name> and kubectl logs <pod-name> is indeed the default way to debug; unfortunately, in my case it WASN'T helpful (at first). All logs looked fine, or at least gave no error or clue that something was going wrong.
The readiness and liveness probes, however, showed that the app was not passing them.
So where was the devil hiding? In my case, increasing the values of "initialDelaySeconds" and/or "timeoutSeconds" for the readiness and liveness probes did the trick (a sketch of such probe settings follows at the end).
My first assumption was that the app simply did not have enough time to reach the Ready status. In fact the app was still not ready and failing, BUT extending those values lengthened each deployment attempt and so let me collect more logs. And what did I get? "Database connection failed attempt due to the timeout". So there was no connection to the database, and the app was effectively dead. The tricky part is that the timeouts do not appear quickly; you need to wait a bit longer, and the default values for "initialDelaySeconds" and/or "timeoutSeconds" did not give me enough time to see the database connectivity timeout.
Once a firewall rule was set to allow the app to talk to the database, the issue was gone!
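For reference, a rough sketch of what such relaxed probe settings could look like on the container from the manifest above (the probe path and the timing values are illustrative, not taken from the original post):
{
  "name": "phabricator",
  "image": "fgrzadkowski/example-php-phabricator",
  "ports": [
    {"name": "http-server", "containerPort": 80}
  ],
  "readinessProbe": {
    "httpGet": {"path": "/", "port": 80},
    "initialDelaySeconds": 60,
    "timeoutSeconds": 10
  },
  "livenessProbe": {
    "httpGet": {"path": "/", "port": 80},
    "initialDelaySeconds": 120,
    "timeoutSeconds": 10
  }
}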