kubernetes apiserver "the server could not find the requested resource" - atomic

So I am hesitant to ask as a newbie but I have hit a wall. I am following:
http://www.projectatomic.io/docs/gettingstarted/
Using fedora atomic host 22 latest.
I had trouble getting the system up with some of the port settings and with the API string. I was able to get all my services running on the master and my three minions, but kubelet and kube-proxy are failing to connect to the apiserver. I am able to reach the server with curl, but the API paths return:
http://cas-vm-atomic-m:8080/api/v1beta3
{
  "kind": "Status",
  "apiVersion": "v1beta3",
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {},
  "code": 404
}
My current admission control setting is:
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
I have turned up the logging and tried a variety of settings for KUBE_ADMISSION_CONTROL. I think my problem is on the master, with the apiserver being up but not serving requests correctly. kubectl does return my three nodes, services, and endpoints, but the nodes stay in NotReady status. The nodes are attempting to move out of NotReady but can't reach the apiserver to do so.
I am kind of bummed that the newbie getting-started howto has been so difficult, though I guess it's been educational. I have the logging set to 3, but now I mostly see the kube-proxy requests failing with 404 errors. Any ideas?
If this is the wrong place for this please let me know.

That guide probably needs to be updated, given that the Kubernetes v1beta3 API was deprecated in July. I suspect you're running a recent build of the apiserver (which supports only the v1 API) but older builds of kube-proxy/kubelet.
I'd recommend following one of the getting started guides from kubernetes.io/v1.0/docs/getting-started-guides, as those are pretty stable and have dedicated maintainers. E.g. the Flannel on Fedora guide sounds pretty close to what you're setting up and having trouble with.
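You can confirm the version mismatch by asking the apiserver which API versions it serves; a quick check with curl (hostname taken from your post):
curl http://cas-vm-atomic-m:8080/api
A v1-only apiserver lists only "v1" in the returned versions array, with no v1beta3 entry.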

Related

Can't do a kubectl get on the TokenReview kind

I'm searching for out-of-date APIs in my k8s cluster, but when I try to do kubectl get TokenReview --all-namespaces, it comes back with:
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
I was expecting a list of "TokenReview" objects in YAML, similar to what kubectl get returns for other kinds.
I'm running k8s 1.21 for both server and client.
Anybody got any ideas? I'm not seeing anything in the k8s docs.
I raised this on GitHub, and this is what they said:
"TokenReview create requests are not persisted, they are answered with ephemeral responses. This means the token review API does not support read requests (get, list, watch, etc.) or update/delete/patch requests, only create."
"All of the _Review resources are like this as well; subjectaccessreview, selfsubjectaccessreview, tokenreview, etc., are all non-persisted resources."
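Given that, the only supported verb is create. A minimal sketch of creating one (the token value is a placeholder):
kubectl create -o yaml -f - <<EOF
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: <some-bearer-token>
EOF
The response comes back with a status field saying whether the token authenticated; nothing is persisted, which is why get/list/watch fail.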

kubectl commands to GKE Autopilot sometimes return a Forbidden error

env
GKE Autopilot v1.22.12-gke.2300
using the kubectl command from an ubuntu2004 VM
using gke-gcloud-auth-plugin
what happens
kubectl commands sometimes return a (Forbidden) error, e.g.:
kubectl get pod
Error from server (Forbidden): pods is forbidden: User "my-mail#domain.com" cannot list resource "pods" in API group "" in the namespace "default": GKEAutopilot authz: the request was sent before policy enforcement is enabled
It doesn't happen every time (roughly 40% of the time), so it can't simply be an IAM problem.
Previously, on GKE Autopilot v1.21.xxxx I believe, this error didn't happen, or at least not nearly as frequently.
I couldn't find any helpful info even when searching for "GKEAutopilot authz" or "the request was sent before policy enforcement is enabled".
If anyone who has faced the same issue has any ideas, I'd appreciate it.
Thank you in advance.
I asked Google Cloud support.
They said it was a bug on the GKE master side, and they have fixed it.
The problem doesn't happen anymore.

Not able to access app deployed on kubernetes cluster

I am getting the following error while accessing an app deployed on Azure Kubernetes Service:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
I have followed all the steps given here: https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-app
I know this is something to do with authentication and RBAC, but I don't know what exactly is wrong or where I should make changes.
Just follow the steps in the link you posted and you will get it working. The purpose of each step is as follows (a sketch of the registry flow follows after these steps):
Create the image and make sure it works without any errors.
Create an Azure Container Registry and push the image into the registry.
Create a Service Principal so that the AKS cluster can pull the image from the registry.
Change the YAML file so it pulls the image from the Azure registry, then create the pods on the AKS nodes.
You just need these four steps to run the application on AKS. Then get the IP address through the command kubectl get service azure-vote-front --watch as in step 4. If you cannot access the application, check your steps carefully again.
You can also check the status of all the pods with the command kubectl describe pods, or a single pod with kubectl describe pod <podName>.
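For reference, a minimal sketch of the registry half of that flow (the registry and image names here are placeholders, not from the question):
az acr create --resource-group myResourceGroup --name myregistry --sku Basic
az acr login --name myregistry
docker tag azure-vote-front myregistry.azurecr.io/azure-vote-front:v1
docker push myregistry.azurecr.io/azure-vote-front:v1
After the push, point the image field in the deployment YAML at myregistry.azurecr.io/azure-vote-front:v1.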
Update
I tested with the image you provided; the result was shown in a screenshot here.
You can get the service information to learn which port you should use in the browser.

Kubernetes Deployment update crashes ReplicaSet and creates too many Pods

Using Kubernetes, I deploy an app to Google Cloud Container Engine on a cluster with 3 small instances.
On a first-time deploy, all goes well using:
kubectl create -f deployment.yaml
And:
kubectl create -f service.yaml
Then I change the image in my deployment.yaml and update it like so:
kubectl apply -f deployment.yaml
After the update, a couple of things happen:
Kubernetes updates its Pods correctly, ending up with 3 updated instances.
Shortly after this, another ReplicaSet is created (?)
Also, double the number of Pods (2 * 3 = 6) are suddenly present, where half of them have a status of Running and the other half Unknown.
So I inspected my Pods and came across this error:
FailedSync Error syncing pod, skipping: network is not ready: [Kubenet does not have netConfig. This is most likely due to lack of PodCIDR]
Also I can't use the dashboard anymore using kubectl proxy. The page shows:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
So I decided to delete all pods forcefully:
kubectl delete pod <pod-name> --grace-period=0 --force
Then, three Pods are triggered for creation, since three replicas are defined in my deployment.yaml. But upon inspecting my Pods using kubectl describe pods/<pod-name>, I see:
no nodes available to schedule pods
I have no idea where this all went wrong. In essence, all I did was update the image of a deployment.
Any ideas?
I've run into similar issues on Kubernetes. According to your reply to my comment on your question (quoted here):
I noticed that this happens only when I deploy to a micro instance on Google Cloud, which simply has insufficient resources to handle the deployment. Scaling up the initial resources (CPU, Memory) resolved my issue
It seems to me like what's happening here is that the Linux kernel's OOM killer ends up killing the kubelet, which in turn makes the node useless to the cluster (its status becomes "Unknown").
A real solution to this problem (to prevent an entire node from dropping out of service) is to add resource limits. Make sure you're not just adding requests; add limits too, because you want your own services, rather than the K8s system services, to be killed so that they can be rescheduled appropriately (if possible). A sketch follows below.
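For example, a minimal sketch of a container spec with requests and limits in a deployment.yaml (the names and values are illustrative, not tuned for your app):
    spec:
      containers:
      - name: my-app
        image: gcr.io/my-project/my-image:v2
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
With limits in place, a runaway container is OOM-killed inside its own cgroup instead of taking down the kubelet with it.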
Also, inside the cluster settings (specifically in the Node Pool, selected from https://console.cloud.google.com/kubernetes/list), there is a box you can check for "Automatic Node Repair" that would at least partially remediate this problem rather than giving you an undefined amount of downtime.
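If you prefer the CLI, the same setting can be enabled per node pool; a hedged sketch (the pool, cluster, and zone names are placeholders):
gcloud container node-pools update my-pool --cluster my-cluster --zone us-central1-a --enable-autorepair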
If your intention is just to update the image, try using kubectl set image instead. That at least works for me.
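For instance (the deployment and container names are placeholders for whatever is in your deployment.yaml):
kubectl set image deployment/my-deployment my-container=gcr.io/my-project/my-image:v2
This rolls out just the image change without re-applying the whole manifest.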
Googling kubectl apply turns up a lot of known issues; see this issue for example, or this one.
You did not post which version of Kubernetes you deployed, but if you can, try to upgrade your cluster to the latest version to see if the issue persists.

Using kubectl with Kubernetes authorization mode ABAC

I set up a 4-node cluster (1 master, 3 workers) running Kubernetes on Ubuntu. I turned on --authorization-mode=ABAC and set up a policy file with an entry like the following:
{"user":"bob", "readonly": true, "namespace": "projectgino"}
I want user bob to only be able to look at resources in projectgino. I'm having problems using the kubectl command line as user bob. When I run the following command:
kubectl get pods --token=xxx --namespace=projectgino --server=https://xxx.xxx.xxx.xx:6443
I get the following error
error: couldn't read version from server: the server does not allow access to the requested resource
I traced the kubectl command-line code, and the problem seems to be caused by kubectl calling the function NegotiateVersion in pkg/client/helper.go. This makes a call to /api on the server to get the Kubernetes version. The call fails because the REST path doesn't contain the namespace projectgino. I added trace code to pkg/auth/authorizer/abac/abac.go and confirmed it fails on the namespace check.
I haven't moved up to the latest 1.1.1 version of Kubernetes yet, but looking at the code I didn't see anything that has changed in this area.
Does anybody know how to configure Kubernetes to get around the problem?
This is missing functionality in the ABAC authorizer. The fix is in progress: #16148.
As for a workaround, from the authorization doc:
For miscellaneous endpoints, like /version, the resource is the empty string.
So you may be able to solve by defining a policy:
{"user":"bob", "readonly": true, "resource": ""}
(note the empty string for resource) to grant access to unversioned endpoints. If that doesn't work, I don't think there's a clean workaround that will let you use kubectl with --authorization-mode=ABAC.
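Putting the two together, the policy file (one JSON object per line) would contain both entries from this thread:
{"user":"bob", "readonly": true, "namespace": "projectgino"}
{"user":"bob", "readonly": true, "resource": ""}
The first scopes bob's reads to projectgino; the second grants read access to unversioned endpoints like /api and /version, so kubectl's version negotiation can succeed.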