Not able to access app deployed on Kubernetes cluster

I am getting the following error while accessing the app deployed on Azure Kubernetes Service:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
I have followed all the steps as given here: https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-app
I know that this has something to do with authentication and RBAC, but I don't know what exactly is wrong or where I should make changes.

Just follow the steps in the link you posted and you should be able to finish successfully. The goal of each step is as follows:
Create the image and make sure it works without any errors.
Create an Azure Container Registry and push the image into the registry.
Create a Service Principal for the AKS cluster so that it can pull the image from the registry.
Change the YAML file to pull the image from the Azure registry, then create the pods on the AKS nodes.
You just need these four steps to run the application on AKS. Then get the IP address through the command kubectl get service azure-vote-front --watch as in step 4. If you cannot access the application, check your steps carefully again.
Also, you can check the status of all the pods through the command kubectl describe pods, or one pod with kubectl describe pod podName.
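For step 4, the only change the manifest needs is the image reference pointing at your registry. A minimal sketch, assuming a registry whose login server is myacr.azurecr.io (substitute your own, which you can look up with az acr show --name myacr --query loginServer):
containers:
- name: azure-vote-front
  image: myacr.azurecr.io/azure-vote-front:v1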
Update
I tested with the image you provided and it ran successfully for me. You can get the service information to see which port you should use to browse the application.

Related

Unable to create node object manually in EKS

I'm trying to set up KubeEdge with cloudcore in EKS (k8s version 1.21.12) and edgecore on an external server. As part of the KubeEdge setup, I had to manually create a node object on the cloud side, which will be labelled as the edge node.
But when I do the kubectl apply -f node.json, I'm getting the following response:
C:\Users\akhi1\manifest>kubectl apply -f node.json
node/edge-node-01 created
C:\Users\akhi1\manifest>kubectl get nodes
NAME                                               STATUS   ROLES    AGE     VERSION
ip-xx-xx-xxx-213.ap-southeast-1.compute.internal   Ready    <none>   3h48m   v1.21.12-eks-xxxx   << this node was already in my EKS cluster
As you can see, I'm not able to see the newly created node 'edge-node-01' in the list.
On checking the kube events, I got the following:
C:\Users\akhi1\manifests>kubectl get events
LAST SEEN   TYPE     REASON         OBJECT              MESSAGE
13m         Normal   DeletingNode   node/edge-node-01   Deleting node edge-node-01 because it does not exist in the cloud provider
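If you want to see this happen live, you can re-apply the manifest and watch the node object; this is just a way to observe the deletion, not a fix:
kubectl apply -f node.json && kubectl get node edge-node-01 --watch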
For manual node registration, I followed this doc:
https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration
My node.json would look like this:
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "edge-node-01",
    "labels": {
      "name": "my-first-k8s-edge-node"
    }
  }
}
I have also checked the NodeRestriction admission controller but couldn't find anything related to it.
Please let me know why EKS is blocking me from creating a node object that doesn't have an underlying EC2 instance attached.
Thanks in advance,
Akhil

Istio on GKE - Admission webhook not working to inject sidecar-proxy

I have been trying to set up Istio on my existing GKE cluster.
I have followed the steps mentioned on the Istio website for the installation prerequisites.
https://istio.io/latest/docs/setup/platform-setup/gke/
I have a private cluster, so I added the firewall rules mentioned in the prerequisites:
gke-aiq-kubernetes-0a227ee8-all      default   INGRESS   1000   tcp,udp,icmp,esp,ah,sctp       False
gke-aiq-kubernetes-0a227ee8-master   default   INGRESS   1000   tcp:10250,tcp:443,tcp:15017    False
gke-aiq-kubernetes-0a227ee8-vms      default   INGRESS   1000   tcp:1-65535,udp:1-65535,icmp   False
and then installed Istio with the demo profile:
istioctl install --set profile=demo
and then verified the installation:
istioctl verify-install
which says everything succeeded.
I labeled my namespace with istio-injection=enabled so that the sidecar proxy would be injected automatically.
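For reference, the label is applied with a command like this (the namespace name is a placeholder):
kubectl label namespace my-namespace istio-injection=enabled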
But when I try to deploy something in the namespace, I get the following error:
Error from server (InternalError): error when creating "pod-pending.yaml": Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject?timeout=30s: context deadline exceeded
What I understand from this is that there is a connectivity issue, but I am not sure how to debug it.
I tried the debugging page from istio:
https://github.com/istio/istio/wiki/Troubleshooting-Istio#diagnostics
and after running the command:
kubectl get --raw /api/v1/namespaces/istio-system/services/https:istiod:https-webhook/proxy/inject -v4
I confirmed that this is a connectivity issue:
I1113 23:20:11.241079   40356 helpers.go:199] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server is currently unable to handle the request",
  "reason": "ServiceUnavailable",
  "details": {
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "Error trying to reach service: 'dial tcp 10.48.3.25:15017: i/o timeout'"
      }
    ]
  },
  "code": 503
}]
F1113 23:20:11.241367   40356 helpers.go:114] Error from server (ServiceUnavailable): the server is currently unable to handle the request
Need help, I am new to GKE.
I figured out the issue.
I have 7 different GCP projects configured in my gcloud profile and I was in a different project while running the gcloud commands (all the GKE clusters have the same name).
I logged in to the GCP console in the browser, found the firewall rules under VPC, opened the ports manually there, and it worked. The equivalent steps from the command line:
gcloud compute firewall-rules list --filter="name~gke-<clustername>-[0-9a-z]*-master"
Then get the firewall rule name.
gcloud compute firewall-rules update <firewall rule name> --allow tcp:10250,tcp:443,tcp:15017
The idea here is to add tcp:15017, which is required by the admission webhook.
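After updating the rule, you can re-run the same raw call from the troubleshooting steps above to confirm the webhook is reachable:
kubectl get --raw /api/v1/namespaces/istio-system/services/https:istiod:https-webhook/proxy/inject -v4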

User "system:anonymous" cannot get path "/"

I just set up a Kubernetes cluster based on this link https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#multi-platform
I checked with kubectl get nodes and the master node is Ready, but when I access the link https://k8s-master-ip:6443/
it shows the error: User "system:anonymous" cannot get path "/".
What is the trick I am missing?
Hopefully you see something like this:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
This is good, as not everyone should be able to access the cluster. If you want to see the services, run kubectl proxy; this makes the API server reachable from your local machine.
C:\dev1> kubectl proxy
Starting to serve on 127.0.0.1:8001
And when you hit http://127.0.0.1:8001 you should see the list of available API paths.
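Individual services can then be reached through the proxy using the standard proxy path pattern (the namespace, service name, and port below are placeholders):
http://127.0.0.1:8001/api/v1/namespaces/my-namespace/services/my-service:80/proxy/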
The latest Kubernetes deployment tools enable RBAC on the cluster. Jenkins is relegated to the catch-all user system:anonymous when it accesses https://192.168.70.94:6443/api/v1/.... This user has almost no privileges on kube-apiserver.
The bottom line is, Jenkins needs to authenticate with kube-apiserver, either with a bearer token or a client cert that's signed by the k8s cluster's CA key.
Method 1. This is preferred if Jenkins is hosted in the k8s cluster:
Create a ServiceAccount in k8s for the plugin
Create an RBAC profile (i.e. Role/RoleBinding or ClusterRole/ClusterRoleBinding) that's tied to the ServiceAccount
Configure the plugin to use the ServiceAccount's token when accessing the URL https://192.168.70.94:6443/api/v1/... (a sketch of these steps follows)
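A minimal sketch of Method 1, assuming a service account named jenkins in a jenkins namespace and reusing the built-in view cluster role (adjust the names and permissions to what the plugin actually needs):
kubectl create namespace jenkins
kubectl create serviceaccount jenkins -n jenkins
kubectl create clusterrolebinding jenkins-view --clusterrole=view --serviceaccount=jenkins:jenkins
# Kubernetes 1.24+ can mint a token directly; on older clusters,
# read it from the service account's token secret instead.
kubectl create token jenkins -n jenkins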
Method 2. If Jenkins is hosted outside the k8s cluster, the steps above can still be used. The alternative is to:
Create a client cert that's signed by the k8s cluster's CA. You have to find where the CA key is kept and use it to generate a client cert (a sketch follows).
Create an RBAC profile (i.e. Role/RoleBinding or ClusterRole/ClusterRoleBinding) that's tied to the client cert's user
Configure the plugin to use the client cert when accessing the URL https://192.168.70.94:6443/api/v1/...
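A sketch of the client cert generation with openssl, assuming a kubeadm cluster where the CA files live under /etc/kubernetes/pki (the certificate's CN becomes the Kubernetes username):
openssl genrsa -out jenkins.key 2048
openssl req -new -key jenkins.key -out jenkins.csr -subj "/CN=jenkins"
openssl x509 -req -in jenkins.csr -CA /etc/kubernetes/pki/ca.crt \
  -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out jenkins.crt -days 365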
Both methods work in any situation. I believe Method 1 will be simpler for you because you don't have to mess around with the CA key.
By default, no cluster role is bound to system:anonymous, which blocks cluster access.
Executing the following command binds the cluster-admin cluster role to system:anonymous, which will give you the required access. Note that this grants full admin rights to unauthenticated users, so it should only be used for testing.
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

Using kubectl with Kubernetes authorization mode ABAC

I set up a 4-node cluster (1 master, 3 workers) running Kubernetes on Ubuntu. I turned on --authorization-mode=ABAC and set up a policy file with an entry like the following:
{"user":"bob", "readonly": true, "namespace": "projectgino"}
I want user bob to only be able to look at resources in projectgino. I'm having problems using the kubectl command line as user bob. When I run the following command:
kubectl get pods --token=xxx --namespace=projectgino --server=https://xxx.xxx.xxx.xx:6443
I get the following error:
error: couldn't read version from server: the server does not allow access to the requested resource
I traced the kubectl command-line code and the problem seems to be caused by kubectl calling the function NegotiateVersion in pkg/client/helper.go. This makes a call to /api on the server to get the version of Kubernetes. This call fails because the REST path doesn't contain the namespace projectgino. I added trace code to pkg/auth/authorizer/abac/abac.go and it fails on the namespace check.
I haven't moved up to the latest 1.1.1 version of Kubernetes yet, but looking at the code I didn't see anything that has changed in this area.
Does anybody know how to configure Kubernetes to get around the problem?
This is missing functionality in the ABAC authorizer. The fix is in progress: #16148.
As for a workaround, from the authorization doc:
For miscellaneous endpoints, like /version, the resource is the empty string.
So you may be able to solve by defining a policy:
{"user":"bob", "readonly": true, "resource": ""}
(note the empty string for resource) to grant access to unversioned endpoints. If that doesn't work, I don't think there's a clean workaround that will let you use kubectl with --authorization-mode=ABAC.
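Putting the two entries together, the policy file would contain one JSON object per line, and the apiserver is pointed at it with a flag; the file path here is only an example:
{"user":"bob", "readonly": true, "namespace": "projectgino"}
{"user":"bob", "readonly": true, "resource": ""}
with the apiserver started with --authorization-mode=ABAC --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl.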

kubernetes apiserver "the server could not find the requested resource"

So I am hesitant to ask as a newbie but I have hit a wall. I am following:
http://www.projectatomic.io/docs/gettingstarted/
Using fedora atomic host 22 latest.
I had trouble getting the system up with some of the port settings and with the API string. I was able to get all my services running on the master and my three minions. kubelet and kube-proxy are failing to connect to the apiserver. I am able to reach the server with curl, but the API paths return:
http://cas-vm-atomic-m:8080/api/v1beta3
{
  "kind": "Status",
  "apiVersion": "v1beta3",
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {},
  "code": 404
}
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
I have turned up the logging. I have tried a variety of settings for KUBE_ADMISSION_CONTROL. I think my problem is on the master, with the apiserver being up but not serving correctly. kubectl does return my three nodes, services, and endpoints, but the nodes stay in NotReady status. The nodes are attempting to move out of NotReady but can't reach the apiserver to do so.
I am kind of bummed that the newbie getting-started howto has been so difficult, though I guess it's educational. I have the logging set to 3, but now I mostly see the kube-proxy requests failing with 404 errors. Any ideas?
If this is the wrong place for this please let me know.
That guide probably needs to be updated, given that the Kubernetes v1beta3 API was deprecated in July. I suspect you're running a recent build of the apiserver (which supports only the v1 API), but older builds of kube-proxy/kubelet.
I'd recommend following one of the getting started guides from kubernetes.io/v1.0/docs/getting-started-guides, as those are pretty stable and have dedicated maintainers. E.g. the Flannel on Fedora guide sounds pretty close to what you're setting up and having trouble with.
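One quick way to confirm the mismatch is to ask the apiserver which API versions it actually serves; on a recent build you would expect something like this (a sketch, assuming a v1-only server):
curl http://cas-vm-atomic-m:8080/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ]
}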