How can I see which Kubernetes user is creating the deployment and what type of authentication is used?

I am trying to see which kubernetes user is creating the deployment and what type of authentication is used (basic auth, token, etc).
I tried to do it using this:
kubectl describe deployment/my-workermole
but I am not finding that type of information in there.
The cluster is not managed by me, and I am not able to find this information in the deployment Jenkinsfile. Where and how can I find it for my Kubernetes deployment after the deployment has already happened?
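If the cluster has API server audit logging enabled, every request is recorded with the authenticated user, so that is the most direct way to answer this after the fact. A minimal sketch, assuming the audit log is written as JSON to a file you can read (the log path and the jq filter are assumptions, not something kubectl exposes):

# who created or modified the deployment, and as which user/groups
jq 'select(.objectRef.resource=="deployments" and .objectRef.name=="my-workermole")
    | {user: .user.username, groups: .user.groups, verb: .verb}' /var/log/kubernetes/audit.log

The authentication method itself is not a field in the event; it usually has to be inferred from the username and groups (for example, a system:serviceaccount: prefix means a service account token was used). On newer clusters, kubectl get deployment my-workermole -o yaml --show-managed-fields also shows which client wrote each field, though that is the client tool name, not the user.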

Related

K8s RBAC needed when no API calls?

My pod is running with the default service account. It uses Secrets through mounted files and ConfigMaps, but these are defined in YAML, and the pod does not contain kubectl or any similar component.
Is there any point in using RBAC if I don't call the API? The best practices state: "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For your basic application, you as the user need permission to create Deployments, create Secrets, and so on, but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
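A minimal sketch of that RBAC setup, with illustrative names (a real Helm chart would template these):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: orchestrator
  namespace: default
---
# allow creating and watching Jobs in this namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
  namespace: default
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-job-creator
  namespace: default
subjects:
- kind: ServiceAccount
  name: orchestrator
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-creator

The orchestrator's Deployment would then set serviceAccountName: orchestrator in its pod spec so its API calls run as that identity.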
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are right there to read; they are base64-encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, you can also create a Pod that mounts the Secret and make the main container command dump the Secret value out somewhere readable, so even then Secrets aren't that secret. It's still a good practice, but not strictly required, particularly in an evaluation or test cluster.
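To make the first point concrete, reading a Secret value is a one-liner once you have get access (the secret and key names here are made up):

kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 -d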

signingError after deploying node js code into google kubernetes engine

I am using getSignedUrl to get a public authenticated URL for a video. It works fine on my local machine, but after deploying to GKE it does not. I have checked a related question on SigningError with Firebase getSignedUrl(), but I don't see a service account for GKE to configure those roles. I had already assigned full storage and service-enabled permissions to the cluster while creating the Kubernetes cluster.
Do I have to add more permissions to get rid of this error, or should I do something else?
This issue got fixed by following https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#console.
Google Cloud service accounts are not directly accessible from GKE; the workload has to be given the key explicitly. I followed the steps below to access a Google Cloud service account from GKE.
Create a service account with the required roles: Storage Object Creator and Service Account Token Creator.
Generate a key and save the JSON file in your app (a one-time step).
Add a volume, volumeMounts, and the GOOGLE_APPLICATION_CREDENTIALS env variable to deployment.yaml, as sketched below.
Use kubectl create secret generic [key name] --from-file=key.json=PATH-TO-KEY-FILE.json
Deploy your manifest using kubectl apply -f deployment.yaml.
These steps provide access to storage and the service account, which fixes the signingError.
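A sketch of the deployment.yaml wiring for the volume/volumeMounts/env steps above; the secret name gcp-key, the mount path, and the image are assumptions, and the secret is the one created with kubectl create secret generic:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/my-project/my-app:latest
        env:
        # the Google client libraries read the key from this path
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
      volumes:
      - name: google-cloud-key
        secret:
          secretName: gcp-key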

Terraform Kubernetes provider with EKS fails on configmap

I've followed the instructions to create an EKS cluster in AWS using Terraform.
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html
I've also copied the output for connecting to the cluster to ~/.kube/config-eks, and I've verified this works, as I've been able to connect to the cluster and manually deploy containers. However, now I'm trying to use the Terraform Kubernetes provider to connect to the cluster, but I cannot seem to configure the provider properly.
I've configured the provider to use my kubectl configuration, but when attempting to push a simple ConfigMap, I get an error stating the following:
configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system"
I know that the provider is picking up part of the configuration, but I cannot seem to get it to authenticate. I suspect this is because EKS uses the Heptio authenticator, and I'm not sure whether the Kubernetes Go client used by Terraform supports it. However, given that Terraform released their AWS EKS support when EKS went GA, I doubt they wouldn't also update their Kubernetes provider to work with it.
Is it possible to even do this now? Are there alternatives?
Exec auth was added here: https://github.com/kubernetes/client-go/commit/19c591bac28a94ca793a2f18a0cf0f2e800fad04
This is what custom authentication plugins use, and it was published on Feb 7th.
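For reference, this is what an exec-based user entry looks like in a kubeconfig once a client supports it; the cluster name is an assumption, and the command was called heptio-authenticator-aws before being renamed aws-iam-authenticator:

users:
- name: eks-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      # emit a token for this EKS cluster on demand
      args:
      - token
      - -i
      - my-eks-cluster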
Right now, Terraform doesn't support the new exec-based authentication provider, but there is an issue open with a workaround: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
That said, if I get some free time I will work on a PR.

How do I authenticate with Kubernetes kubectl using a username and password?

I've got a username and password, how do I authenticate kubectl with them?
Which command do I run?
I've read through https://kubernetes.io/docs/reference/access-authn-authz/authorization/ and https://kubernetes.io/docs/reference/access-authn-authz/authentication/, but cannot find any relevant information there for this case.
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/
The above does not seem to work:
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods in the namespace "default": Unknown user "client"
Kubernetes provides a number of different authentication mechanisms. Providing a username and password directly to the cluster (as opposed to using an OIDC provider) would indicate that you're using Basic authentication, which hasn't been the default option for a number of releases.
The syntax you've listed appears right, assuming that the cluster supports basic authentication.
The error you're seeing is similar to the one here, which may suggest that the cluster you're using doesn't currently support the authentication method you're using.
Additional information about what Kubernetes distribution and version you're using would make it easier to provide a better answer, as there is a lot of variety in how k8s handles authentication.
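One thing worth checking regardless of the mechanism: kubectl config set-credentials only defines the user entry; the active context also has to point at it, or kubectl keeps authenticating as whatever user the context already names (which could explain the cluster seeing "client"). A sketch, with the cluster and context names assumed:

kubectl config set-context my-context --cluster=my-cluster --user=cluster-admin
kubectl config use-context my-context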
You should have a group set for the authenticating user in the static password file.
Example:
password1,user1,userid1,system:masters
password2,user2,userid2
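For context, those lines are rows of a static password file (password,user,uid,group) that the API server is started with on clusters that still support basic authentication; the file path here is an assumption, and the flag has been removed in newer Kubernetes releases:

kube-apiserver --basic-auth-file=/etc/kubernetes/users.csv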
Reference:
"Use a credential with the system:masters group, which is bound to the cluster-admin super-user role by the default bindings."
https://kubernetes.io/docs/reference/access-authn-authz/rbac/

Kubernetes system:serviceaccount can't access service

I'm trying to follow this tutorial to set up an nginx-ingress controller.
It seems it was written before RBAC was fully integrated into Kubernetes. When I get to the final step of running nginx-controller.yaml, I get back an authorization error:
no service with name default/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:default:default" cannot get services in the namespace "default"
What do I need to do to make this work with RBAC?
That hackernoon post (like most of them) is incorrect. Specifically, there are no RBAC objects, and the Deployment is not assigned a service account (i.e., no serviceAccountName:).
To ensure that you have the right (or enough) RBAC objects, check out the RBAC-* objects at https://github.com/mateothegreat/k8-byexamples-ingress-controller/tree/master/manifests.
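For orientation, those objects have roughly the following shape; this is a minimal sketch keyed to the exact error above (the names are illustrative, and a real ingress controller needs a broader rule set, which is what the repo's manifests provide):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: default
---
# read access to the resources the controller watches
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets", "configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress

The controller's Deployment must then set serviceAccountName: nginx-ingress in its pod spec, which is exactly the field the post leaves out.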