I have a pod and it has 2 containers.
If I run the command "kubectl logs pod_name"
it does not show the logs; I need to pass a container name along with this command.
Is there a way to display the logs of both containers when I run "kubectl logs pod_name"?
To display the logs of a particular container:
kubectl logs <pod-name> -c <container_name>
To display the logs of all containers, use the command below:
kubectl logs <pod-name> --all-containers=true
The REST API to get the logs of a pod is:
GET /api/v1/namespaces/{namespace}/pods/{name}/log
You can pass container as a query param to the above API to get the logs of a particular container:
GET /api/v1/namespaces/{namespace}/pods/{name}/log?container=containername
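If you want to try the endpoint directly, one quick way (assuming kubectl proxy is running on port 8001 and using placeholder pod/container names) is:
kubectl proxy --port=8001 &
curl "http://localhost:8001/api/v1/namespaces/default/pods/mypod/log?container=mycontainer"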
When you call the above APIs from code using a service account or a user, you need an RBAC Role and RoleBinding like the ones below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
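A minimal RoleBinding sketch that grants this Role to a service account (the subject name log-reader-sa is a placeholder; substitute your own service account or user) could look like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: pod-logs-reader-binding
subjects:
- kind: ServiceAccount
  name: log-reader-sa        # placeholder: your service account
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-logs-reader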
The API is documented here
I have created a ScaledObject and a TriggerAuthentication using KEDA, in order to horizontally autoscale my pods based on a RabbitMQ queue length.
But for some reason, when I try to query my ScaledObjects like this:
kubectl get ScaledObjects -n mynamespace
I am not getting anything.
However, when I apply the YAML file which contains all of the information about the ScaledObject, the output is this:
scaledobject.keda.sh/rabbitmq-scaledobject unchanged
I am also able to edit this ScaledObject using this command:
kubectl edit scaledobject.keda.sh/rabbitmq-scaledobject -n mynamespace
But I am not sure why it is not listed when running this command:
kubectl get ScaledObjects -n mynamespace
The autoscaler does work; I am just wondering why it is not listed.
Thanks in advance,
Yaniv
This might be a case of having more than one Custom Resource defined with the same kind but a different apiVersion.
For example, these two versions of KEDA create the ScaledObject with a different apiVersion:
1.4:
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
2.0:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
So when you run kubectl get ScaledObjects -n mynamespace, it might be defaulting to the one you are not using.
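One way to check which of these CRDs is actually installed, and to query the resource unambiguously, is to use the fully qualified resource name (only one of the two group-qualified names will exist, depending on your KEDA version):
kubectl api-resources | grep -i scaledobject
kubectl get scaledobjects.keda.sh -n mynamespace
kubectl get scaledobjects.keda.k8s.io -n mynamespace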
1. I tried to get Kubernetes cluster details by entering the command below.
kubectl describe service {NAME}
2. I got the error message below.
Error from server (Forbidden): services "max-Object-detectoer" is forbidden: User "{username}" cannot get resource "services" in API group "" in the namespace "default"
It looks like the service account (user) you are using doesn't have access to that Service.
You can create a ClusterRole like so:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-reader   # ClusterRoles are cluster-scoped, so no namespace is set
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["services"]
  verbs: ["get", "watch", "list"]
Then you can create a ClusterRoleBinding, giving your service account the above-mentioned role, like so (note that --serviceaccount expects the <namespace>:<name> format):
kubectl create clusterrolebinding service-reader-pod \
  --clusterrole=service-reader \
  --serviceaccount={namespace}:{name_of_service_account}
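To verify that the binding took effect, you could then check with kubectl auth can-i (same placeholders as above; adjust the target namespace as needed):
kubectl auth can-i get services \
  --as=system:serviceaccount:{namespace}:{name_of_service_account} \
  -n default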
Let me know if this worked for you.
I am following this Helm + secure guide:
https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#helm
I deployed the cluster with this command: $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb --namespace=thesis-crdb
This is how it looks: $ helm list --namespace=thesis-crdb
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-release thesis-crdb 1 2021-01-31 17:38:52.8102378 +0100 CET deployed cockroachdb-5.0.4 20.2.4
Here is how it looks using: $ kubectl get all --namespace=thesis-crdb
NAME READY STATUS RESTARTS AGE
pod/my-release-cockroachdb-0 1/1 Running 0 7m35s
pod/my-release-cockroachdb-1 1/1 Running 0 7m35s
pod/my-release-cockroachdb-2 1/1 Running 0 7m35s
pod/my-release-cockroachdb-init-fhzdn 0/1 Completed 0 7m35s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-release-cockroachdb ClusterIP None <none> 26257/TCP,8080/TCP 7m35s
service/my-release-cockroachdb-public ClusterIP 10.xx.xx.x <none> 26257/TCP,8080/TCP 7m35s
NAME READY AGE
statefulset.apps/my-release-cockroachdb 3/3 7m35s
NAME COMPLETIONS DURATION AGE
job.batch/my-release-cockroachdb-init 1/1 43s 7m36s
In the my-values.yaml file I only changed tls from false to true:
tls:
  enabled: true
So far so good, but from here on the guide isn't really working for me anymore. I try, as they say, to get the CSRs: kubectl get csr --namespace=thesis-crdb
No resources found
OK, perhaps it's not needed. I carry on to deploy the secure client.
I download the file: https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
and change serviceAccountName: cockroachdb to serviceAccountName: my-release-cockroachdb.
I try to deploy it with $ kubectl create -f client-secure.yaml --namespace=thesis-crdb but it throws this error:
Error from server (Forbidden): error when creating "client-secure.yaml": pods "cockroachdb-client-secure" is forbidden: error looking up service account thesis-crdb/my-release-cockroachdb: serviceaccount "my-release-cockroachdb" not found
Does anyone have an idea how to solve this? I'm fairly sure it's something with the namespace that is messing it up.
I have tried to put the namespace in the metadata section:
metadata:
  namespace: thesis-crdb
and then deploy it with kubectl create -f client-secure.yaml, but to no avail:
Error from server (Forbidden): error when creating "client-secure.yaml": pods "cockroachdb-client-secure" is forbidden: error looking up service account thesis-crdb/my-release-cockroachdb: serviceaccount "my-release-cockroachdb" not found
You mention in the question that you changed the serviceAccountName in the YAML:
and change serviceAccountName: cockroachdb to serviceAccountName: my-release-cockroachdb.
So the root cause of your issue is a ServiceAccount misconfiguration.
Background
In your cluster you have something called a ServiceAccount.
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
For a ServiceAccount you also need to configure RBAC, which grants it permissions to create resources.
Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.
RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.
If you don't have the proper RBAC permissions, you will not be able to create resources.
In Kubernetes you can find Role and ClusterRole. A Role sets permissions within a particular namespace, while a ClusterRole sets permissions across the whole cluster.
Besides that, you also need to bind roles to subjects using a RoleBinding or ClusterRoleBinding.
In addition, if you use a cloud environment, you may also need special rights in the project. Your guide provides instructions to do this here.
Root cause
I've checked the cockroachdb chart; it creates a ServiceAccount, Role, ClusterRole, RoleBinding and ClusterRoleBinding for cockroachdb and prometheus. There is no configuration for my-release-cockroachdb.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cockroachdb
...
  verbs:
  - create
  - get
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cockroachdb
  labels:
    app: cockroachdb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cockroachdb
...
In client-secure.yaml you changed serviceAccountName to my-release-cockroachdb, and Kubernetes cannot find that ServiceAccount, as it was created neither by the cluster administrator nor by the cockroachdb chart.
To list ServiceAccounts in the default namespace you can use the command $ kubectl get serviceaccount; to check all ServiceAccounts in the cluster, add -A: $ kubectl get serviceaccount -A.
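In your case you would most likely list the ServiceAccounts in the release namespace to see what the chart actually created:
kubectl get serviceaccount -n thesis-crdb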
Solution
Option 1 is to use an existing ServiceAccount with the proper permissions, like the SA created by the cockroachdb chart, which is cockroachdb, not my-release-cockroachdb.
Option 2 is to create a ServiceAccount, Role/ClusterRole and RoleBinding/ClusterRoleBinding for my-release-cockroachdb, as sketched below.
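A minimal sketch of Option 2, assuming you simply reuse the ClusterRole the chart already created (cockroachdb) and only add the missing ServiceAccount and binding; the binding name is a placeholder:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-release-cockroachdb
  namespace: thesis-crdb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-release-cockroachdb-binding   # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cockroachdb                      # existing ClusterRole from the chart
subjects:
- kind: ServiceAccount
  name: my-release-cockroachdb
  namespace: thesis-crdb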
We're testing Shiny-proxy Kubernetes containers, and each application spins up its own container; it's working fine up to this point. We have made some changes to create a PVC/PV to persist user-specific data for each container, and noticed that the service account is unable to create the PVC even though I have the following role configured for the account. In general, are there any other steps for making sure that the SA is able to access/create the PVC?
The PV/PVC are accessible when testing from a normal container, but there seems to be an issue with the service account roles/permissions that are used to create new containers.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: sp-ns
  name: sp-sa
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
I have verified that the service account roles are set correctly, as the command below returns 'yes':
kubectl auth can-i create pvc --as=system:serviceaccount:sp-ns:sp-sa -n sp-ns
Error during container creation from the application:
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: http://localhost:8001/api/v1/namespaces/sp-ns/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "sp-pod-92e1efc0-0859-4a87-8b9b-04d6adaa11f5" is forbidden: user "system:serviceaccount:sp-ns:sp-sa" is not an admin and does not have permissions to use host bind mounts for resource .
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:503)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:440)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:406)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:365)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:234)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:735)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:325)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:321)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.lambda$createNew$0(BaseOperation.java:336)
at io.fabric8.kubernetes.api.model.DoneablePod.done(DoneablePod.java:26)
at eu.openanalytics.containerproxy.backend.kubernetes.KubernetesBackend.startContainer(KubernetesBackend.java:223)
at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.doStartProxy(AbstractContainerBackend.java:129)
at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.startProxy(AbstractContainerBackend.java:110)
... 95 more
The container is not running as privileged. Use privileged: true in the pod spec, as in the sketch below.
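A minimal sketch of what that could look like (the pod name and image are placeholders, not taken from the original setup):
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod-example              # placeholder pod name
  namespace: sp-ns
spec:
  serviceAccountName: sp-sa
  containers:
  - name: app
    image: example/shiny-app:latest # placeholder image
    securityContext:
      privileged: true              # run this container as privileged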
The service account doesn't have the cluster-admin role. Use the command below to grant it:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=sp-ns:sp-sa
I checked the Kubernetes docs and found that the pods/exec resource has no verbs listed,
and I do not know how to control access to it on its own. Since I create the pod, someone else needs to access it using 'exec' but must not be able to create anything in my cluster.
How can I implement this?
Since pods/exec is a subresource of pods, if you want to exec into a pod you first need to get the pod, so here is my Role definition:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
Maybe you can try this kubectl plugin: https://github.com/zhangweiqaz/go_pod
kubectl go -h
kubectl exec in pod with username. For example:
kubectl go pod_name
Usage:
go [flags]
Flags:
-c, --containerName string containerName
-h, --help help for go
-n, --namespace string namespace
-u, --username string username, this user must exist in image, default: dev