How to edit a Kubernetes ServiceAccount's namespace

I have a service account named myservice:
$ kubectl get serviceaccount
NAME        SECRETS   AGE
default     1         15d
myservice   1         15d
$ kubectl get serviceaccount myservice -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-06-13T12:41:18Z
  name: myservice
  namespace: default
...
I want to change the service account's namespace from default to development.
I tried to edit it with:
kubectl edit serviceaccount myservice
After saving it I received:
A copy of your changes has been stored to "/tmp/kubectl-edit-gjae6.yaml"
error: the namespace from the provided object "development" does not match the namespace "default". You must pass '--namespace=development' to perform this operation.
So I tried it as the message suggested, and it still didn't work:
$ kubectl edit serviceaccount myservice --namespace=development
Error from server (NotFound): serviceaccounts "myservice" not found
The namespace development exists, and so does the service account myservice.

A namespace is part of an object's identity, so you cannot move an existing ServiceAccount from default to development by editing it. Instead, create a new myservice ServiceAccount in the development namespace, then remove the one in the default namespace. The NotFound error occurs because myservice does not yet exist in the development namespace.
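For example, a minimal way to do this with kubectl (assuming nothing else on the ServiceAccount needs to be preserved):

$ kubectl create serviceaccount myservice --namespace=development
$ kubectl delete serviceaccount myservice --namespace=default

Note that the new ServiceAccount gets a freshly generated token secret, so anything that mounted the old token must be pointed at the new account.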


Terraform kubectl provider manifest to specific namespace

I am learning how to translate kubectl deployments to Terraform. I am currently facing issues getting services to work as intended with the Terraform provider kubectl once I specify a namespace.
I have confirmed the Terraform script works when doing the equivalent kubectl apply to the default namespace.
What is the proper methodology in Terraform, using the kubectl provider, to do the equivalent of apply -n namespace?
The two different approaches I have tried are:
resource "kubectl_manifest" "example" {
override_namespace = kubernetes_namespace_v1.namespace.metadata[0].name
yaml_body = file("${path.cwd},deploy.yaml")
}
And also:
resource "kubectl_manifest" "example" {
yaml_body = file("${path.cwd},deploy.yaml")
}
while also adding the namespace to the deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  namespace: namespace
...
---
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: namespace
Then when I try to confirm that the service is functioning as intended via:
kubectl logs example-6878fd468-9vgkm
Error from server (NotFound): pods "example-6878fd468-9vgkm" not found
Your Terraform configuration is fine. You need to pass the -n switch to your kubectl logs command as well, so the pod is looked up in the right namespace:
kubectl logs example-6878fd468-9vgkm -n namespace
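If you are unsure of the generated pod name, list the pods in that namespace first:
kubectl get pods -n namespace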

kubectl: create replicaset without a yml file

I am trying to create a ReplicaSet with Kubernetes. This time I don't have a yml file, which is why I am trying to create the ReplicaSet from the command line.
Why does kubectl create replicaset somename --image=nginx raise an error, and how can I fix it?
You cannot create a ReplicaSet with an imperative command. Only the following resources can be created using kubectl create:
kubectl create --help | awk '/Available Commands:/,/^$/'
Available Commands:
  clusterrole           Create a cluster role
  clusterrolebinding    Create a cluster role binding for a particular cluster role
  configmap             Create a config map from a local file, directory or literal value
  cronjob               Create a cron job with the specified name
  deployment            Create a deployment with the specified name
  ingress               Create an ingress with the specified name
  job                   Create a job with the specified name
  namespace             Create a namespace with the specified name
  poddisruptionbudget   Create a pod disruption budget with the specified name
  priorityclass         Create a priority class with the specified name
  quota                 Create a quota with the specified name
  role                  Create a role with single rule
  rolebinding           Create a role binding for a particular role or cluster role
  secret                Create a secret using specified subcommand
  service               Create a service using a specified subcommand
  serviceaccount        Create a service account with the specified name
You may, however, create the ReplicaSet by piping a manifest to kubectl create -f, which reads from stdin (-):
echo "apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# modify replicas according to your case
replicas: 3
selector:
matchLabels:
tier: frontend
template:
metadata:
labels:
tier: frontend
spec:
containers:
- name: php-redis
image: gcr.io/google_samples/gb-frontend:v3
" |kubectl create -f -
Hello, hope you are enjoying your Kubernetes journey!
In fact, you cannot create an RS directly, but if you really don't want to use a manifest, you can create one indirectly via a deployment:
❯ kubectl create deployment --image nginx:1.21 --port 80 test-rs
deployment.apps/test-rs created
here it is:
❯ kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
test-rs-5c99c9b8c   1         1         1       15s

Who has created a namespace and have access to it

I want to understand who created a namespace and who has access to a specific namespace in OpenShift.
This is specifically required as I need to block access and be very selective about who has it.
Who created a specific namespace in OpenShift can be found by checking the parent Project's annotations:
$ oc describe project example-project
Name:         example-project
Created:      15 months ago
Labels:       <none>
Annotations:  alm-manager=operator-lifecycle-manager.olm-operator
              openshift.io/display-name=Example Project
              openshift.io/requester=**here is the username**
...
Who has access to a specific namespace: that depends on what you mean. The oc client allows you to review privileges for a given verb, in a given namespace, ... something like this:
$ oc adm policy who-can get pods -n specific-namespace
resourceaccessreviewresponse.authorization.openshift.io/<unknown>

Namespace: specific-namespace
Verb:      get
Resource:  pods

Users:   username1
         username2
         ...
         system:admin
         system:kube-scheduler
         system:serviceaccount:default:router
         system:serviceaccount:kube-service-catalog:default
Groups:  system:cluster-admins
         system:cluster-readers
         system:masters
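Since you mention wanting to block access, you can then revoke a user's role binding in that namespace, for example (the edit role here is an assumption; use whichever role the user actually holds):

$ oc adm policy remove-role-from-user edit username1 -n specific-namespace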

Problem deploying cockroachdb-client-secure

I am following this Helm + secure guide:
https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#helm
I deployed the cluster with this command: $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb --namespace=thesis-crdb
This is how it looks: $ helm list --namespace=thesis-crdb
NAME         NAMESPACE     REVISION   UPDATED                                 STATUS     CHART               APP VERSION
my-release   thesis-crdb   1          2021-01-31 17:38:52.8102378 +0100 CET   deployed   cockroachdb-5.0.4   20.2.4
Here is how it looks using: $ kubectl get all --namespace=thesis-crdb
NAME                                    READY   STATUS      RESTARTS   AGE
pod/my-release-cockroachdb-0            1/1     Running     0          7m35s
pod/my-release-cockroachdb-1            1/1     Running     0          7m35s
pod/my-release-cockroachdb-2            1/1     Running     0          7m35s
pod/my-release-cockroachdb-init-fhzdn   0/1     Completed   0          7m35s

NAME                                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE
service/my-release-cockroachdb          ClusterIP   None         <none>        26257/TCP,8080/TCP   7m35s
service/my-release-cockroachdb-public   ClusterIP   10.xx.xx.x   <none>        26257/TCP,8080/TCP   7m35s

NAME                                      READY   AGE
statefulset.apps/my-release-cockroachdb   3/3     7m35s

NAME                                    COMPLETIONS   DURATION   AGE
job.batch/my-release-cockroachdb-init   1/1           43s        7m36s
In the my-values.yaml file I only changed tls from false to true:
tls:
  enabled: true
So far so good, but from here on the guide isn't really working for me anymore. I try, as they say, to get the CSRs: kubectl get csr --namespace=thesis-crdb
No resources found
Ok, perhaps they are not needed. I carry on and deploy the secure client.
I download the file: https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
and change serviceAccountName: cockroachdb to serviceAccountName: my-release-cockroachdb.
I try to deploy it with $ kubectl create -f client-secure.yaml --namespace=thesis-crdb but it throws this error:
Error from server (Forbidden): error when creating "client-secure.yaml": pods "cockroachdb-client-secure" is forbidden: error looking up service account thesis-crdb/my-release-cockroachdb: serviceaccount "my-release-cockroachdb" not found
Anyone got an idea how to solve this? I'm fairly sure it's something with the namespace that is messing it up.
I have tried putting the namespace in the metadata section:
metadata:
  namespace: thesis-crdb
And then try to deploy it with: kubectl create -f client-secure.yaml but to no avail:
Error from server (Forbidden): error when creating "client-secure.yaml": pods "cockroachdb-client-secure" is forbidden: error looking up service account thesis-crdb/my-release-cockroachdb: serviceaccount "my-release-cockroachdb" not found
You mention in the question that you changed serviceAccountName in the YAML:
And changes the serviceAccountName: cockroachdb to serviceAccountName: my-release-cockroachdb.
So the root cause of your issue is a ServiceAccount misconfiguration.
Background
In your cluster you have something called ServiceAccount.
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
For the ServiceAccount you should also configure RBAC, which grants the permissions to create resources.
Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.
RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.
If you don't have proper RBAC permissions you will not be able to create resources.
In Kubernetes you can find Role and ClusterRole. A Role sets permissions within a particular namespace, while a ClusterRole sets permissions cluster-wide.
Besides that, you also need to bind roles to subjects using a RoleBinding or ClusterRoleBinding.
In addition, if you use a cloud environment, you may also need special rights in the project. Your guide provides instructions for this here.
Root cause
I've checked the cockroachdb chart: it creates a ServiceAccount, Role, ClusterRole, RoleBinding and ClusterRoleBinding for cockroachdb and prometheus. There is no configuration for my-release-cockroachdb.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cockroachdb
...
  verbs:
  - create
  - get
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cockroachdb
  labels:
    app: cockroachdb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cockroachdb
...
In client-secure.yaml you changed serviceAccountName to my-release-cockroachdb, and Kubernetes cannot find that ServiceAccount, as it was created neither by a cluster administrator nor by the cockroachdb chart.
To list ServiceAccounts in the default namespace you can use $ kubectl get ServiceAccount; to check all ServiceAccounts in the cluster, add -A: $ kubectl get ServiceAccount -A.
Solution
Option 1 is to use an existing ServiceAccount with proper permissions, i.e. the SA created by the cockroachdb chart, which is cockroachdb, not my-release-cockroachdb.
Option 2 is to create a ServiceAccount, Role/ClusterRole and RoleBinding/ClusterRoleBinding for my-release-cockroachdb, as sketched below.
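A minimal sketch of Option 2, reusing the ClusterRole named cockroachdb that the chart already creates (the binding name is an assumption; adjust the role if you need different permissions):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-release-cockroachdb
  namespace: thesis-crdb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-release-cockroachdb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cockroachdb
subjects:
- kind: ServiceAccount
  name: my-release-cockroachdb
  namespace: thesis-crdb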

How to get kubernetes applications to change deploy configs

I have two applications running in K8. APP A has write access to a data store and APP B has read access.
APP A needs to be able to change APP B's running deployment.
Currently we do this manually, by kicking off a process in APP A which adds a new DB (say, db bob) to the data store. Then we do:
kubectl edit deploy A
And change an environment variable to bob. This starts a rolling restart of all the pods of APP B. We would like to automate this process.
Is there anyway to get APP A to change the deployment config of APP B in k8?
Firstly answering your main question:
Is there anyway to get a service to change the deployment config of another service in k8?
From my understanding you are calling them Service A and Service B after their purpose in real life, but to facilitate understanding I suggested an edit to call them APP A and APP B, because:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
So if in your question you meant:
"Is there anyway to get APP A to change the deployment config of APP B in k8?"
Then yes, you can give a pod admin privileges to manage other components of the cluster, and use the kubectl set env command to change/add envs.
In order to achieve this, you will need:
A Service Account with needed permissions in the namespace.
NOTE: In my example below, since I don't know whether you are working with multiple namespaces, I'm using a ClusterRole, granting cluster-admin to a specific service account. If you use only one namespace for these apps, consider a Role instead (see the sketch right after this list).
A ClusterRoleBinding binding the permissions of the service account to a role of the Cluster.
The kubectl client inside the pod (added manually or by modifying the Docker image) of APP A.
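A minimal sketch of the more restrictive, single-namespace alternative mentioned in the note above (the role name deployment-editor and the verb list are assumptions; the ServiceAccount matches the k8s-role account used below):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-editor
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-editor
  namespace: default
subjects:
- kind: ServiceAccount
  name: k8s-role
  namespace: default
roleRef:
  kind: Role
  name: deployment-editor
  apiGroup: rbac.authorization.k8s.io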
Steps to Reproduce:
Create a deployment to which the cluster-admin privileges will apply; I'm naming it manager-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager-deploy
  labels:
    app: manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
  template:
    metadata:
      labels:
        app: manager
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: manager
        image: gcr.io/google-samples/node-hello:1.0
Create a deployment with an environment var, mocking your APP B. I'm naming it deploy-env.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-deploy
  labels:
    app: env-replace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-replace
  template:
    metadata:
      labels:
        app: env-replace
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: env-replace
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: DATASTORE_NAME
          value: "john"
Create a ServiceAccount and a ClusterRoleBinding with cluster-admin privileges; I'm naming it service-account-for-pod.yaml (notice it's referenced by serviceAccountName in manager-deploy.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-role
subjects:
- kind: ServiceAccount
  name: k8s-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-role
Apply service-account-for-pod.yaml, deploy-env.yaml and manager-deploy.yaml, then list the current environment variables from the deploy-env pod:
$ kubectl apply -f manager-deploy.yaml
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created
$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
Shell into the manager pod, download the kubectl binary and run kubectl set env deployment/<deployment_name> VAR_NAME=VALUE:
$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash
root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob
Verify the env var value on the pod (notice that the pod is recreated when the deployment is modified):
$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob
Let me know in the comments if you have any doubt on how to apply this solution to your environment.
You could give APP A access to your cluster (install kubectl in it and allow traffic from APP A's NAT to your cluster master) and execute your commands from there via cron jobs, Jenkins, ssh or similar. You can also use kubectl patch, or fetch the current config of the second deployment with kubectl get deployment <name> -o yaml --export > deployment.yaml, edit it with some regex/awk/sed and apply it again. Since the --export flag is being deprecated, you might as well have APP A download the Git repo and apply the new config that way.
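As a sketch of the kubectl patch route, reusing the names from the example above (env-deploy, env-replace, DATASTORE_NAME), a strategic merge patch that updates the variable and triggers a rollout could look like this:

kubectl patch deployment env-deploy -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'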
Thank you for the answers, all (upvoted, as they were both correct). I am just adding my own answer to document exactly what solved it for me.
In my case I just needed to make use of the PATCH URL available in the Kubernetes API. That, plus this example, worked.
All I needed to do was create a service account to restrict who can patch what, restrict that account to APP A, and use the Java client in APP A to update the deployment of APP B. After that the pods roll, and done.
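For reference, the raw API call such a client makes is equivalent to something like this from inside a pod, using the mounted ServiceAccount token (a sketch; the namespace, deployment and container names reuse the earlier example and are assumptions):

$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/strategic-merge-patch+json" \
    -X PATCH \
    https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments/env-deploy \
    -d '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'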