How to get Kubernetes applications to change deploy configs

I have two applications running in K8s. APP A has write access to a data store and APP B has read access.
APP A needs to be able to change APP B's running deployment.
Currently we do this manually: we kick off a process in APP A which adds a new DB to the data store (say db bob). Then we run:
kubectl edit deploy B
and change an environment variable to bob. This starts a rolling restart of all the pods of APP B. We would like to automate this process.
Is there any way to get APP A to change the deployment config of APP B in K8s?

Firstly, answering your main question:
Is there any way to get a service to change the deployment config of another service in k8?
From my understanding you called them Service A and Service B after their purpose in real life, but to facilitate understanding I suggested an edit to call them APP A and APP B, because:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
So if in your question you meant:
"Is there any way to get APP A to change the deployment config of APP B in k8?"
Then yes, you can give a pod admin privileges to manage other components of the cluster and use the kubectl set env command to change/add environment variables.
In order to achieve this, you will need:
A Service Account with needed permissions in the namespace.
NOTE: Since I don't know whether you are working with multiple namespaces, my example below uses a ClusterRole, granting cluster-admin to a specific service account. If you use only one namespace for these apps, consider a Role instead.
A ClusterRoleBinding that binds the service account to a cluster role.
The kubectl client inside APP A's pod (added manually or by modifying the Docker image).
Steps to Reproduce:
Create a deployment to hold the cluster-admin privileges; I'm naming it manager-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager-deploy
  labels:
    app: manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
  template:
    metadata:
      labels:
        app: manager
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: manager
        image: gcr.io/google-samples/node-hello:1.0
Create a deployment with an environment var, mocking your APP B. I'm naming it deploy-env.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-deploy
  labels:
    app: env-replace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-replace
  template:
    metadata:
      labels:
        app: env-replace
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: env-replace
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: DATASTORE_NAME
          value: "john"
Create a ServiceAccount and a ClusterRoleBinding with cluster-admin privileges; I'm naming it service-account-for-pod.yaml (notice it's referenced in manager-deploy.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-role
subjects:
- kind: ServiceAccount
  name: k8s-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-role
Apply service-account-for-pod.yaml, deploy-env.yaml, and manager-deploy.yaml, then list the current environment variables from the deploy-env pod:
$ kubectl apply -f manager-deploy.yaml
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created
$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
Shell into the manager pod, download the kubectl binary, and run kubectl set env deployment/<deployment_name> VAR_NAME=VALUE:
$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash
root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob
Verify the env var value on the pod (notice that the pod is recreated when the deployment is modified):
$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob
Let me know in the comments if you have any doubts about how to apply this solution to your environment.

You could give APP A access to your cluster (install kubectl and allow traffic from APP A's NAT to your cluster master) and execute your commands with some cron jobs, Jenkins, ssh, or similar. You can also use kubectl patch, or fetch the current config of the second deployment with kubectl get deployment <name> -o yaml --export > deployment.yaml, edit it there with some regex/awk/sed and then apply it. Note that the --export method is getting deprecated, so you might as well have APP A pull the Git repo and apply the new config that way.
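For example, a kubectl patch equivalent of changing the env var might look like this (a sketch reusing the env-deploy deployment from the answer above; JSON patch paths are positional, so the container and env indexes are assumptions that must match your actual spec):
# indexes (containers/0, env/0) are assumptions about your spec layout
kubectl patch deployment env-deploy --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/env/0/value", "value": "bob"}]'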

Thank you for the answers, all (upvoted, as they were both correct). I am just putting in my own answer to document exactly what solved it for me.
In my case I just needed to make use of the patch URL available in the Kubernetes API. That, plus this example, worked.
All I needed to do was create a service account to restrict who can patch where, restrict that account to APP A, and use the Java client in APP A to update the chart of APP B. After that the pods roll and it's done.
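For reference, the patch-URL approach boils down to a single HTTP PATCH against the Deployment object; the Java client performs the same call under the hood. A minimal sketch with curl from inside a pod, reusing the env-deploy/env-replace names from the first answer and the default ServiceAccount token mount path (all names here are assumptions):
# assumes the env-deploy/env-replace names from the walkthrough above
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -X PATCH \
  "https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments/env-deploy" \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'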

Related

kubectl: create replicaset without a yml file

I am trying to create a ReplicaSet with Kubernetes. This time I don't have a yml file, which is why I am trying to create the ReplicaSet from the command line.
Why does kubectl create replicaset somename --image=nginx raise an error, and how can I fix this?
You cannot create a ReplicaSet from the command line. Only the following resources can be created using kubectl create:
kubectl create --help |awk '/Available Commands:/,/^$/'
Available Commands:
  clusterrole           Create a cluster role
  clusterrolebinding    Create a cluster role binding for a particular cluster role
  configmap             Create a config map from a local file, directory or literal value
  cronjob               Create a cron job with the specified name
  deployment            Create a deployment with the specified name
  ingress               Create an ingress with the specified name
  job                   Create a job with the specified name
  namespace             Create a namespace with the specified name
  poddisruptionbudget   Create a pod disruption budget with the specified name
  priorityclass         Create a priority class with the specified name
  quota                 Create a quota with the specified name
  role                  Create a role with single rule
  rolebinding           Create a role binding for a particular role or cluster role
  secret                Create a secret using specified subcommand
  service               Create a service using a specified subcommand
  serviceaccount        Create a service account with the specified name
However, you may create the ReplicaSet the following way; in the example below, kubectl create -f reads the manifest from stdin (-):
echo "apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# modify replicas according to your case
replicas: 3
selector:
matchLabels:
tier: frontend
template:
metadata:
labels:
tier: frontend
spec:
containers:
- name: php-redis
image: gcr.io/google_samples/gb-frontend:v3
" |kubectl create -f -
Hello, hope you are enjoying your Kubernetes journey!
In fact, you cannot create an RS directly, but if you really don't want to use a manifest, you can surely create one via a deployment:
❯ kubectl create deployment --image nginx:1.21 --port 80 test-rs
deployment.apps/test-rs created
Here it is:
❯ kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
test-rs-5c99c9b8c   1         1         1       15s
bguess

Openshift - Run pod only for specific time period

I'm new to OpenShift. We are using OpenShift deployments to deploy our multiple microservices (Spring Boot applications). The deployment is done from a Docker image.
We have a situation where we need to stop one microservice alone from midnight till 5 AM (due to an external dependency).
Could someone suggest a way to do this automatically?
I was able to run
oc scale deployment/sampleservice --replicas=0
manually to bring the number of pods to zero, and later scale back up to 1 manually.
I'm not sure how to run this command automatically at a specific time. A CronJob in OpenShift should be able to do this, but I'm not sure how to configure a CronJob to execute an oc command.
Any guidance will be of great help.
Using a cronjob is a good option.
First, you'll need an image that has the oc command line client available. I'm sure there's a prebuilt one out there somewhere, but since this will be running with privileges in your OpenShift cluster you want something you trust, which probably means building it yourself. I used:
FROM quay.io/centos/centos:8
RUN curl -o /tmp/openshift-client.tar.gz \
      https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz; \
    tar -C /bin -xf /tmp/openshift-client.tar.gz oc kubectl; \
    rm -f /tmp/openshift-client.tar.gz
ENTRYPOINT ["/bin/oc"]
In order to handle authentication correctly, you'll need to create a ServiceAccount and then assign it appropriate privileges through a Role and a RoleBinding. I created a ServiceAccount named oc-client-sa:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oc-client-sa
  namespace: oc-client-example
A Role named oc-client-role that grants privileges to Pod and Deployment objects:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oc-client-role
  namespace: oc-client-example
rules:
- verbs:
  - get
  - list
  - create
  - watch
  - patch
  apiGroups:
  - ''
  resources:
  - pods
- verbs:
  - get
  - list
  - create
  - watch
  - patch
  apiGroups:
  - 'apps'
  resources:
  - deployments
  - deployments/scale
And a RoleBinding that connects the oc-client-sa ServiceAccount to the oc-client-role Role:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oc-client-rolebinding
  namespace: oc-client-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: oc-client-role
subjects:
- kind: ServiceAccount
  name: oc-client-sa
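Apply all three objects before creating the CronJob (a sketch; the filenames are hypothetical, use whatever you saved the manifests as):
$ # filenames are hypothetical
$ oc apply -f oc-client-sa.yaml -f oc-client-role.yaml -f oc-client-rolebinding.yaml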
With all this in place, we can write a CronJob like this that will scale down a deployment at a specific time. Note that we're running the jobs using the oc-client-sa ServiceAccount we created earlier:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-web-down
  namespace: oc-client-example
spec:
  schedule: "00 00 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: oc-client-sa
          restartPolicy: Never
          containers:
          - image: docker.io/larsks/openshift-client
            args:
            - scale
            - deployment/sampleservice
            - --replicas=0
            name: oc-scale-down
You would write a similar one to scale things back up at 5AM.
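A minimal sketch of that companion CronJob, changing only the name, schedule, and replica count:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-web-up
  namespace: oc-client-example
spec:
  schedule: "00 05 * * *"  # 5 AM, mirroring the midnight job above
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: oc-client-sa
          restartPolicy: Never
          containers:
          - image: docker.io/larsks/openshift-client
            args:
            - scale
            - deployment/sampleservice
            - --replicas=1
            name: oc-scale-up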
The oc client will automatically use the credentials provided to your pod by Kubernetes because of the serviceAccountName setting.
API
You can use the OpenShift REST API client and write simple Python code that scales down the replicas. Pack this Python script into a Docker image and run it as a CronJob inside the OpenShift cluster.
Simple Curl
Run a simple curl inside the CronJob to scale the deployment up and down at certain times.
Here is a simple curl reference to scale the deployment: https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps/v1beta1.Deployment.html#Get-apis-apps-v1beta1-namespaces-namespace-deployments-name-scale
API documentation: https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps/v1beta1.Deployment.html
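For illustration, such a PATCH against the scale subresource might look like this (a sketch, assuming it runs in a pod whose ServiceAccount is allowed to patch deployments/scale; the namespace and API server address are assumptions):
# <namespace> is a placeholder; kubernetes.default.svc is the in-cluster API address
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -X PATCH \
  "https://kubernetes.default.svc/apis/apps/v1/namespaces/<namespace>/deployments/sampleservice/scale" \
  -d '{"spec": {"replicas": 0}}'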
CLI
If you don't want to run custom code as a Docker image in a Kubernetes CronJob, you can also run the oc command directly; in that case, use a Docker image containing the oc CLI inside the CronJob and fire the command from there.
oc CLI image: https://hub.docker.com/r/widerin/openshift-cli
Don't forget that authentication is required in both cases, whether using the API or running a command inside the CronJob.

Does a Pod use the k8s API Server to fetch spec declarations?

I'm going through this post, where we bind a Role to a Service Account and then query the API Server using said Service Account. The role only has list permission to the pods resource.
I did an experiment where I mounted a random Secret into a Pod that is using the above Service Account and my expectation was that the Pod would attempt to query the Secret and fail the creation process, but the pod is actually running successfully with the secret mounted in place.
So I'm left wondering when a pod actually needs to query the API Server for resources, or whether the pod creation process is special and gets the resources through other means.
Here is the actual list of resources I used for my test:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-rb
subjects:
- kind: ServiceAccount
  name: example-sa
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
data:
  password: c3RhY2tvdmVyZmxvdw==
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
  - name: webserver
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /mysecrets
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
...
I must admit that at first I didn't quite get your point, but when I read your question again I saw what it's all about. First of all, I must say that your initial interpretation is wrong. Let me explain.
You wrote:
I did an experiment where I mounted a random Secret into a Pod that is using the above Service Account
Actually, the key word here is "I". The question is: who creates the Pod and who mounts a random Secret into it? And the answer to that question, from your perspective, is simple: me. When you create a Pod you don't use the above-mentioned ServiceAccount; you authorize your access to the Kubernetes API through entries in your .kube/config file. During the whole Pod creation process, the ServiceAccount you created is not used a single time.
and my expectation was that the Pod would attempt to query the Secret and fail the creation process, but the pod is actually running successfully with the secret mounted in place.
Why would it query the Secret if it doesn't use it?
You can test it in a very simple way. You just need to kubectl exec into your running Pod and try to run kubectl, query the Kubernetes API directly, or use one of the officially supported Kubernetes client libraries. Then you will see that you're allowed to perform only the specific operations listed in your Role, i.e. list Pods. If you attempt to run kubectl get secrets from within your Pod, it will fail.
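You can also check this from outside the Pod with kubectl auth can-i, impersonating the ServiceAccount (a quick sketch, assuming it lives in the default namespace):
$ kubectl auth can-i list pods --as=system:serviceaccount:default:example-sa
yes
$ kubectl auth can-i get secrets --as=system:serviceaccount:default:example-sa
no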
The result you get is totally expected and there is nothing surprising in the fact that a random Secret is successfully mounted and the Pod is created successfully every time. It's you who queries the Kubernetes API and requests creation of a Pod with a Secret mounted. It's not the Pod's ServiceAccount.
So I'm left wondering when a pod actually needs to query the API Server for resources, or whether the pod creation process is special and gets the resources through other means.
Unless your Pod runs specific queries, e.g. Python code using the Kubernetes Python client library, or you use the kubectl command from within the Pod, you won't see it making any queries to the Kubernetes API: all the queries needed for its creation are performed by you, with the permissions given to your user.
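To see the Pod's own identity in action, you can query the API from inside the Pod with the mounted ServiceAccount token; a minimal sketch, assuming everything runs in the default namespace:
$ kubectl exec -it example-pod -- /bin/bash
# inside the container:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# allowed: example-role grants "list" on pods
curl -sS --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods
# forbidden (HTTP 403): the Role says nothing about secrets
curl -sS --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/secrets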

Google Stackdriver - how can I use my Kubernetes YAML labels for Stackdriver Log Query?

When using Google Stackdriver I can use the log query to find the exact log statements I am looking for.
This might look like this:
resource.type="k8s_container"
resource.labels.project_id="my-project"
resource.labels.location="europe-west3-a"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="dev"
resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm"
resource.labels.container_name="container"
However, as you can see in the query argument resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm", I am looking for a pod with the ID 7f6cf95b6c-nkkbm. Because of this, I cannot reuse this exact Stackdriver query after deploying a new revision of my-app: the pod gets a new ID, and the one in the current query becomes invalid or not locatable.
I don't want to look up the new ID every time I want to view the current my-app logs, so I tried adding a custom label stackdriver: my-app to my Kubernetes YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      labels:
        stackdriver: my-app  # <<<
Revisiting my newly deployed Pod, I can confirm that the label stackdriver: my-app does indeed exist.
Now I want to add this new label to use as a query argument:
resource.type="k8s_container"
resource.labels.project_id="my-project"
resource.labels.location="europe-west3-a"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="dev"
resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm"
resource.labels.container_name="container"
resource.labels.stackdriver=my-app <<< the kubernetes label
As you can guess, this did not work; otherwise I'd have no reason to write this question ;)
Any idea how the thing I am about to do can be achieved?
Any idea how the thing I am about to do can be achieved?
Yes! In fact, I've prepared an example to show you the whole process :)
Let's assume:
You have a GKE cluster named: gke-label
You have a Cloud Operations for GKE enabled (logging)
You have a Deployment named nginx with the following label:
stackdriver: look_here_for_me
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
stackdriver: look_here_for_me
replicas: 1
template:
metadata:
labels:
app: nginx
stackdriver: look_here_for_me
spec:
containers:
- name: nginx
image: nginx
You can apply this definition and send some traffic from another pod so that logs are generated. I've done it with:
$ kubectl run -it --rm --image=ubuntu ubuntu -- /bin/bash
$ apt update && apt install -y curl
$ curl NGINX_POD_IP_ADDRESS/NONEXISTING # <-- this path is only for better visibility
After that, you can go to:
GCP Cloud Console (Web UI) -> Logging (I used the new version)
With the following query:
resource.type="k8s_container"
resource.labels.cluster_name="gke-label"
-->labels."k8s-pod/stackdriver"="look_here_for_me"
You should be able to see the container logs as well as the label.
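Applied to the query from the question, the short-lived pod_name filter can then be replaced with the label filter (a sketch reusing the names from the question):
resource.type="k8s_container"
resource.labels.project_id="my-project"
resource.labels.location="europe-west3-a"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="dev"
labels."k8s-pod/stackdriver"="my-app"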

How to access Kubernetes Dashboard as admin with userid/passwd outside cluster?

Desired Outcome:
I want to set up a CSV file with user IDs and passwords and access the Kubernetes Dashboard as a full admin, preferably from anywhere with a browser. I am just learning Kubernetes and want to experiment with cluster management, deployments, etc. This is just for learning and is not a production setup. I am using Kubernetes version 1.9.2 and created a 3-machine cluster (master and 2 workers).
Background/What I've done so far:
I read the Dashboard README and I created an admin-user and admin-role-binding with the files shown below. I can then use the kubectl describe secret command to get the admin user's token. I run kubectl proxy on the cluster master and authenticate to the Dashboard with that token using a browser running on the cluster master. All of this works.
admin-user.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
admin-role-binding.yaml:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
I can log in to the dashboard as admin IF:
I run kubectl proxy
I access the dashboard with a browser on the machine where I ran command (1)
I use the "token" option to log in and paste the admin user's token, which I get using the kubectl describe secret command.
What I'd like to do:
Set up a CSV file with userids/passwords
Log in as admin with userid/password
Be able to log in from anywhere
To that end, I created a CSV file, e.g. /home/chris/myusers.txt:
mypasswd,admin,42
I did not know what value to use for id so I just punted with 42.
I then edited the file /etc/kubernetes/manifests/kube-apiserver.yaml, adding this line:
--basic-auth-file=/home/chris/myusers.txt
and then restarted the kubelet:
sudo systemctl restart kubelet
However, when I did that, my cluster stopped working and I couldn't access the Dashboard, so I reverted back to where I still use the admin user's token.
My questions are:
Is it possible to do what I'm trying to do here?
What id values do I use in the user CSV file? What groups would I specify?
What other changes do I need to make to get all of this to work? If I modify the apiserver manifest to use a file with userids/passwords, does that mess up the rest of the configuration for my cluster?
You can try this one; it is working for me. I'm taking the reference from the question linked at the end of this answer:
volumeMounts:
- mountPath: /etc/kubernetes/auth.csv
  name: kubernetes-dashboard
  readOnly: true
volumes:
- hostPath:
    path: /etc/kubernetes/auth.csv
  name: kubernetes-dashboard
Reference: How to config simple login/pass authentication for kubernetes desktop UI
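For completeness, here is a sketch of how those pieces fit into /etc/kubernetes/manifests/kube-apiserver.yaml (the paths and volume name are examples; note that --basic-auth-file existed in Kubernetes 1.9 but was removed in 1.19, so this only applies to older clusters):
spec:
  containers:
  - command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/auth.csv
    # ...keep the existing flags...
    volumeMounts:
    - mountPath: /etc/kubernetes/auth.csv
      name: basic-auth
      readOnly: true
    # ...keep the existing mounts...
  volumes:
  - hostPath:
      path: /etc/kubernetes/auth.csv
      type: File
    name: basic-auth
  # ...keep the existing volumes...
The CSV format is password,user,uid, optionally followed by a fourth column containing a quoted, comma-separated list of groups.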