Does Kubernetes have an API that can create multiple kinds of resources at the same time

Now I have a situation where I have some yaml files, each of them containing things like a deployment, service, ingress, etc. I have to create them concurrently. I tried Ansible to achieve it but failed, so I want to know if Kubernetes has an API endpoint I can post the yaml file to and create resources, like what the command kubectl create -f sample.yaml does. You can also give me other advice to achieve my purpose if you like.

I can also accept it if there's a way to post my yaml file to the Kubernetes API and create all the resources in it.
Then you can simply concatenate all the individual yaml files into one big yaml file, each separated by --- in between.
For example, to install kubernetes-dashboard you simply use kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml; examining this yaml file reveals the structure you need (excerpt below):
...
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
...
where each block between comments could be a separate yaml file. Note that the comments are optional and the separator between the contents of individual yaml files is ---.
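For illustration, here is a minimal shell sketch of that concatenation (the file names are hypothetical; substitute your own deployment/service/ingress files):

# glue the individual manifests into one multi-document yaml, separated by ---
for f in deployment.yaml service.yaml ingress.yaml; do
  cat "$f"
  echo '---'
done > all-in-one.yaml
# create everything in one call, just like kubectl create -f sample.yaml would
kubectl apply -f all-in-one.yaml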
Just some sidenotes:
although this is somewhat practical for final deployments, if you glue all of your individual yaml files into such a mega-all-in-one.yaml file, then everything you do to it (create/update/apply/delete...) you do to all the resources listed inside.
If it is not a "shared" file to be executed from some network resource, it may be easier to use the --recursive switch to kubectl, as detailed in the official documentation, and run it against the folder that contains all the individual yaml files. This way you retain the capability to individually pick any yaml file, and you can still deploy/delete/apply all at once if you choose to.
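As a sketch, assuming your individual files live in a ./manifests folder:

# apply every yaml file in the folder (and its subfolders) in one go
kubectl apply -f ./manifests --recursive
# ...while still being able to act on a single file when needed
kubectl delete -f ./manifests/ingress.yaml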

Related

What is the role of ControllerManagerConfig generated in kubebuilder

I use kubebuilder to quickly develop a k8s operator, and now I save the yaml generated by kustomize to a file in the following way.
create: manifests kustomize ## Create chart
	cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
	$(KUSTOMIZE) build config/default --output yamls
I found a configmap, but it is not referenced by other resources.
apiVersion: v1
data:
  controller_manager_config.yaml: |
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    metrics:
      bindAddress: 127.0.0.1:8080
    webhook:
      port: 9443
    leaderElection:
      leaderElect: true
      resourceName: 31568e44.ys7.com
kind: ConfigMap
metadata:
  name: myoperator-manager-config
  namespace: myoperator-system
I am a little curious what it does. Can I delete it?
I really appreciate any help with this.
It is another way to provide controller configuration. Check this: https://book.kubebuilder.io/component-config-tutorial/custom-type.html .
However, it should be mounted in your deployment and the file path must be provided via the --config flag (the name of the flag can be different in your case, but "config" is used in the tutorial; it depends on your code).
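For illustration, a minimal sketch of how that wiring usually looks in the manager Deployment; the mount path, volume name and flag follow the kubebuilder component-config tutorial and are assumptions here, so adjust them to your own scaffolding:

# excerpt of the manager Deployment spec (hypothetical)
spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - "--config=controller_manager_config.yaml"   # file the manager reads at startup
        volumeMounts:
        - name: manager-config
          mountPath: /controller_manager_config.yaml
          subPath: controller_manager_config.yaml
      volumes:
      - name: manager-config
        configMap:
          name: myoperator-manager-config             # the ConfigMap from the question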

Does a Pod use the k8s API Server to fetch spec declarations?

I'm going through this post, where we bind a Role to a Service Account and then query the API Server using said Service Account. The role only has list permission to the pods resource.
I did an experiment where I mounted a random Secret into a Pod that is using the above Service Account and my expectation was that the Pod would attempt to query the Secret and fail the creation process, but the pod is actually running successfully with the secret mounted in place.
So I'm left wondering when a pod actually needs to query the API Server for resources, or if the pod creation process is special and gets the resources through other means.
Here is the actual list of resources I used for my test:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-rb
subjects:
- kind: ServiceAccount
  name: example-sa
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
data:
  password: c3RhY2tvdmVyZmxvdw==
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
  - name: webserver
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /mysecrets
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
...
I must admit that at first I didn't quite get your point, but after reading your question again I think I can see what it's all about. First of all, I must say that your initial interpretation is wrong. Let me explain.
You wrote:
I did an experiment where I mounted a random Secret into a Pod that
is using the above Service Account
Actually the key word here is "I". The question is: who creates the Pod and who mounts a random Secret into this Pod? And the answer to that question, from your perspective, is simple: me. When you create a Pod you don't use the above-mentioned ServiceAccount; you authorize your access to the Kubernetes API through entries in your .kube/config file. During the whole Pod creation process the ServiceAccount you created is not used a single time.
and my expectation was that the
Pod would attempt to query the Secret and fail the creation process,
but the pod is actually running successfully with the secret mounted
in place.
Why would it query the Secret if it doesn't use it?
You can test it in a very simple way. You just need to kubectl exec into your running Pod and try to run kubectl, query the Kubernetes API directly, or use one of the officially supported Kubernetes client libraries. Then you will see that you're allowed to perform only the specific operations listed in your Role, i.e. list Pods. If you attempt to run kubectl get secrets from within your Pod, it will fail.
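For illustration, a minimal sketch of such a test run from inside the Pod, assuming everything above was created in the default namespace and using the ServiceAccount token that is mounted into the Pod by default:

# inside the Pod: read the mounted ServiceAccount token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# listing pods is allowed by the Role, so this returns a PodList
curl -sSk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods
# secrets are not in the Role, so the API server answers 403 Forbidden
curl -sSk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/secrets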
The result you get is totally expected and there is nothing surprising in the fact that a random Secret is successfully mounted and the Pod is created successfully every time. It's you who queries the Kubernetes API and requests creation of a Pod with a Secret mounted, not the Pod's ServiceAccount.
So I'm left wondering when does a pod actually needs to query the API
Server for resources or if the pod creation process is special and
gets the resources through other means.
If your Pod doesn't run specific queries, e.g. written in Python using the Kubernetes Python client library, and you don't use the kubectl command from within such a Pod, you won't see it making any queries to the Kubernetes API: all the queries needed for its creation are performed by you, with the permissions given to your user.
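You can also check the ServiceAccount's permissions without entering the Pod at all; a small sketch, assuming the resources were created in the default namespace and your own user is allowed to impersonate ServiceAccounts:

kubectl auth can-i list pods --as=system:serviceaccount:default:example-sa     # yes
kubectl auth can-i get secrets --as=system:serviceaccount:default:example-sa   # no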

Is it possible to set the name of an external metric for a HorizontalPodAutoscaler from a configmap? GKE

I am modifying a deployment which autoscales using a HorizontalPodAutoscaler (HPA). This deployment is part of a pipeline in which workers read messages from pubsub subscriptions, do some work and publish to the next topic. Right now I use a configmap to define the pipeline for the deployments (the configmap contains the input subscription and output topics). The HPA autoscales based on the number of messages on the input subscription. I would like to be able to pull the subscription name for the HPA from a configmap if possible. Is there a way to do this?
example HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment-hpa
  namespace: default
  labels:
    name: my-deployment-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - external:
      metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
      metricSelector:
        matchLabels:
          resource.labels.subscription_id: "$INPUT_SUBSCRIPTION"
      targetAverageValue: "2"
    type: External
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
The value in the HPA that is currently $INPUT_SUBSCRIPTION could ideally come from a configmap.
Posting this answer as a community wiki for better visibility, as the answer was provided in the comments.
Answering the question from the post:
I would like to be able to pull the subscription name for the HPA from a configmap if possible? Is there a way to do this?
As pointed out by user #Abdennour TOUMI, there is no possibility to set the metric used by the HPA with a ConfigMap:
Unfortunately, you cannot.. but you can using prometheus-adapter + HPA . Check this tuto: itnext.io/...
As for a manual workaround, you could use a script that extracts the needed metric name from the ConfigMap and uses a template to replace and apply a new HPA.
With a configMap like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example
data:
  metric_name: "new_awesome_metric" # <-
  not_needed: "only for example"
And following script:
#!/bin/bash
# variables
hpa_file_name="hpa.yaml"
configmap_name="example"
string_to_replace="PLACEHOLDER"
# extract the metric name used in a configmap
new_metric=$(kubectl get configmap $configmap_name -o json | jq '.data.metric_name')
# use the template to replace the $string_to_replace with your $new_metric and apply it
sed "s/$string_to_replace/$new_metric/g" $hpa_file_name | kubectl apply -f -
This script needs an hpa.yaml containing the template to apply as a resource (the example from the question could be used, with one change):
resource.labels.subscription_id: PLACEHOLDER
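For clarity, with the example ConfigMap above the rendered line would end up as shown here (jq without the -r flag keeps the surrounding quotes, which still yields a valid quoted YAML string):

resource.labels.subscription_id: "new_awesome_metric"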
For more reference this HPA definition could be based on this guide:
Cloud.google.com: Kubernetes Engine: Tutorials: Autoscaling-metrics: PubSub

How to get kubernetes applications to change deploy configs

I have two applications running in K8. APP A has write access to a data store and APP B has read access.
APP A needs to be able to change APP B's running deployment.
How we currently do this is manually by kicking off a process in APP A which adds a new DB in the data store (say db bob). Then we do:
kubectl edit deploy A
And change an environment variable to bob. This starts a rolling restart of all the pods of APP B. We would like to automate this process.
Is there any way to get APP A to change the deployment config of APP B in k8s?
Firstly, answering your main question:
Is there any way to get a service to change the deployment config of another service in k8s?
From my understanding you are calling them Service A and B for their purpose in real life, but to facilitate understanding I suggested an edit to call them APP A and APP B, because:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
So if in your question you meant:
"Is there anyway to get APP A to change the deployment config of APP B in k8?"
Then yes, you can give a pod admin privileges to manage other components of the cluster and use the kubectl set env command to change/add envs.
In order to achieve this, you will need:
A ServiceAccount with the needed permissions in the namespace.
NOTE: In my example below, since I don't know whether you are working with multiple namespaces, I'm granting cluster-admin to the ServiceAccount. If you use only one namespace for these apps, consider a Role instead.
A ClusterRoleBinding binding the ServiceAccount to a role of the cluster.
The kubectl client inside the pod of APP A (added manually or by modifying the Docker image).
Steps to Reproduce:
Create a deployment that will use the cluster-admin privileges; I'm naming it manager-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager-deploy
  labels:
    app: manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
  template:
    metadata:
      labels:
        app: manager
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: manager
        image: gcr.io/google-samples/node-hello:1.0
Create a deployment with an environment variable, mocking your Service B. I'm naming it deploy-env.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-deploy
  labels:
    app: env-replace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-replace
  template:
    metadata:
      labels:
        app: env-replace
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: env-replace
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: DATASTORE_NAME
          value: "john"
Create a ServiceAccount and a ClusterRoleBinding with cluster-admin privileges; I'm naming it service-account-for-pod.yaml (notice it's referenced by serviceAccountName in manager-deploy.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-role
subjects:
- kind: ServiceAccount
  name: k8s-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-role
Apply the service-account-for-pod.yaml, deploy-env.yaml and manager-deploy.yaml, then list the current environment variables from the deploy-env pod:
$ kubectl apply -f manager-deploy.yaml
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created
$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
Shell into the manager pod, download the kubectl binary and run kubectl set env deployment/<deployment_name> VAR_NAME=VALUE:
$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash
root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob
Verify the env var value on the pod (notice that the pod is recreated when the deployment is modified):
$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob
Let me know in the comments if you have any doubt on how to apply this solution to your environment.
You could give service A access to your cluster (install kubectl and allow traffic from the NAT of service A to your cluster master) and have some cron jobs, Jenkins, SSH or similar execute your commands. You can also do kubectl patch, or get the current config of the second deployment with kubectl get deployment <name> -o yaml --export > deployment.yaml, edit it there with some regex/awk/sed and then apply it, although the --export method is getting deprecated, so you might as well have service A download the Git repo and apply the new config like that.
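As an illustration of the kubectl patch route, here is a minimal sketch using the deployment and variable names from the answer above; it is a strategic merge patch, and containers/env entries are merged by name, so only DATASTORE_NAME is touched:

kubectl patch deployment env-deploy \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'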
Thank you all for the answers (upvoted as they were both correct). I am just putting my own answer to document exactly what solved it for me.
In my case I just needed to make use of the patch URL available in k8s. That plus this example worked.
All I needed to do was create a service account to restrict who can patch where, restrict that account to Service A, and use the Java client in Service A to update the chart of Service B. After that the pods roll and it's done.
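For completeness, the same kind of patch can be sent straight to the Deployment's patch URL from inside APP A without installing kubectl; a minimal sketch assuming the in-cluster ServiceAccount token and the names from the earlier example:

# inside APP A's pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments/env-deploy \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'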

Spinnaker - Reference ConfigMap versioned value inside manifest

I'm deploying a single yaml file containing two manifests using the Spinnaker Kubernetes Provider V2 (Manifest deployer). Inside the Deployment I have a custom annotation that references the ConfigMap:
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  foo: bar
---
# Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    metadata:
      annotations:
        my-config-map-reference: my-config-map
    [...]
Upon deployment, Spinnaker applies versioning to the ConfigMap, which is then deployed as my-config-map-v000.
I'd like to be able to retrieve the full name inside my custom annotation, but since Spinnaker automatically replaces the configMap references with the appropriate versioned values only at specific entrypoints ( https://github.com/spinnaker/clouddriver/blob/master/clouddriver-kubernetes/src/main/groovy/com/netflix/spinnaker/clouddriver/kubernetes/v2/artifact/ArtifactReplacerFactory.java ), in this case this does not work.
According to Spinnaker documentation ( https://www.spinnaker.io/reference/artifacts/in-kubernetes-v2/#why-not-pipeline-expressions ) I may be able to write a Pipeline Expression to retrieve the full name, but I wasn't able to do so.
How can I set the full ConfigMap name inside the annotation?
Spinnaker can inject artifacts from the currently executing pipeline into your manifests as they are deployed.
Refer to this guide for instructions on binding artifacts in manifests.
However, as mentioned here, there's NO resource mapping for annotations, so the value should be user-supplied only, as a parameter for your manifest.
In the future, certain relationships between resources will be recorded and annotated by Spinnaker.