I am trying to use the Ansible k8s module to create a Kubernetes secret. Below is the kubectl command I want to reproduce:
kubectl create secret generic -n default test --from-file=gcp=serviceaccount.json
Is there a way to pass the service account JSON file (--from-file=gcp=serviceaccount.json) to the Ansible k8s module?
How do I express this --from-file in the module below?
- name: CREATE SECRET
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      type: Opaque
      metadata:
        name: test
        namespace: default
      data:
        ?? : ??
I was able to resolve the issue with the stringData option:
I passed the content of the file via stringData.
- name: CREATE SECRET
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      type: Opaque
      metadata:
        name: test
        namespace: default
      stringData:
        gcp: <content of the file>
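A sketch of how this can be taken a step further so the playbook reads the file itself instead of inlining its contents, using Ansible's file lookup (the path serviceaccount.json relative to the playbook is an assumption). With data instead of stringData, the value must be base64-encoded, which the b64encode filter handles:

- name: CREATE SECRET
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      type: Opaque
      metadata:
        name: test
        namespace: default
      data:
        # data values must be base64-encoded, matching kubectl's --from-file behavior
        gcp: "{{ lookup('file', 'serviceaccount.json') | b64encode }}"

With stringData the raw content can be used directly, gcp: "{{ lookup('file', 'serviceaccount.json') }}", and the API server handles the encoding.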
I'm trying to run my Kubernetes app using minikube on Ubuntu 20.04. I applied a secret to pull a private Docker image from Docker Hub, but it doesn't seem to work correctly.
Failed to pull image "xxx/node-graphql:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for xxx/node-graphql, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Here's the secret generated by
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=<pathtofile>.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
And here's the secret yaml file I have created
apiVersion: v1
data:
.dockerconfigjson: xxx9tRXpNakZCSTBBaFFRPT0iCgkJfQoJfQp9
kind: Secret
metadata:
name: node-graphql-secret
uid: xxx-2e18-44eb-9719-xxx
type: kubernetes.io/dockerconfigjson
Did anyone try to pull a private docker image into Kubernetes using a secret? Any kind of help would be appreciated. Thank you!
I managed to set up the secret with the following steps.
First, you need to login to docker hub using:
docker login
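By default, docker login writes the registry credentials to ~/.docker/config.json, which is the file the next step packages into the secret. (If a credential helper is configured, config.json may only reference the helper instead of containing the auth entry.)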
Next, you create a k8s secret by running:
kubectl create secret generic <your-secret-name> \
  --from-file=.dockerconfigjson=<pathtoyourdockerconfigfile>.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
And then get the secret in yaml format
kubectl get secret -o yaml
It should look like this:
apiVersion: v1
items:
- apiVersion: v1
  data:
    .dockerconfigjson: xxxewoJImF1dGhzIjogewoJCSJodHRwczovL2luZGV4LmRvY2tl
  kind: Secret
  metadata:
    creationTimestamp: "2022-10-27T23:06:01Z"
    name: <your-secret-name>
    namespace: default
    resourceVersion: "513"
    uid: xxxx-0f12-4beb-be41-xxx
  type: kubernetes.io/dockerconfigjson
kind: List
metadata:
  resourceVersion: ""
I then copied the secret's content into my secret yaml file:
apiVersion: v1
data:
  .dockerconfigjson: xxxewoJImF1dGhzIjogewoJCSJodHRwczovL2luZGV4LmRvY2tlci
kind: Secret
metadata:
  creationTimestamp: "2022-10-27T23:06:01Z"
  name: <your-secret-name>
  namespace: default
  resourceVersion: "513"
  uid: xxx-0f12-4beb-be41-xxx
type: kubernetes.io/dockerconfigjson
It works! This is a simple approach to using Secret to pull a private docker image for K8s.
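For completeness, the secret only takes effect once the pod references it. A minimal sketch, using the image from the question (the pod and container names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: node-graphql
spec:
  containers:
  - name: node-graphql
    image: xxx/node-graphql:latest
  imagePullSecrets:
  - name: <your-secret-name>  # must match the Secret's metadata.name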
As a side note, to apply the secret, run kubectl apply -f secret.yml
Hope it helps
I am trying to use a container image from a private container registry in one of my tasks.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello-world
spec:
  steps:
  - name: echo
    image: de.icr.io/reporting/status:latest
    command:
    - echo
    args:
    - "Hello World"
But when I run this task within an IBM Cloud Delivery Pipeline (Tekton), the image cannot be pulled:
message: 'Failed to pull image "de.icr.io/reporting/status:latest": rpc error: code = Unknown desc = failed to pull and unpack image "de.icr.io/reporting/status:latest": failed to resolve reference "de.icr.io/reporting/status:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized'
I read several tutorials and blogs, but so far couldn't find a solution. This is probably what I need to accomplish, so that the IBM Cloud Delivery Pipeline (Tekton) can access my private container registry: https://tekton.dev/vault/pipelines-v0.15.2/auth/#basic-authentication-docker
So far I have created a secret.yaml file in my .tekton directory:
apiVersion: v1
kind: Secret
metadata:
  name: basic-user-pass
  annotations:
    tekton.dev/docker-0: https://de.icr.io # Described below
type: kubernetes.io/basic-auth
stringData:
  username: $(params.DOCKER_USERNAME)
  password: $(params.DOCKER_PASSWORD)
I am also creating a ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default-runner
secrets:
- name: basic-user-pass
And in my trigger definition I am telling the pipeline to use the default-runner ServiceAccount:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  resourcetemplates:
  - apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      name: pipelinerun-$(uid)
    spec:
      serviceAccountName: default-runner
      pipelineRef:
        name: hello-goodbye
I found a way to pass my API key to my IBM Cloud Delivery Pipeline (Tekton) and the tasks in my pipeline are now able to pull container images from my private container registry.
This is my working trigger template:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  params:
  - name: pipeline-dockerconfigjson
    description: dockerconfigjson for images used in .pipeline-config.yaml
    default: "eyJhdXRocyI6e319" # ie. {"auths":{}} base64 encoded
  resourcetemplates:
  - apiVersion: v1
    kind: Secret
    data:
      .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
    metadata:
      name: pipeline-pull-secret
    type: kubernetes.io/dockerconfigjson
  - apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      name: pipelinerun-$(uid)
    spec:
      pipelineRef:
        name: hello-goodbye
      podTemplate:
        imagePullSecrets:
        - name: pipeline-pull-secret
It first defines a parameter called pipeline-dockerconfigjson:
params:
- name: pipeline-dockerconfigjson
  description: dockerconfigjson for images used in .pipeline-config.yaml
  default: "eyJhdXRocyI6e319" # ie. {"auths":{}} base64 encoded
The second part turns the value passed into this parameter into a Kubernetes secret:
- apiVersion: v1
  kind: Secret
  data:
    .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
  metadata:
    name: pipeline-pull-secret
  type: kubernetes.io/dockerconfigjson
And this secret is then pushed into the imagePullSecrets field of the PodTemplate.
The last step is to populate the parameter with a valid dockerconfigjson and this can be accomplished within the Delivery Pipeline UI (IBM Cloud UI).
To create a valid dockerconfigjson for my registry de.icr.io I had to use the following kubectl command:
kubectl create secret docker-registry mysecret \
  --dry-run=client \
  --docker-server=de.icr.io \
  --docker-username=iamapikey \
  --docker-password=<MY_API_KEY> \
  --docker-email=<MY_EMAIL> \
  -o yaml
and then within the output there is a valid base64 encoded .dockerconfigjson field.
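The dry-run output looks roughly like this (a sketch; the base64 value is abbreviated), and the value of the .dockerconfigjson field under data is what goes into the pipeline-dockerconfigjson parameter:

apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkZS5pY3IuaW8i...  # base64 of {"auths":{"de.icr.io":{...}}}
kind: Secret
metadata:
  creationTimestamp: null
  name: mysecret
type: kubernetes.io/dockerconfigjson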
Please also note that there is a public catalog of sample tekton tasks:
https://github.com/open-toolchain/tekton-catalog/tree/master/container-registry
More on IBM Cloud Continuous Delivery Tekton:
https://www.ibm.com/cloud/blog/ibm-cloud-continuous-delivery-tekton-pipelines-tools-and-resources
Tektonized Toolchain Templates: https://www.ibm.com/cloud/blog/toolchain-templates-with-tekton-pipelines
The secret you created (type basic-auth) would not allow the kubelet to pull your Pods' images.
The doc mentions those secrets are meant to provision credentials inside your tasks' containers at runtime, which your build jobs may then use when pulling from or pushing to registries.
The kubelet, however, needs a different secret type (eg: type dockercfg) to authenticate when pulling images / starting containers.
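A minimal sketch of what a kubelet-usable pull secret could look like, reusing the approach from the accepted answer (the secret name and the abbreviated base64 value are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: icr-pull-secret
data:
  # base64-encoded {"auths":{"de.icr.io":{...}}}; can be generated with
  # kubectl create secret docker-registry --dry-run=client -o yaml
  .dockerconfigjson: eyJhdXRocyI6e319
type: kubernetes.io/dockerconfigjson

This secret would then be listed under imagePullSecrets in the pod template, or attached to the ServiceAccount that runs the pods, as shown above.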
I have a need to define a standalone patch as YAML.
More specifically, I want to do the following:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "registry-my-registry"}]}'
The catch is I can't use kubectl patch. I'm using a GitOps workflow with flux, and that resource I want to patch is a default resource created outside of flux.
In other terms, I need to do the same thing as the command above but with kubectl apply only:
kubectl apply -f patch.yaml
I wasn't able to figure out if you can define such a patch.
The key bit is that I can't predict the name of the default secret token on a new cluster (as the name is random, e.g. default-token-uudge).
Fields set and deleted from Resource Config are merged into Resources by kubectl apply:
- If a Resource already exists, Apply updates the Resources by merging the local Resource Config into the remote Resources
- Fields removed from the Resource Config will be deleted from the remote Resource
You can learn more about Kubernetes Field Merge Semantics.
If your limitation is not knowing the secret default-token-xxxxx name, no problem, just keep that field out of your yaml.
As long as the yaml has enough fields to identify the target resource (name, kind, namespace) it will add/edit the fields you set.
I created a cluster (minikube in this example, but it could be any) and retrieved the current default serviceAccount:
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "330"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
- name: default-token-j6zx2
Then, we create a yaml file with the contents that we want to add:
$ cat add-image-pull-secrets.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
- name: registry-my-registry
Now we apply and verify:
$ kubectl apply -f add-image-pull-secrets.yaml
serviceaccount/default configured
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
imagePullSecrets:
- name: registry-my-registry
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","imagePullSecrets":[{"name":"registry-my-registry"}],"kind":"ServiceAccount","metadata":{"annotations":{},"name":"default","namespace":"default"}}
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "2382"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
- name: default-token-j6zx2
As you can see, the imagePullSecrets entry was added to the resource.
I hope it fits your needs. If you have any further questions let me know in the comments.
Let's say your service account YAML looks like below:
$ kubectl get sa demo -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
- name: default-token-uudge
Now, you want to add or change the imagePullSecrets for that service account. To do so, edit the YAML file and add imagePullSecrets.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
- name: default-token-uudge
imagePullSecrets:
- name: myregistrykey
And finally, apply the changes:
$ kubectl apply -f service-account.yaml
With Ansible: is it possible to patch resources with JSON or YAML snippets? I basically want to be able to accomplish the same thing as kubectl patch <Resource> <Name> --type='merge' -p='{"spec":{"test":"hello"}}', to append/modify resource specs.
https://docs.ansible.com/ansible/latest/modules/k8s_module.html
Is it possible to do this with the k8s Ansible module? It says that if a resource already exists and state: present is set, it will be patched; however, it isn't patching as far as I can tell.
Thanks
Yes, you can provide just a patch and if the resource already exists it should send a strategic-merge-patch (or just a merge-patch if it's a custom resource). Here's an example playbook that creates and modifies a configmap:
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    cm: "{{ lookup('k8s',
        api_version='v1',
        kind='ConfigMap',
        namespace='default',
        resource_name='test') }}"
  tasks:
  - name: Create the ConfigMap
    k8s:
      definition:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: test
          namespace: default
        data:
          hello: world
  - name: We will see the ConfigMap defined above
    debug:
      var: cm
  - name: Add a field to the ConfigMap (this will be a PATCH request)
    k8s:
      definition:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: test
          namespace: default
        data:
          added: field
  - name: The same ConfigMap as before, but with an extra field in data
    debug:
      var: cm
  - name: Change a field in the ConfigMap (this will be a PATCH request)
    k8s:
      definition:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: test
          namespace: default
        data:
          hello: everyone
  - name: The added field is unchanged, but the hello field has a new value
    debug:
      var: cm
  - name: Delete the added field in the ConfigMap (this will be a PATCH request)
    k8s:
      definition:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: test
          namespace: default
        data:
          added: null
  - name: The hello field is unchanged, but the added field is now gone
    debug:
      var: cm
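Note that cm keeps showing up-to-date values even though it is defined once under vars: Ansible evaluates lookups lazily, so the k8s lookup runs again each time a task references cm, re-reading the ConfigMap from the cluster.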
I am new to kube, and I'm trying to create a deployment with a configmap file. I have the following:
app-mydeploy.yaml
--------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-mydeploy
  labels:
    app: app-mydeploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
      - name: mydeploy-1
        image: mydeploy:tag-latest
        envFrom:
        - configMapRef:
            name: map-mydeploy
map-mydeploy
-----
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
  namespace: default
data:
  my_var: 10.240.12.1
I created the config and the deploy with the following commands:
kubectl create -f app-mydeploy.yaml
kubectl create configmap map-mydeploy --from-file=map-mydeploy
When I do kubectl describe deployments, I get, among the rest:
Environment Variables from:
  map-mydeploy  ConfigMap  Optional: false
Also, kubectl describe configmaps map-mydeploy gives me the right results.
The issue is that my container is in CrashLoopBackOff; when I look at the logs, it says: time="2019-02-05T14:47:53Z" level=fatal msg="Required environment variable my_var is not set.
So this log from my container says that my_var is not defined in the env vars.
What am I doing wrong?
I think you are missing the key in the command
kubectl create configmap map-mydeploy --from-file=map-mydeploy
Try this: kubectl create configmap map-mydeploy --from-file=my_var=map-mydeploy
Also, if you are just using one value, I highly recommend creating your configMap from a literal, kubectl create configmap my-config --from-literal=my_var=10.240.12.1, and then referencing the configMap in your deployment as you are currently doing.
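To illustrate why the original command did not set my_var: --from-file without a key uses the file name as the only key, and the value is the entire file content. A sketch of what kubectl create configmap map-mydeploy --from-file=map-mydeploy produces, roughly (content abbreviated):

apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
data:
  # a single key named after the file; there is no my_var key,
  # so envFrom never exposes my_var
  map-mydeploy: |
    apiVersion: v1
    kind: ConfigMap
    ...

Since the file here is already a ConfigMap manifest, kubectl create -f map-mydeploy would create the intended my_var key directly.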