I'm trying to learn OpenShift/Origin/Kubernetes and am stuck on one of many newbie hiccups.
If I build an image using this YAML file:
apiVersion: v1
items:
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: myapp-dev
    name: myapp-dev
  spec: {}
  status:
    dockerImageRepository: ""
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: myapp-dev
    name: myapp-dev
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: myapp-dev:latest
    postCommit: {}
    resources: {}
    source:
      git:
        ref: master
        uri: git#git.host:myproject/myapp.git
      secrets: []
      sourceSecret:
        name: "deploykey"
      type: Git
    strategy:
      dockerStrategy:
        dockerfilePath: Dockerfile
      type: Docker
    triggers:
    - type: ConfigChange
    - imageChange: {}
      type: ImageChange
  status:
    lastVersion: 0
kind: List
metadata: {}
And I have other Dockerfiles that should build FROM the output image of the previous build - how do I reference the integrated registry within those Dockerfiles? Right now I'm watching the build log and using the IP and port listed in the logs in the Dockerfile's FROM directive.
So the build logs show:
Successfully built 40ff8724d4dd
I1017 17:32:24.330274 1 docker.go:93] Pushing image 123.123.123.123:5000/myproject/myapp-dev:latest ...
So I used this in the Dockerfile:
FROM 123.123.123.123:5000/myproject/myapp-dev:latest
I would like to do something like:
FROM integrated.registry/myproject/myapp-dev:latest
Any guidance you can provide will be awesome. Thank you for your time!
The build config object lets you override the FROM. If you look at the build config created by oc new-build or oc new-app, you'll see the field spec.strategy.dockerStrategy.from, which can point to any Docker image you want. To point to an image stream, set "kind" to "ImageStreamTag" and "name" to "myapp-dev:latest".
If you're building outside of OpenShift and have given your registry a public DNS name, you can simply set the FROM to registry/project/name:tag.
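For reference, a minimal sketch of how the strategy section of the build config above could look with the FROM overridden to point at an image stream (assuming the ImageStream lives in the same project; adjust the tag as needed):
strategy:
  dockerStrategy:
    dockerfilePath: Dockerfile
    from:
      kind: ImageStreamTag
      name: myapp-dev:latest
  type: Docker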
We have an EKS cluster running with Traefik deployed in CRD style (full setup on GitHub) and want to deploy our app https://gitlab.com/jonashackt/microservice-api-spring-boot with the Kubernetes objects Deployment, Service and IngressRoute (see configuration repository here). The manifests look like this:
deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-api-spring-boot
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
      branch: main
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
        branch: main
    spec:
      containers:
        - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot:c25a74c8f919a72e3f00928917dc4ab2944ab061
          name: microservice-api-spring-boot
          ports:
            - containerPort: 8098
      imagePullSecrets:
        - name: gitlab-container-registry
service.yml:
apiVersion: v1
kind: Service
metadata:
  name: microservice-api-spring-boot
spec:
  ports:
    - port: 80
      targetPort: 8098
  selector:
    app: microservice-api-spring-boot
    branch: main
traefik-ingress-route.yml:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: microservice-api-spring-boot-ingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`microservice-api-spring-boot-BRANCHNAME.tekton-argocd.de`)
      kind: Rule
      services:
        - name: microservice-api-spring-boot
          port: 80
We already use Kustomize, especially the kustomize CLI (installable on a Mac or in GitHub Actions with brew install kustomize), with the following folder structure:
├── deployment.yml
├── kustomization.yaml
├── service.yml
└── traefik-ingress-route.yml
Our kustomization.yaml looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yml
  - service.yml
  - traefik-ingress-route.yml

images:
  - name: registry.gitlab.com/jonashackt/microservice-api-spring-boot
    newTag: foobar

commonLabels:
  branch: foobar

nameSuffix: foobar
Now adding a suffix to the Deployment's, Service's and IngressRoute's .metadata.name dynamically from within our GitHub Actions workflow is easy with the kustomize CLI (because we want the suffix to start with a -, we need to use the -- -barfoo syntax here):
kustomize edit set namesuffix -- -barfoo
Check the result with
kustomize build .
Also changing the .spec.selector.matchLabels.branch, .spec.template.metadata.labels.branch and .spec.selector.branch in the Deployment and Service is no problem:
kustomize edit set label branch:barfoo
Changing the .spec.template.spec.containers[0].image of our Deployment works with:
kustomize edit set image registry.gitlab.com/jonashackt/microservice-api-spring-boot:barfoo
But looking into our IngressRoute it seems that .spec.routes[0].services[0].name and .spec.routes[0].match = Host() can't be changed with Kustomize out of the box?! So how can we change both fields without the need for replacement tooling like yq or even sed/envsubst?
1. Change the IngressRoute's .spec.routes[0].services[0].name with Kustomize
Changing the IngressRoute's .spec.routes[0].services[0].name is possible with Kustomize using a NameReference transformer (see docs here) - luckily I found inspiration in this issue. Therefore we need to include the configurations keyword in our kustomization.yaml:
nameSuffix: foobar

configurations:
  # Tie the target Service's metadata.name to the IngressRoute's spec.routes.services.name.
  # Once the Service name is changed, the referred service name in the IngressRoute will be changed as well.
  - nameReference.yml
We also need to add a file called nameReference.yml:
nameReference:
  - kind: Service
    fieldSpecs:
      - kind: IngressRoute
        path: spec/routes/services/name
As you can see, we tie the Service's name to the IngressRoute's spec/routes/services/name. Now running
kustomize edit set namesuffix barfoo
will not only change the metadata.name of the Deployment, Service and IngressRoute, but also the .spec.routes[0].services[0].name of the IngressRoute, since it is now linked to the metadata.name of the Service. Note that this only works if both the referrer and the target have a name field.
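Putting it together, the kustomization.yaml could now look roughly like this after the edit commands above (a sketch; the suffix and tag reflect the barfoo examples):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yml
  - service.yml
  - traefik-ingress-route.yml

images:
  - name: registry.gitlab.com/jonashackt/microservice-api-spring-boot
    newTag: barfoo

commonLabels:
  branch: barfoo

nameSuffix: -barfoo

configurations:
  # Tie the target Service's metadata.name to the IngressRoute's spec.routes.services.name
  - nameReference.yml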
2. Change a part of the IngressRoute's .spec.routes[0].match = Host()
The second part of the question asks how to change a part of the IngressRoute's .spec.routes[0].match = Host(). There's an open issue in the Kustomize GitHub project. Right now Kustomize doesn't support this use case out of the box - the only option would be writing a custom generator plugin for Kustomize. As this might not be the preferred option, there's another way, inspired by this blog post: since we can create YAML files inline in our console using the cat > ./myyamlfile.yml <<EOF ... EOF syntax, we can also use inline variable substitution.
So first define the branch name as a variable:
RULE_HOST_BRANCHNAME=foobar
And then use the described syntax to create an ingressroute-patch.yml file inline:
cat > ./ingressroute-patch.yml <<EOF
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: microservice-api-spring-boot-ingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(\`microservice-api-spring-boot-$RULE_HOST_BRANCHNAME.tekton-argocd.de\`)
      kind: Rule
      services:
        - name: microservice-api-spring-boot
          port: 80
EOF
The last step is to use the ingressroute-patch.yml file as a patchesStrategicMerge patch inside our kustomization.yaml like this:
patchesStrategicMerge:
  - ingressroute-patch.yml
Now running kustomize build . should output the correct Deployment, Service and IngressRoute for our setup:
apiVersion: v1
kind: Service
metadata:
  labels:
    branch: barfoo
  name: microservice-api-spring-boot-barfoo
spec:
  ports:
  - port: 80
    targetPort: 8098
  selector:
    app: microservice-api-spring-boot
    branch: barfoo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    branch: barfoo
  name: microservice-api-spring-boot-barfoo
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
      branch: barfoo
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
        branch: barfoo
    spec:
      containers:
      - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot:barfoo
        name: microservice-api-spring-boot
        ports:
        - containerPort: 8098
      imagePullSecrets:
      - name: gitlab-container-registry
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  labels:
    branch: barfoo
  name: microservice-api-spring-boot-ingressroute-barfoo
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - kind: Rule
    match: Host(`microservice-api-spring-boot-barfoo.tekton-argocd.de`)
    services:
    - name: microservice-api-spring-boot-barfoo
      port: 80
I am trying to use a container image from a private container registry in one of my tasks.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello-world
spec:
  steps:
    - name: echo
      image: de.icr.io/reporting/status:latest
      command:
        - echo
      args:
        - "Hello World"
But when I run this task within an IBM Cloud Delivery Pipeline (Tekton), the image cannot be pulled:
message: 'Failed to pull image "de.icr.io/reporting/status:latest": rpc error: code = Unknown desc = failed to pull and unpack image "de.icr.io/reporting/status:latest": failed to resolve reference "de.icr.io/reporting/status:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized'
I read several tutorials and blogs, but so far couldn't find a solution. This is probably what I need to accomplish, so that the IBM Cloud Delivery Pipeline (Tekton) can access my private container registry: https://tekton.dev/vault/pipelines-v0.15.2/auth/#basic-authentication-docker
So far I have created a secret.yaml file in my .tekton directory:
apiVersion: v1
kind: Secret
metadata:
  name: basic-user-pass
  annotations:
    tekton.dev/docker-0: https://de.icr.io # Described below
type: kubernetes.io/basic-auth
stringData:
  username: $(params.DOCKER_USERNAME)
  password: $(params.DOCKER_PASSWORD)
I am also creating a ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default-runner
secrets:
  - name: basic-user-pass
And in my trigger definition I am telling the pipeline to use the default-runner ServiceAccount:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        name: pipelinerun-$(uid)
      spec:
        serviceAccountName: default-runner
        pipelineRef:
          name: hello-goodbye
I found a way to pass my API key to my IBM Cloud Delivery Pipeline (Tekton) and the tasks in my pipeline are now able to pull container images from my private container registry.
This is my working trigger template:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  params:
    - name: pipeline-dockerconfigjson
      description: dockerconfigjson for images used in .pipeline-config.yaml
      default: "eyJhdXRocyI6e319" # ie. {"auths":{}} base64 encoded
  resourcetemplates:
    - apiVersion: v1
      kind: Secret
      data:
        .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
      metadata:
        name: pipeline-pull-secret
      type: kubernetes.io/dockerconfigjson
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        name: pipelinerun-$(uid)
      spec:
        pipelineRef:
          name: hello-goodbye
        podTemplate:
          imagePullSecrets:
            - name: pipeline-pull-secret
It first defines a parameter called pipeline-dockerconfigjson:
params:
  - name: pipeline-dockerconfigjson
    description: dockerconfigjson for images used in .pipeline-config.yaml
    default: "eyJhdXRocyI6e319" # ie. {"auths":{}} base64 encoded
The second part turns the value passed into this parameter into a Kubernetes secret:
- apiVersion: v1
  kind: Secret
  data:
    .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
  metadata:
    name: pipeline-pull-secret
  type: kubernetes.io/dockerconfigjson
And this secret is then pushed into the imagePullSecrets field of the PodTemplate.
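In other words, the only addition to the PipelineRun spec (already visible in the full template above) is:
podTemplate:
  imagePullSecrets:
    - name: pipeline-pull-secret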
The last step is to populate the parameter with a valid dockerconfigjson and this can be accomplished within the Delivery Pipeline UI (IBM Cloud UI).
To create a valid dockerconfigjson for my registry de.icr.io I had to use the following kubectl command:
kubectl create secret docker-registry mysecret \
  --dry-run=client \
  --docker-server=de.icr.io \
  --docker-username=iamapikey \
  --docker-password=<MY_API_KEY> \
  --docker-email=<MY_EMAIL> \
  -o yaml
and then within the output there is a valid base64 encoded .dockerconfigjson field.
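If you only want that single field instead of the whole YAML, a small variation of the same command should print it directly - a sketch, assuming kubectl's jsonpath output with the dot in the key escaped:
kubectl create secret docker-registry mysecret \
  --dry-run=client \
  --docker-server=de.icr.io \
  --docker-username=iamapikey \
  --docker-password=<MY_API_KEY> \
  --docker-email=<MY_EMAIL> \
  -o jsonpath='{.data.\.dockerconfigjson}'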
Please also note that there is a public catalog of sample Tekton tasks:
https://github.com/open-toolchain/tekton-catalog/tree/master/container-registry
More on IBM Cloud Continuous Delivery Tekton:
https://www.ibm.com/cloud/blog/ibm-cloud-continuous-delivery-tekton-pipelines-tools-and-resources
Tektonized Toolchain Templates: https://www.ibm.com/cloud/blog/toolchain-templates-with-tekton-pipelines
The secret you created (type basic-auth) would not allow the kubelet to pull your Pods' images.
The docs mention that those secrets are meant to provision some configuration inside your tasks' container runtime, which may then be used during your build jobs when pulling or pushing images to registries.
The kubelet, however, needs a different kind of configuration (e.g. a secret of type dockercfg or dockerconfigjson) to authenticate when pulling images and starting containers.
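On a plain Kubernetes cluster (outside the managed IBM Cloud pipeline workers) this is usually solved by creating a kubernetes.io/dockerconfigjson pull secret and attaching it to the ServiceAccount the pods run under - a sketch with hypothetical secret name and credentials:
kubectl create secret docker-registry registry-pull-secret \
  --docker-server=de.icr.io \
  --docker-username=iamapikey \
  --docker-password=<MY_API_KEY>

kubectl patch serviceaccount default-runner \
  -p '{"imagePullSecrets": [{"name": "registry-pull-secret"}]}'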
I've noticed that we can create a setter containing a list of strings based on the kpt [documentation][1]. Then I found out that a complex setter containing a list of objects is not supported, based on [this GitHub issue][2]. Since the issue itself mentions that this should be supported by kpt functions, can we use it with the current kpt function version?
[1]: Kpt Apply Setters. https://catalog.kpt.dev/apply-setters/v0.1/
[2]: Setters for list of objects. https://github.com/GoogleContainerTools/kpt/issues/1533
I've discussed this a bit with my coworkers, and it turned out this is possible with the following setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 4 # kpt-set: ${nginx-replicas}
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: "nginx:1.16.1" # kpt-set: nginx:${tag}
          ports:
            - protocol: TCP
              containerPort: 80
---
apiVersion: v1
kind: MyKind
metadata:
  name: foo
environments: # kpt-set: ${env}
  - dev
  - stage
---
apiVersion: v1
kind: MyKind
metadata:
  name: bar
environments: # kpt-set: ${nested-env}
  - key: some-key
    value: some-value
After that we can define the following setters:
apiVersion: v1
kind: ConfigMap
metadata:
  name: setters
data:
  env: |-
    - prod
    - dev
  nested-env: |-
    - key: some-other-key
      value: some-other-value
  nginx-replicas: "3"
  tag: 1.16.2
And then we can call the following command:
$ kpt fn render apply-setters-simple
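For completeness, a sketch of the Kptfile that wires the apply-setters function into the package's render pipeline - the package name and setters file name are assumptions, and the exact apiVersion depends on your kpt version:
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: apply-setters-simple
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/apply-setters:v0.1
      configPath: setters.yaml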
I've sent a pull request to the repository to add documentation about this.
I am doing my first deployment in Kubernetes. I've hosted my API in my namespace and it's up and running, so I tried to connect my API to MongoDB. I added my database details in a ConfigMap via Rancher.
I tried to reference the database ConfigMap in my deployment YAML file but got an error stating Unknown field "ConfigMapref".
Below is my deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec
  replicas: 2
  selector:
    matchLables:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: always
        ports:
        - containerPort: 80
        configMapRef:
        - name: myfirstprojectdb # This is the name of the config map created via rancher
The myfirstprojectdb ConfigMap stores all the details like the database name, username, password, etc.
On executing the pipeline I get the below error:
Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container
How do I need to reference my ConfigMap in the deployment YAML?
There are some more typos (e.g. the missing : after spec, and Always should be capitalized). Also, indentation should be consistent in the whole YAML file - see yaml indentation and separation.
I corrected your YAML so it passes the API server's check and added the ConfigMap reference (assuming it contains environment variables):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: myfirstprojectdb
Useful link:
Configure all key-value pairs in a ConfigMap as container environment variables, which is related to this question.
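If the ConfigMap doesn't exist yet (or you want to recreate it outside Rancher), a minimal sketch of creating it with hypothetical keys - the actual key names have to match what your application expects:
kubectl create configmap myfirstprojectdb \
  --namespace <your-namespace> \
  --from-literal=DB_NAME=mydb \
  --from-literal=DB_USER=admin \
  --from-literal=DB_PASSWORD=changeme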
I need some help from the community; I'm still new to K8s and Spring Boot. Thanks all in advance.
What I need is to have 4 pods running in a K8s environment, each with a slightly different configuration. For example, I have a property in one of my Java classes called regions, which extracts its value from application.yml, like:
@Value("${regions}")
private String regions;
Now, after deploying it to the K8s environment, I want to have 4 pods running (I can configure that in the Helm file), and in each pod the regions field should have a different value.
Is this achievable? Can anyone please give any advice?
If you want to run 4 different pods with different configurations, you have to create 4 different Deployments in Kubernetes.
You can create different ConfigMaps as needed, storing either the whole application.yaml file or environment variables, and inject them into the different Deployments.
How to store the whole application.yaml inside a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-first
data:
  application.yaml: |-
    data: test,
    region: first-region
In the same way you can create the ConfigMap for the second deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-second
data:
  application.yaml: |-
    data: test,
    region: second-region
You can inject this ConfigMap into each Deployment, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-app
  name: hello-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: hello-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hello-app
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/nginx/app.yaml
          name: yaml-file
          readOnly: true
      volumes:
      - configMap:
          name: yaml-region-second
          optional: false
        name: yaml-file
Accordingly, you can also create the Helm chart.
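A hedged sketch of how such a Helm chart could parameterize the region - the values key, naming and file layout are assumptions:
# values.yaml (hypothetical)
region: first-region

# templates/configmap.yaml (hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-{{ .Release.Name }}
data:
  application.yaml: |-
    region: {{ .Values.region }}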
If you just want to pass a single environment variable instead of storing the whole file inside the ConfigMap, you can add the value directly to the Deployment.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
  - name: env-print-demo
    image: bash
    env:
    - name: REGION
      value: "one"
    - name: HONORIFIC
      value: "The Most Honorable"
    - name: NAME
      value: "Kubernetes"
    command: ["echo"]
    args: ["$(REGION) $(HONORIFIC) $(NAME)"]
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
For each Deployment your environment will be different, and in Helm you can also dynamically update or override it using a CLI command.
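For example, a sketch of deploying the same chart several times with different region values from the CLI (release and chart names are hypothetical):
helm upgrade --install my-app-one ./my-chart --set region=first-region
helm upgrade --install my-app-two ./my-chart --set region=second-region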