Use of Kubernetes secret value in Tekton Pipeline params

I am currently implementing a CI pipeline using Tekton. I was wondering if there is a way to use some kind of valueFromEnv for pipeline params.
For example, to authenticate a Task for SonarQube analysis against my company's Sonar host, I need the login token, which I would rather inject via a reference to a secret than pass directly.
As I am relatively new to Tekton, I am unsure whether I just haven't grasped the Tekton way of doing this. Two possibilities that crossed my mind were:
A "Pre-Task" which reads the env in its step definition and publishes it as a result (which can then be used as a param to the next Task)
Mounting the secret as a file and having the Task read it (e.g. by catting it)
Neither of those ideas feels like the right way to do it, but maybe I am wrong here.
Any help is appreciated!

Your first idea is not impossible, but in my eyes it is ugly as well. You can set the desired env variable in your image via the Dockerfile and use it later in the task:
Dockerfile (example):
FROM gradle:7.4-jdk11
USER root
RUN apt-get update && apt-get install -y npm
ENV YOUR_VARIABLE_KEY="any VALUE"
Afterwards you can just use it in script tasks like:
echo $YOUR_VARIABLE_KEY
RECOMMENDED (for OpenShift)
The cleaner way is to define it as a Secret (key/value) or as a SealedSecret (Opaque).
This can be done directly within the namespace via the OpenShift UI, or as code.
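For example, a minimal Secret matching the names used in the task below could look like this (the key name follows the task snippet; the token value is a placeholder):
apiVersion: v1
kind: Secret
metadata:
  name: any-secret
type: Opaque
stringData:
  any-secret-key: "<your-sonar-token>"   # placeholder value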
The next step is to "bind" it in your task:
spec:
  description: |-
    any
  params:
    - name: any-secret-name
      default: "any-secret"
      type: string
  stepTemplate:
    name: ""
    resources:
      limits:
        cpu: 1500m
        memory: 4Gi
      requests:
        cpu: 250m
        memory: 500Mi
  steps:
    - image: $(params.BUILDER_IMAGE)
      name: posting
      resources:
        limits:
          cpu: 1500m
          memory: 4Gi
        requests:
          cpu: 250m
          memory: 500Mi
      env:
        - name: YOU_NAME_IT
          valueFrom:
            secretKeyRef:
              name: $(params.any-secret-name)
              key: "any-secret-key"
      script: |
        #!/usr/bin/env sh
        set -eu
        set +x
        echo $YOU_NAME_IT
        set -x
BEWARE!!! If you run it that way, nothing should be logged; if you leave out the set +x before and the set -x after the echo, the value is logged.
Now, I saw you may not be working in OpenShift, so here is the Kubernetes page: https://kubernetes.io/docs/concepts/configuration/secret/ => "Using Secrets as environment variables" (this is close to your first idea, and the whole page reads like a good cookbook).
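For reference, a minimal plain-Kubernetes sketch of that "Secrets as environment variables" approach (pod, secret, and key names here are made-up placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "echo token is set; sleep 3600"]   # avoid echoing the secret itself
      env:
        - name: SONAR_TOKEN
          valueFrom:
            secretKeyRef:
              name: sonar-credentials   # placeholder Secret name
              key: token                # placeholder key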

Related

How to modify Apache NiFi node addresses while deploying in Kubernetes?

In Kubernetes I would like to deploy an Apache NiFi cluster as a StatefulSet with 3 nodes.
The problem is that I would like to modify the node addresses recursively in an init container in my YAML file.
I have to modify these parameters for each node in Kubernetes:
'nifi.remote.input.host'
'nifi.cluster.node.address'
I need these FQDNs added recursively to the NiFi properties:
nifi-0.nifi.NAMESPACENAME.svc.cluster.local
nifi-1.nifi.NAMESPACENAME.svc.cluster.local
nifi-2.nifi.NAMESPACENAME.svc.cluster.local
I have to modify the properties before deploying, so I tried the following init container, but it doesn't work:
initContainers:
  - name: modify-nifi-properties
    image: busybox:v01
    command:
      - sh
      - -c
      - |
        # Modify nifi.properties to use the correct hostname for each node
        for i in {1..3}; do
          sed -i "s/nifi-$((i-1))/nifi-$((i-1)).nifinamespace.nifi.svc.cluster.local/g" /opt/nifi/conf/nifi.properties
        done
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
How can I do it?
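One thing worth checking, as a hedged guess: {1..3} brace expansion is a bash feature that busybox sh does not expand, so the loop above may never iterate the way you expect. A POSIX-sh sketch of the same loop (keeping your paths and substitution pattern) would be:
#!/bin/sh
# POSIX-compatible loop; busybox sh does not expand {1..3}
i=0
while [ "$i" -lt 3 ]; do
  sed -i "s/nifi-$i/nifi-$i.nifinamespace.nifi.svc.cluster.local/g" /opt/nifi/conf/nifi.properties
  i=$((i+1))
done
Note also that the edit only survives into the main NiFi container if /opt/nifi/conf sits on a volume shared between the init container and the NiFi container.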

How can I use multiple Community Mongo replica sets in one Kubernetes namespace?

I've been running a community version of the MongoDB replica set in Kubernetes for over a year now. For reference, I am deploying with these Helm charts:
https://github.com/mongodb/helm-charts/tree/main/charts/community-operator
https://github.com/mongodb/helm-charts/tree/main/charts/community-operator-crds
I now have a need to spin up a couple of extra replica sets. I cannot reuse the existing rs because this new deployment is software from one of our subcontractors and we want to keep it separate. When I try to start a new rs, all the rs in the namespace get into a bad state, failing readiness probes.
Do the different replica sets each require their own operator? In an attempt to test that theory, I modified the values.yaml and deployed an operator for each rs, but I'm still getting the error.
I think I am missing a config on the DB deployment that tells it which operator to use, but I can't find that config option in the helm chart (referencing the 2nd link from earlier https://github.com/mongodb/helm-charts/blob/main/charts/community-operator-crds/templates/mongodbcommunity.mongodb.com_mongodbcommunity.yaml)
EDIT: For a little extra info, it seems like the MongoDB can be used without issue. Kubernetes is just showing an error, saying the readiness probes have failed.
EDIT: Actually, this didn't fix the entire problem. It seems to work for a few hours, then all the readiness probes for all rs start to fail again.
It looks like you can deploy multiple operators and rs in a single namespace. What I was missing was linking the operators and rs together.
I also left out an important detail. I don't actually deploy the MongoDB using that Helm chart, because the config options are too limited. At the bottom of the values.yaml there's a setting called createResource, which I have set to false. I then deploy a separate YAML that defines the Mongo replica set like so:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: my-mongo-rs
spec:
  members: 3
  type: ReplicaSet
  version: "5.0.5"
  security:
    authentication:
      modes: ["SCRAM"]
    # tls:
    #   enabled: true
    #   certificateKeySecretRef:
    #     name: mongo-tls
    #   caConfigMapRef:
    #     name: mongo-ca
  statefulSet:
    spec:
      template:
        spec:
          containers:
            - name: "mongodb-agent"
              resources:
                requests:
                  cpu: 200m
                  memory: 500M
                limits: {}
            - name: "mongod"
              resources:
                requests:
                  cpu: 1000m
                  memory: 5G
                limits: {}
          serviceAccountName: my-mongodb-database
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            storageClassName: my-retainer-sc
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 20G
  # replicaSetHorizons:
  #   - external: myhostname.com:9000
  #   - external: myhostname.com:9001
  #   - external: myhostname.com:9002
  users:
    - name: my-mongo-user
      db: admin
      passwordSecretRef: # a reference to the secret that will be used to generate the user's password
        name: my-mongo-user-creds
      roles:
        - db: admin
          name: clusterAdmin
        - db: admin
          name: readWriteAnyDatabase
        - db: admin
          name: dbAdminAnyDatabase
        - db: admin
          name: userAdminAnyDatabase
        - db: admin
          name: root
      scramCredentialsSecretName: my-user-scram
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
Anywhere in this config where you see "my", I actually use my app name but I've genericized it for this post.
Linking the operator and the rs together is done using a name. In the operator YAML it's this line:
database:
  name: my-mongodb-database
That name creates a ServiceAccount in Kubernetes, and you must define your database pod to use that specific ServiceAccount. Otherwise, it defaults to a ServiceAccount named mongodb-database, which either won't exist, or you'll end up with multiple rs using the same ServiceAccount (and therefore the same operator).
And in the rs YAML it's this line:
serviceAccountName: my-mongodb-database
This links it to the correct ServiceAccount.
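So, schematically, the link is just this pair of matching names (using the generic placeholders from above):
# operator side: Helm values for the community-operator chart
database:
  name: my-mongodb-database

# replica set side: inside the MongoDBCommunity statefulSet.spec.template.spec
serviceAccountName: my-mongodb-database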

The active users count doesn't match on execution through 2 containers in Kubernetes

We are running a JMX test through Taurus using 2 containers in Kubernetes.
We are seeing only 50 users in the results instead of 100 (50 * 2 containers).
Can anyone please throw some light on whether we are missing something here?
We get two JTL files, and checking them individually or combined, the total users are the same 50 only. Is it related to the same thread name being generated and logged in the JTL files, or something else?
Here are the YAML details:
apiVersion: v1
kind: ConfigMap
metadata:
  name: joba
  namespace: AAA
data:
  protocol: "https"
  serverUrl: "testurl"
  users: "50"
  duration: "1m"
  nodeName: "Nodename"
---
apiVersion: batch/v1
kind: Job
metadata:
  name: perftest
  namespace: dev
spec:
  template:
    spec:
      containers:
        - args: ["split -l ${users} --numeric-suffixes Test.csv Test-; /bin/bash ./Shellscripttoread_assignvariables.sh;"]
          command: ["/bin/bash", "-c"]
          env:
            - name: JobNumber
              value: "00"
          envFrom:
            - configMapRef:
                name: job-multi
          image: imagepath
          name: ubuntu-00
          resources:
            limits:
              memory: "8000Mi"
              cpu: "2880m"
        - args: ["split -l ${users} --numeric-suffixes Test.csv Test-; /bin/bash ./Shellscripttoread_assignvariables.sh;"]
          command: ["/bin/bash", "-c"]
          env:
            - name: JobNumber
              value: "01"
          envFrom:
            - configMapRef:
                name: job-multi
          image: imagepath
          name: ubuntu-01
          resources:
            limits:
              memory: "8000Mi"
              cpu: "2880m"
Your YAML is very nice, but it doesn't tell anything about how you launch JMeter or what the shell scripts you invoke are doing.
If you just kick off 2 separate JMeter instances by means of k8s, JMeter will look at the number of active threads from the .jtl file, and given that the Sampler/Transaction names are the same, JMeter "thinks" that the tests were executed on one engine.
The workaround is to add e.g. the __machineName() or __machineIP() function to the sampler/transaction labels; this way JMeter will distinguish the results coming from different instances and you will see the real number of active threads.
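For example, a sampler/transaction label like the following (the name itself is made up) ends up unique per container, so the two containers' results are no longer folded together:
Home Page - ${__machineName()}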
The solution would be running your JMeter test in Distributed Mode, so the master runs in one pod and the slaves in their own pods, with the master responsible for transferring the .jmx script to the slaves and collecting the results from them.

kubernetes with multiple jobs counter

New to Kubernetes, I'm trying to move a current pipeline we have that uses a queuing system without k8s.
I have a Perl script that generates a list of batch jobs (YAML files) for each of the samples that I have to process.
Then I run kubectl apply --recursive -f 16S_jobscripts/
Each sample needs to be treated sequentially and go through different processing steps, for example:
SampleA -> clean -> quality -> some_calculation
SampleB -> clean -> quality -> some_calculation
and so on for 300 samples.
So the idea is to prepare all the YAML files and run them sequentially. This is working.
BUT, with this approach I need to wait until all samples are processed (let's say that all the clean jobs need to be completed before I run the next quality jobs).
What would be the best approach in such a case: run each sample independently? How?
The YAML below describes one sample for one job. You can see that I'm using a counter (mergereads-1 for sample 1 (A)).
apiVersion: batch/v1
kind: Job
metadata:
  name: merge-reads-1
  namespace: namespace-id-16s
  labels:
    jobgroup: mergereads
spec:
  template:
    metadata:
      name: mergereads-1
      labels:
        jobgroup: mergereads
    spec:
      containers:
        - name: mergereads-$idx
          image: .../bbmap:latest
          command: ['sh', '-c']
          args: ['
            cd workdir &&
            bbmerge.sh -Xmx1200m in1=files/trimmed/1.R1.trimmed.fq.gz in2=files/trimmed/1.R2.trimmed.fq.gz out=files/mergedpairs/1.merged.fq.gz merge=t mininsert=300 qtrim2=t minq=27 ratiomode=t &&
            ls files/mergedpairs/
          ']
          resources:
            limits:
              cpu: 1
              memory: 2000Mi
            requests:
              cpu: 0.8
              memory: 1500Mi
          volumeMounts:
            - mountPath: '/workdir'
              name: db
      volumes:
        - name: db
          persistentVolumeClaim:
            claimName: workdir
      restartPolicy: Never
If I understand you correctly, you can use parallel jobs with the use of Job patterns.
It does support parallel processing of a set of independent but related work items.
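As a rough sketch of that pattern (assuming a Kubernetes version recent enough to support Indexed Jobs; the image is a placeholder for your bbmap image, and the counts are arbitrary), one Job could fan the clean step out over all samples:
apiVersion: batch/v1
kind: Job
metadata:
  name: clean-all-samples
spec:
  completions: 300          # one work item per sample
  parallelism: 10           # how many samples are cleaned at once
  completionMode: Indexed   # each pod gets JOB_COMPLETION_INDEX=0..299
  template:
    spec:
      containers:
        - name: clean
          image: busybox     # placeholder; use your bbmap image here
          command: ["sh", "-c"]
          args: ["echo cleaning sample number $JOB_COMPLETION_INDEX"]
      restartPolicy: Never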
Also you can consider using Argo.
https://github.com/argoproj/argo
Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).
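For illustration, a minimal Argo Workflow sketch (names, image, and commands are placeholders, not taken from your pipeline) that runs clean -> quality -> some_calculation per sample while processing the samples in parallel:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sample-pipeline-
spec:
  entrypoint: all-samples
  templates:
    - name: all-samples
      steps:
        - - name: per-sample                   # one parallel branch per sample
            template: per-sample
            arguments:
              parameters:
                - name: sample
                  value: "{{item}}"
            withItems: ["sampleA", "sampleB"]  # extend this list to all 300 samples
    - name: per-sample                         # sequential steps for a single sample
      inputs:
        parameters:
          - name: sample
      steps:
        - - name: clean
            template: run-step
            arguments:
              parameters:
                - name: cmd
                  value: "clean {{inputs.parameters.sample}}"
        - - name: quality
            template: run-step
            arguments:
              parameters:
                - name: cmd
                  value: "quality {{inputs.parameters.sample}}"
        - - name: some-calculation
            template: run-step
            arguments:
              parameters:
                - name: cmd
                  value: "calc {{inputs.parameters.sample}}"
    - name: run-step
      inputs:
        parameters:
          - name: cmd
      container:
        image: busybox                         # placeholder; use your processing image
        command: [sh, -c]
        args: ["echo running {{inputs.parameters.cmd}}"]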
Please let me know if that helps.

Error in running DPDK L2FWD application on a container managed by Kubernetes

I am trying to run the DPDK L2FWD application in a container managed by Kubernetes.
To achieve this I have done the steps below:
I have created a single-node K8s setup where both master and client are running on the host machine. As the network plug-in, I have used Calico.
To create a customized DPDK Docker image, I have used the Dockerfile below:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y net-tools
RUN apt-get install -y python
RUN apt-get install -y kmod
RUN apt-get install -y iproute2
RUN apt-get install -y net-tools
ADD ./dpdk/ /home/sdn/dpdk/
WORKDIR /home/sdn/dpdk/
To run the DPDK application inside the pod, the host directories below are mounted into the pod with privileged access:
/mnt/huge
/usr
/lib
/etc
Below is the k8s YAML used to create the pod:
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-pod126
spec:
  containers:
    - name: dpdk126
      image: dpdk-test126
      imagePullPolicy: IfNotPresent
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo hello; sleep 10;done"]
      resources:
        requests:
          memory: "2Gi"
          cpu: "100m"
      volumeMounts:
        - name: hostvol1
          mountPath: /mnt/huge
        - name: hostvol2
          mountPath: /usr
        - name: hostvol3
          mountPath: /lib
        - name: hostvol4
          mountPath: /etc
      securityContext:
        privileged: true
  volumes:
    - name: hostvol1
      hostPath:
        path: /mnt/huge
    - name: hostvol2
      hostPath:
        path: /usr
    - name: hostvol3
      hostPath:
        path: /home/sdn/kubernetes-test/libtest
    - name: hostvol4
      hostPath:
        path: /etc
The configurations below are already done on the host:
Huge page mounting.
Interface binding in user space.
After successful creation of the pod, when trying to run the DPDK L2FWD application inside the pod, I am getting the error below:
root@dpdk-pod126:/home/sdn/dpdk# ./examples/l2fwd/build/l2fwd -c 0x0f -- -p 0x03 -q 1
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: 1007 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: FATAL: Cannot get hugepage information.
EAL: Cannot get hugepage information.
EAL: Error - exiting with code: 1
Cause: Invalid EAL arguments
According to this, you might be missing medium: HugePages from your hugepage volume.
Also, hugepages can be a bit finicky. Can you provide the output of:
cat /proc/meminfo | grep -i huge
and check if there are any files in /mnt/huge?
Also, maybe this can be helpful. Can you somehow check if the hugepages are being mounted with mount -t hugetlbfs nodev /mnt/huge?
First of all, you have to verify that you have enough hugepages in your system. Check it with the kubectl command:
kubectl describe nodes
where you could see something like this:
Capacity:
  cpu:                12
  ephemeral-storage:  129719908Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      8Gi
  memory:             65863024Ki
  pods:               110
If your hugepages-2Mi is empty, then your k8s doesn't see the mounted hugepages.
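For reference, reserving and mounting 2 MB hugepages on the host typically looks something like this (the count is just an example; the kubelet may need a restart before it reports the new capacity):
# reserve 1024 x 2 MB hugepages (2 GiB) on the host
sysctl -w vm.nr_hugepages=1024

# mount hugetlbfs if it is not already mounted
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

# verify
cat /proc/meminfo | grep -i huge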
After mounting hugepages on your host, you can prepare your pod to work with hugepages. You don't need to mount the hugepages folder as you showed. You can simply add an emptyDir volume like this:
volumes:
  - name: hugepage-2mi
    emptyDir:
      medium: HugePages-2Mi
HugePages-2Mi is a specific medium name that corresponds to hugepages of 2 MB size. If you want to use 1 GB hugepages, there is another resource for it: hugepages-1Gi.
After defining the volume, you can use it in volumeMounts like this:
volumeMounts:
  - mountPath: /hugepages-2Mi
    name: hugepage-2mi
And there is one additional step. You have to define resource limitations for hugepages usage:
resources:
  limits:
    hugepages-2Mi: 128Mi
    memory: 128Mi
  requests:
    memory: 128Mi
After all these steps, you can run your container with hugepages inside the container.
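Putting the snippets above together, the pod spec could look roughly like this (image name and sizes are just the examples used earlier, not recommendations):
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-pod126
spec:
  containers:
    - name: dpdk126
      image: dpdk-test126
      command: ["/bin/sh", "-c", "while true; do echo hello; sleep 10; done"]
      securityContext:
        privileged: true
      resources:
        limits:
          hugepages-2Mi: 128Mi
          memory: 128Mi
        requests:
          memory: 128Mi
      volumeMounts:
        - mountPath: /hugepages-2Mi
          name: hugepage-2mi
  volumes:
    - name: hugepage-2mi
      emptyDir:
        medium: HugePages-2Mi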
As #AdamTL mentioned, you can find additional info here