I am unable to pass arguments to a container entry point.
The error I am getting is:
Error: container create failed: time="2022-09-30T12:23:31Z" level=error msg="runc create failed: unable to start container process: exec: \"--base-currency=EUR\": executable file not found in $PATH"
How can I pass an arg containing -- to the args option in a PodSpec?
I have tried using args directly and as environment variables; neither works. I have also tried the arguments without "--", but still no joy.
The Deployment is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: market-pricing
spec:
  replicas: 1
  selector:
    matchLabels:
      application: market-pricing
  template:
    metadata:
      labels:
        application: market-pricing
    spec:
      containers:
      - name: market-pricing
        image: quay.io/brbaker/market-pricing:v0.5.0
        args: ["$(BASE)", "$(CURRENCIES)", "$(OUTPUT)"]
        imagePullPolicy: Always
        volumeMounts:
        - name: config
          mountPath: "/app/config"
          readOnly: true
        env:
        - name: BASE
          value: "--base-currency=EUR"
        - name: CURRENCIES
          value: "--currencies=AUD,NZD,USD"
        - name: OUTPUT
          value: "--dry-run"
      volumes:
      - name: config
        configMap:
          name: app-config   # Provide the name of the ConfigMap you want to mount.
          items:             # An array of keys from the ConfigMap to create as files
          - key: "kafka.properties"
            path: "kafka.properties"
          - key: "app-config.properties"
            path: "app-config.properties"
The Dockerfile is:
FROM registry.redhat.io/ubi9:9.0.0-1640
WORKDIR /app
COPY bin/market-pricing-svc .
CMD ["/app/market-pricing-svc"]
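For context: when args is set in a PodSpec without command, Kubernetes keeps the image's ENTRYPOINT and replaces the image's CMD with the given args. This Dockerfile defines only a CMD and no ENTRYPOINT, so the args become the whole command line and the first one, "--base-currency=EUR", is looked up as an executable, which matches the error above. A minimal sketch of one way around that (assuming the flags are meant for /app/market-pricing-svc) is to set command explicitly in the container spec:
containers:
- name: market-pricing
  image: quay.io/brbaker/market-pricing:v0.5.0
  command: ["/app/market-pricing-svc"]             # keeps the binary as the executable
  args: ["$(BASE)", "$(CURRENCIES)", "$(OUTPUT)"]  # $(VAR) references are expanded from env
Declaring ENTRYPOINT ["/app/market-pricing-svc"] in the Dockerfile instead would have the same effect.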
The ConfigMap is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  kafka.properties: |
    bootstrap.servers=fx-pricing-dev-kafka-bootstrap.kafka.svc.cluster.local:9092
    security.protocol=plaintext
    acks=all
  app-config.properties: |
    market-data-publisher=kafka-publisher
    market-data-source=ecb
    reader=time-reader
Related
Is there a way to use an environment variable as the default for another? For example:
apiVersion: v1
kind: Pod
metadata:
  name: Work
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: ALWAYS_SET
      value: "default"
    - name: SOMETIMES_SET
      value: "custom"
    - name: RESULT
      value: "$(SOMETIMES_SET) ? $(SOMETIMES_SET) : $(ALWAYS_SET)"
I don't think there is a way to do that, but you can try something like this:
apiVersion: v1
kind: Pod
metadata:
  name: Work
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    args:
    - RESULT=${SOMETIMES_SET:-${ALWAYS_SET}}; command_to_run_app
    command:
    - sh
    - -c
    env:
    - name: ALWAYS_SET
      value: "default"
    - name: SOMETIMES_SET
      value: "custom"
How can I use a ConfigMap to write cluster node information to a JSON file?
The command below gives me node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use a ConfigMap to write the above output to a text file?
You can save the output of the command in a file, then use that file (or the data inside it) to create a ConfigMap.
After creating the ConfigMap, you can mount it as a file in your Deployment/Pod.
For example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: appname
  name: appname
  namespace: development
spec:
  selector:
    matchLabels:
      app: appname
      tier: sometier
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
      - env:
        - name: NODE_ENV
          value: development
        - name: PORT
          value: "3000"
        - name: SOME_VAR
          value: xxx
        image: someimage
        imagePullPolicy: Always
        name: appname
        volumeMounts:
        - name: your-volume-name
          mountPath: "your/path/to/store/the/file"
          readOnly: true
      volumes:
      - name: your-volume-name
        configMap:
          name: your-configmap-name
          items:
          - key: your-filename-inside-pod
            path: your-filename-inside-pod
I added the following configuration to the deployment:
volumeMounts:
- name: your-volume-name
  mountPath: "your/path/to/store/the/file"
  readOnly: true
volumes:
- name: your-volume-name
  configMap:
    name: your-configmap-name
    items:
    - key: your-filename-inside-pod
      path: your-filename-inside-pod
To create the ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
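Applied to this question, a sketch of the two steps might look like this (the file name nodes.txt and the ConfigMap name node-info are just examples):
# Save the node hostnames to a file, then turn that file into a ConfigMap key.
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > nodes.txt
kubectl create configmap node-info --from-file=nodes.txt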
Or just create the ConfigMap with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: your-namespace
data:
  your-filename-inside-pod: |
    output of command
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  node-info.json: |
    {
      "array": [
        1,
        2
      ],
      "boolean": true,
      "number": 123,
      "object": {
        "a": "egg",
        "b": "egg1"
      },
      "string": "Welcome"
    }
Then remember to add the following lines to the spec section of the Pod configuration file:
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
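For context, that env block sits under a container entry in the Pod spec; a minimal sketch (the Pod name and busybox image are placeholders, only the ConfigMap reference comes from the example above):
apiVersion: v1
kind: Pod
metadata:
  name: node-info-demo            # placeholder name
spec:
  containers:
  - name: app
    image: busybox                # placeholder image
    command: ["sh", "-c", "echo \"$NODE_CONFIG_JSON\"; sleep 3600"]
    env:
    - name: NODE_CONFIG_JSON
      valueFrom:
        configMapKeyRef:
          name: example-config
          key: node-info.json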
You can also use a PodPreset.
PodPreset is an object that lets you inject information, e.g. environment variables, into Pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: your-pod
  env:
  - name: DB_PORT
    value: "6379"
  envFrom:
  - configMapRef:
      name: etcd-env-config
But remember that you also have to add the following section to your Pod definition, matching your PodPreset and ConfigMap configuration:
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
You can find more information here: podpreset, pod-preset-configuration.
I have created a Docker registry as a pod with a service, and login, push, and pull are working. But when I try to create a pod that uses an image from this registry, the kubelet can't pull the image from the registry.
My registry pod:
apiVersion: v1
kind: Pod
metadata:
  name: registry-docker
  labels:
    registry: docker
spec:
  containers:
  - name: registry-docker
    image: registry:2
    volumeMounts:
    - mountPath: /opt/registry/data
      name: data
    - mountPath: /opt/registry/auth
      name: auth
    ports:
    - containerPort: 5000
    env:
    - name: REGISTRY_AUTH
      value: htpasswd
    - name: REGISTRY_AUTH_HTPASSWD_PATH
      value: /opt/registry/auth/htpasswd
    - name: REGISTRY_AUTH_HTPASSWD_REALM
      value: Registry Realm
  volumes:
  - name: data
    hostPath:
      path: /opt/registry/data
  - name: auth
    hostPath:
      path: /opt/registry/auth
The pod I would like to create from the registry:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: 10.96.81.252:5000/nginx:latest
  imagePullSecrets:
  - name: registrypullsecret
Error I get from my registry logs:
time="2018-08-09T07:17:21Z" level=warning msg="error authorizing
context: basic authentication challenge for realm \"Registry Realm\":
invalid authorization credential" go.version=go1.7.6
http.request.host="10.96.81.252:5000"
http.request.id=655f76a6-ef05-4cdc-a677-d10f70ed557e
http.request.method=GET http.request.remoteaddr="10.40.0.0:59088"
http.request.uri="/v2/" http.request.useragent="docker/18.06.0-ce
go/go1.10.3 git-commit/0ffa825 kernel/4.4.0-130-generic os/linux
arch/amd64 UpstreamClient(Go-http-client/1.1)"
instance.id=ec01566d-5397-4c90-aaac-f56d857d9ae4 version=v2.6.2
10.40.0.0 - - [09/Aug/2018:07:17:21 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.06.0-ce go/go1.10.3 git-commit/0ffa825
kernel/4.4.0-130-generic os/linux arch/amd64
UpstreamClient(Go-http-client/1.1)"
The secret I use, created from cat ~/.docker/config.json | base64:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZaRzlqYTJWeU1USXoiCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE4LjA2$
type: kubernetes.io/dockerconfigjson
The modification I have made to my default serviceaccount:
cat ./sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-08-03T09:49:47Z
  name: default
  namespace: default
  # resourceVersion: "51625"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 8eecb592-9702-11e8-af15-02f6928eb0b4
secrets:
- name: default-token-rfqfp
imagePullSecrets:
- name: registrypullsecret
file ~/.docker/config.json:
{
    "auths": {
        "localhost:5000": {
            "auth": "YWRtaW46ZG9ja2VyMTIz"
        }
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/18.06.0-ce (linux)"
    }
}
The auths data has login credentials for "localhost:5000", but your image is at "10.96.81.252:5000/nginx:latest".
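One way to fix it is to recreate the pull secret against the registry address the Pod actually pulls from; a sketch (the username and password are placeholders):
# Recreate the pull secret for 10.96.81.252:5000 instead of localhost:5000.
kubectl delete secret registrypullsecret
kubectl create secret docker-registry registrypullsecret \
  --docker-server=10.96.81.252:5000 \
  --docker-username=<user> \
  --docker-password=<password>
Alternatively, docker login 10.96.81.252:5000 first so ~/.docker/config.json gains an entry for that host, then regenerate the secret from the updated file.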
I am trying to install JHipster Registry on Kubernetes as a StatefulSet with the given jhipster-registry.yml (from Git).
I see the Service and StatefulSet coming up, but I don't see the worker Pods.
Could you share how to get the worker Pods up?
Update:
I am totally new to JHipster and Kubernetes. Thanks for your response. I have pasted jhipster-register.yml below.
The command I have tried is kubectl create -f jhipster-register.yml.
Error from kubectl describe statefulset:
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      application-config
    Optional:  false
Volume Claims:  <none>
Events:
  FirstSeen  LastSeen  Count  From         SubObjectPath  Type     Reason        Message
  ---------  --------  -----  ----         -------------  ----     ------        -------
  1d         5s        4599   statefulset                 Warning  FailedCreate  create Pod jhipster-registry-0 in StatefulSet jhipster-registry failed error: Pod "jhipster-registry-0" is invalid: spec.containers[0].env[8].name: Invalid value: "JHIPSTER_LOGGING_LOGSTASH_ENABLED=true": a valid C identifier must start with alphabetic character or '_', followed by a string of alphanumeric characters or '_' (e.g. 'my_name', or 'MY_NAME', or 'MyName', regex used for validation is '[A-Za-z_][A-Za-z0-9_]*')
YML:
apiVersion: v1
kind: Service
metadata:
  name: jhipster-registry
  labels:
    app: jhipster-registry
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - port: 9761
  clusterIP: None
  selector:
    app: jhipster-registry
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: jhipster-registry
spec:
  serviceName: jhipster-registry
  replicas: 2
  template:
    metadata:
      labels:
        app: jhipster-registry
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: jhipster-registry
        image: jhipster/jhipster-registry:v3.2.3
        ports:
        - containerPort: 9761
        env:
        - name: CLUSTER_SIZE
          value: "2"
        - name: STATEFULSET_NAME
          value: "jhipster-registry"
        - name: STATEFULSET_NAMESPACE
          value: "stage"
        - name: SPRING_PROFILES_ACTIVE
          value: prod,swagger,native
        - name: SECURITY_USER_PASSWORD
          value: admin
        - name: JHIPSTER_SECURITY_AUTHENTICATION_JWT_SECRET
          value: secret-is-nothing-its-just-inside-you
        - name: EUREKA_CLIENT_FETCH_REGISTRY
          value: 'true'
        - name: EUREKA_CLIENT_REGISTER_WITH_EUREKA
          value: 'true'
        - name: JHIPSTER_LOGGING_LOGSTASH_ENABLED=true
          value: 'true'
        - name: GIT_URI
          value: https://github.com/jhipster/jhipster-registry/
        - name: GIT_SEARCH_PATHS
          value: central-config
        command:
        - "/bin/sh"
        - "-ec"
        - |
          HOSTNAME=$(hostname)
          export EUREKA_INSTANCE_HOSTNAME="${HOSTNAME}.jhipster-registry.${STATEFULSET_NAMESPACE}.svc.cluster.local"
          echo "Setting EUREKA_INSTANCE_HOSTNAME=${EUREKA_INSTANCE_HOSTNAME}"
          echo "Configuring Registry Replicas for CLUSTER_SIZE=${CLUSTER_SIZE}"
          LAST_POD_INDEX=$((${CLUSTER_SIZE} - 1))
          REPLICAS=""
          for i in $(seq 0 $LAST_POD_INDEX); do
            REPLICAS="${REPLICAS}http://admin:${SECURITY_USER_PASSWORD}@${STATEFULSET_NAME}-${i}.jhipster-registry.${STATEFULSET_NAMESPACE}.svc.cluster.local:8761/eureka/"
            if [ $i -lt $LAST_POD_INDEX ]; then
              REPLICAS="${REPLICAS},"
            fi
          done
          export EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=$REPLICAS
          echo "EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=${REPLICAS}"
          java -jar /jhipster-registry.war --spring.cloud.config.server.git.uri=${GIT_URI} --spring.cloud.config.server.git.search-paths=${GIT_SEARCH_PATHS} -Djava.security.egd=file:/dev/./urandom
        volumeMounts:
        - name: config-volume
          mountPath: /central-config
      volumes:
      - name: config-volume
        configMap:
          name: application-config
Your StatefulSet is invalid because of an invalid env name.
- name: JHIPSTER_LOGGING_LOGSTASH_ENABLED=true
  value: 'true'
You have used the env name JHIPSTER_LOGGING_LOGSTASH_ENABLED=true, which is invalid.
The correct format is [A-Za-z_][A-Za-z0-9_]*.
That's why the Pods are not coming up.
Change the StatefulSet to use:
- name: JHIPSTER_LOGGING_LOGSTASH_ENABLED
  value: 'true'
This will work.
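After changing the env name, a quick way to confirm the worker Pods come up (a sketch; recreating the StatefulSet avoids any stale spec):
kubectl delete statefulset jhipster-registry
kubectl create -f jhipster-register.yml
kubectl get pods -l app=jhipster-registry -w   # expect jhipster-registry-0 and jhipster-registry-1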
I'm unable to find good information describing these errors:
[sarah@localhost helm] helm install statefulset --name statefulset --debug
[debug] Created tunnel using local port: '33172'
[debug] SERVER: "localhost:33172"
[debug] Original chart version: ""
[debug] CHART PATH: /home/helm/statefulset/
Error: error validating "": error validating data: [field spec.template for v1beta1.StatefulSetSpec is required, field spec.serviceName for v1beta1.StatefulSetSpec is required, found invalid field containers for v1beta1.StatefulSetSpec]
I'm still new to Helm; I've built two working charts that were similar to this template and didn't have these errors, even though the code isn't much different. I'm thinking there might be some kind of formatting error that I'm not noticing. Either that, or it's due to the different type (the others were Pods, this is StatefulSet).
The YAML file it's referencing is here:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "{{.Values.PrimaryName}}"
  labels:
    name: "{{.Values.PrimaryName}}"
    app: "{{.Values.PrimaryName}}"
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    "helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
  #serviceAccount: "{{.Values.PrimaryName}}-sa"
  containers:
  - name: {{.Values.ContainerName}}
    image: "{{.Values.PostgresImage}}"
    ports:
    - containerPort: 5432
      protocol: TCP
      name: postgres
    resources:
      requests:
        cpu: {{default "100m" .Values.Cpu}}
        memory: {{default "100M" .Values.Memory}}
    env:
    - name: PGHOST
      value: /tmp
    - name: PG_PRIMARY_USER
      value: primaryuser
    - name: PG_MODE
      value: set
    - name: PG_PRIMARY_PORT
      value: "5432"
    - name: PG_PRIMARY_PASSWORD
      value: "{{.Values.PrimaryPassword}}"
    - name: PG_USER
      value: testuser
    - name: PG_PASSWORD
      value: "{{.Values.UserPassword}}"
    - name: PG_DATABASE
      value: userdb
    - name: PG_ROOT_PASSWORD
      value: "{{.Values.RootPassword}}"
    volumeMounts:
    - name: pgdata
      mountPath: "/pgdata"
      readOnly: false
  volumes:
  - name: pgdata
    persistentVolumeClaim:
      claimName: {{.Values.PVCName}}
Would someone be able to a) point me in the right direction on how to implement the required spec.template and spec.serviceName fields, b) help me understand why the field 'containers' is invalid, and/or c) mention any tool that can help debug Helm charts? I've attempted helm lint and the --debug flag, but helm lint shows no errors, and the flag's output is shown with the errors above.
Is it possible the errors are also coming from a different file?
StatefulSet objects have a different structure than Pods do. You need to modify your YAML file a little:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "{{.Values.PrimaryName}}"
  labels:
    name: "{{.Values.PrimaryName}}"
    app: "{{.Values.PrimaryName}}"
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    "helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
  selector:
    matchLabels:
      app: "" # has to match .spec.template.metadata.labels
  serviceName: "" # put your serviceName here
  replicas: 1 # by default is 1
  template:
    metadata:
      labels:
        app: "" # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: {{.Values.ContainerName}}
        image: "{{.Values.PostgresImage}}"
        ports:
        - containerPort: 5432
          protocol: TCP
          name: postgres
        resources:
          requests:
            cpu: {{default "100m" .Values.Cpu}}
            memory: {{default "100M" .Values.Memory}}
        env:
        - name: PGHOST
          value: /tmp
        - name: PG_PRIMARY_USER
          value: primaryuser
        - name: PG_MODE
          value: set
        - name: PG_PRIMARY_PORT
          value: "5432"
        - name: PG_PRIMARY_PASSWORD
          value: "{{.Values.PrimaryPassword}}"
        - name: PG_USER
          value: testuser
        - name: PG_PASSWORD
          value: "{{.Values.UserPassword}}"
        - name: PG_DATABASE
          value: userdb
        - name: PG_ROOT_PASSWORD
          value: "{{.Values.RootPassword}}"
        volumeMounts:
        - name: pgdata
          mountPath: "/pgdata"
          readOnly: false
      volumes:
      - name: pgdata
        persistentVolumeClaim:
          claimName: {{.Values.PVCName}}
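As for tooling, beyond helm lint it can help to render the chart locally and to dry-run it against the API server so validation errors point at the generated manifest; a sketch, assuming the Helm 2 CLI shown in the question:
# Render the templates locally and inspect the generated StatefulSet.
helm template /home/helm/statefulset/
# Send the rendered objects to the API server for validation without installing anything.
helm install /home/helm/statefulset/ --name statefulset --dry-run --debug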