How to kustomize resources in k8s? - kubernetes

Assume I have a CronJob, a Service, and a Deployment, just like below:
# Create a directory to hold the base
mkdir base
# Create a base/deployment.yaml
cat <<EOF > base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
EOF
# Create a base/service.yaml file
cat <<EOF > base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
EOF
# Create a base/cronjob.yaml file
cat <<EOF > base/cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
EOF
# Create a base/kustomization.yaml
cat <<EOF > base/kustomization.yaml
resources:
- deployment.yaml
- service.yaml
- cronjob.yaml
EOF
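Once the base is in place, you can render it locally to check that the three resources compose as expected (kustomize is built into kubectl):
# Render the manifests without applying them
kubectl kustomize base
# Or build and apply in one step
kubectl apply -k base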
Normally everything works fine, but sometimes my project doesn't need to run a cronjob, so I want to disable the cronjob.yaml import.
So, is there a way to do that? For example, something like Jinja2:
- deployment.yaml
- service.yaml
{{ IF set CRONJOB }}
- cronjob.yaml
{{ ENDIF }}
I understand what @AniAggarwal posted: I can use a filter before running my kubectl apply -f, but that is not a very good fit for my project.
Any suggestions are welcome.

Assume the following is the file that gets generated after the kustomize command, and the Service is supposed to be conditional, like your CronJob.
#@ load("@ytt:data", "data")
kind: Pod
apiVersion: v1
metadata:
  name: echo-app
  labels:
    app: demo
spec:
  containers:
  - name: nginx
    image: nginx
#@ if/end data.values.service.enabled:
---
kind: Service
apiVersion: v1
metadata:
  name: echo-service
spec:
  selector:
    app: demo
  ports:
  - name: port
    port: 80
You can pipe the output of your kustomize command to ytt to add or remove the Service.
kustomize build | ytt --data-value-yaml service.enabled=false -f - | kubectl apply -f -
Note --data-value-yaml rather than --data-value: the latter would pass the literal string "false", which is truthy in the template.
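One assumption worth making explicit: ytt only accepts command-line overrides for data values that have been declared, so the sketch above also needs a small data-values file passed alongside (the file name and default are illustrative):
# values.yaml
#@data/values
---
service:
  enabled: true
kustomize build | ytt -f values.yaml --data-value-yaml service.enabled=false -f - | kubectl apply -f -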
Check out the project playground for other examples:
https://carvel.dev/ytt/#playground
https://github.com/vmware-tanzu/carvel-ytt/blob/develop/examples/data-values/run.sh
Hope it is the solution you're looking for. Good luck!
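If you'd rather not add a second tool, plain kustomize can express the same choice with overlays: keep cronjob.yaml out of the base and pull it in from an overlay. A minimal sketch, with illustrative directory names:
# overlays/with-cronjob/kustomization.yaml (cronjob.yaml moved here from base/)
resources:
- ../../base
- cronjob.yaml
Then kubectl apply -k base deploys without the CronJob, and kubectl apply -k overlays/with-cronjob deploys with it.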

Related

TensorFlow Setting model_config_file runtime argument in YAML file for K8s

I've been having a hell of a time trying to figure out how to serve multiple models using a YAML configuration file for K8s.
I can run it directly in Bash using the following, but I'm having trouble converting it to YAML.
docker run -p 8500:8500 -p 8501:8501 \
  [container id] \
  --model_config_file=/models/model_config.config \
  --model_config_file_poll_wait_seconds=60
I read that model_config_file can be added using a command element, but I'm not sure where to put it, and I keep receiving errors about invalid commands or not being able to find the file.
command:
  - '--model_config_file=/models/model_config.config'
  - '--model_config_file_poll_wait_seconds=60'
Sample YAML config for K8s below; where would the command go, referencing the docker run command above?
---
apiVersion: v1
kind: Namespace
metadata:
  name: model-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tensorflow-test-rw-deployment
  namespace: model-test
spec:
  selector:
    matchLabels:
      app: rate-predictions-server
  replicas: 1
  template:
    metadata:
      labels:
        app: rate-predictions-server
    spec:
      containers:
      - name: rate-predictions-container
        image: aws-ecr-path
        command:
        - --model_config_file=/models/model_config.config
        - --model_config_file_poll_wait_seconds=60
        ports:
        #- grpc: 8500
        - containerPort: 8500
        - containerPort: 8501
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: rate-predictions-service
  name: rate-predictions-service
  namespace: model-test
spec:
  type: ClusterIP
  selector:
    app: rate-predictions-server
  ports:
  - port: 8501
    targetPort: 8501
What you are passing seems to be the arguments, not the command. The command is set as the entrypoint in the container, and arguments should be passed in args. Please see the following link:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
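In practice that means moving the flags from command to args in the container spec. A minimal sketch, assuming the image's entrypoint already launches the TensorFlow Serving binary:
containers:
- name: rate-predictions-container
  image: aws-ecr-path
  args:
  - --model_config_file=/models/model_config.config
  - --model_config_file_poll_wait_seconds=60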

k8s custom metric exporter json returned values

I've tried to make a custom metrics exporter for my Kubernetes cluster, and I'm still failing to get the right value in order to exploit it within an HPA, with this simplistic "a" variable.
I don't really know how to format the export for the different routes. If I return {"a":"1"}, the error message on the HPA tells me it's missing kind, then other fields.
So I've summed up the complete reproducible experiment below, to make it as simple as possible.
Any idea how to complete this task?
Thanks a lot for any clue, advice, enlightenment, comment, or notice.
apiVersion: v1
kind: ConfigMap
metadata:
  name: exporter
  namespace: test
data:
  os.php: |
    <?php
    // If anyone knows a simpler replacement than Swoole, they're welcome ---
    // somehow I think I'll only need to know what JSON output is expected for these routes.
    $server = new Swoole\HTTP\Server("0.0.0.0", 443, SWOOLE_PROCESS, SWOOLE_SOCK_TCP | SWOOLE_SSL);
    $server->set([
        'worker_num' => 1,
        'ssl_cert_file' => __DIR__ . '/example.com+5.pem',
        'ssl_key_file' => __DIR__ . '/example.com+5-key.pem',
    ]);
    $server->on('Request', 'onMessage');
    $server->start();

    function onMessage($req, $res) {
        $value = 1;
        $url = $req->server['request_uri'];
        file_put_contents('monolog.log', "\n" . $url, 8); // Log
        if ($url == '/') {
            $res->end('{"status":"healthy"}');
            return;
        } elseif ($url == '/metrics') {
            $res->end('a ' . $value);
            return;
        } elseif ($url == '/apis/custom.metrics.k8s.io/v1beta1') { // <-- This URL is called lots of times in the logs
            $res->end('{"kind": "APIResourceList","apiVersion": "v1","groupVersion": "custom.metrics.k8s.io/v1beta1","resources": [{"name": "namespaces/a","singularName": "","namespaced": false,"kind": "MetricValueList","verbs": ["get"]}]}');
            return;
        } elseif ($url == '/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter/a') {
            $res->end('{"kind": "MetricValueList","apiVersion": "custom.metrics.k8s.io/v1beta1","metadata": {"selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter-svc/a"},"items": [{"describedObject": {"kind": "Service","namespace": "test","name": "test-metrics-exporter-svc","apiVersion": "/v1"},"metricName": "a","timestamp": "2020-06-21T08:35:58Z","value": "' . $value . '","selector": null}]}');
            return;
        }
        $res->status(404);
        return;
    }
---
apiVersion: v1
kind: Service
metadata:
  name: test-metrics-exporter
  namespace: test
  annotations:
    prometheus.io/port: '443'
    prometheus.io/scrape: 'true'
spec:
  ports:
  - port: 443
    protocol: TCP
  selector:
    app: test-metrics-exporter
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-metrics-exporter
  namespace: test
spec:
  selector:
    matchLabels:
      app: test-metrics-exporter
  template:
    metadata:
      labels:
        app: test-metrics-exporter
    spec:
      terminationGracePeriodSeconds: 1
      volumes:
      - name: exporter
        configMap:
          name: exporter
          defaultMode: 0744
          items:
          - key: os.php
            path: os.php
      containers:
      - name: test-metrics-exporter
        image: openswoole/swoole:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: exporter
          mountPath: /var/www/os.php
          subPath: os.php
        command:
        - /bin/sh
        - -c
        - |
          touch monolog.log;
          apt update && apt install wget -y && wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64
          cp mkcert-v1.4.3-linux-amd64 /usr/local/bin/mkcert && chmod +x /usr/local/bin/mkcert
          mkcert example.com "*.example.com" example.test localhost 127.0.0.1 ::1
          php os.php &
          tail -f monolog.log
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine
  namespace: test
spec:
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: alpine
        image: alpine
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - |
          tail -f /dev/null
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: alpine
  namespace: test
spec:
  scaleTargetRef:
    kind: Deployment
    name: alpine
    apiVersion: apps/v1
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: test-metrics-exporter
      metricName: a
      targetValue: '1'
---
# This is the API hook for custom metrics
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
  namespace: test
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  version: v1beta1
  service:
    name: test-metrics-exporter
    namespace: test
    port: 443
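A practical way to debug a setup like this is to query the aggregated API through the apiserver and compare the returned JSON with what the HPA controller expects:
# Check the APIService registration status
kubectl get apiservice v1beta1.custom.metrics.k8s.io
# Fetch the metric exactly as the HPA would
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter/a"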

Kubernetes create StatefulSet with image pull secret?

For a Kubernetes Deployment we can specify imagePullSecrets to allow it to pull Docker images from our private registry. But as far as I can tell, StatefulSet doesn't support this?
How can I supply a pull secret to my StatefulSet?
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: {{ .Values.namespace }}
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  serviceName: redis-service
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: redis
    spec:
      terminationGracePeriodSeconds: 10
      # imagePullSecrets not valid here for StatefulSet :-(
      containers:
      - image: {{ .Values.image }}
StatefulSet supports imagePullSecrets. You can check it as follows.
$ kubectl explain statefulset.spec.template.spec --api-version apps/v1
  :
   imagePullSecrets     <[]Object>
     ImagePullSecrets is an optional list of references to secrets in the same
     namespace to use for pulling any of the images used by this PodSpec. If
     specified, these secrets will be passed to individual puller
     implementations for them to use. For example, in the case of docker, only
     DockerConfig type secrets are honored. More info:
     https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod
  :
For instance, you can first check whether the following sample StatefulSet can be created in your cluster.
$ kubectl create -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: YOUR-PULL-SECRET-NAME
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
EOF
$ kubectl get pod web-0 -o yaml | \
  grep -E '^[[:space:]]+imagePullSecrets:' -A1
  imagePullSecrets:
  - name: YOUR-PULL-SECRET-NAME
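If the pull secret doesn't exist yet, create it in the same namespace first; the registry URL and credentials below are placeholders:
kubectl create secret docker-registry YOUR-PULL-SECRET-NAME \
  --docker-server=registry.example.com \
  --docker-username=YOUR-USERNAME \
  --docker-password=YOUR-PASSWORD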

Kubernetes: Is there a way to retrieve or inject local env vars into configmap.yaml? [duplicate]

I am setting up Kubernetes for a Django web app.
I am passing an environment variable while creating the deployment, as below:
kubectl create -f deployment.yml -l key1=value1
I am getting the error below:
error: no objects passed to create
I am able to create the deployment successfully if I remove the variable -l key1=value1 while creating the deployment.
deployment.yaml is below:
#Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: sigma-service
  name: $key1
What is the reason for the above error while creating the deployment?
I used envsubst (https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html) for this. Create a deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $NAME
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then:
export NAME=my-test-nginx
envsubst < deployment.yaml | kubectl apply -f -
Not sure what OS you are using to run this. On macOS, envsubst is installed like:
brew install gettext
brew link --force gettext
This isn't the right way to use a deployment; you can't provide half the details in YAML and half in kubectl commands. If you want to pass environment variables in your deployment, you should add those details in the deployment's spec.template.spec.
You should add the following block to your deployment.yaml:
spec:
  containers:
  - env:
    - name: var1
      value: val1
This will export your environment variables inside the container.
The other way to export an environment variable is to use kubectl run (not advisable, as it is going to be deprecated very soon). You can use the following command:
kubectl run nginx --image=nginx --restart=Always --replicas=1 --env=var1=val1
The above command will create a deployment nginx with replica 1 and the environment variable var1=val1.
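Since current kubectl versions no longer create Deployments through kubectl run, a rough modern equivalent is to create the deployment and set the variable separately:
kubectl create deployment nginx --image=nginx
kubectl set env deployment/nginx var1=val1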
You cannot pass variables to kubectl create -f. YAML files should be complete manifests without variables. Also, you cannot use the -l flag with kubectl create -f.
If you want to pass environment variables to a pod, you should do it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        env:
        - name: MY_VAT
          value: MY_VALUE
        ports:
        - containerPort: 80
Read more here: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
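To confirm the variable actually reaches the container, you can exec against the deployment (names match the manifest above):
kubectl exec deployment/nginx-deployment -- printenv MY_VAT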
Follow the steps below.
Create test-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Using the sed command, you can update the deployment name at deploy time:
sed -e 's|MYAPP|my-nginx|g' test-deploy.yaml | kubectl apply -f -
File: ./deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
File: ./service.yaml
apiVersion: v1
kind: Service
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: nginx
File: ./kustomization.yaml
resources:
- deployment.yaml
- service.yaml
If you're using https://kustomize.io/, you can do this trick in a CI:
sh '( echo "images:" ; echo " - name: $IMAGE" ; echo " newTag: $VERSION" ) >> ./kustomization.yaml'
sh "kubectl apply --kustomize ."
I chose yq since it is YAML-aware and gives precise control over where text substitutions happen.
To set an image from a bash env var:
export IMAGE="your_image:latest"
yq eval '.spec.template.spec.containers[0].image = "'$IMAGE'"' manifests/daemonset.yaml | kubectl apply -f -
yq is available on MacPorts (as of 19/04/21, v4.4.1) with sudo port install yq
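A variant that avoids the shell-quoting dance uses yq v4's strenv() to read the variable directly (same file path assumed):
export IMAGE="your_image:latest"
yq eval '.spec.template.spec.containers[0].image = strenv(IMAGE)' manifests/daemonset.yaml | kubectl apply -f -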
I was facing the same problem. I created a Python script to change simple or complex values, or add values, in a YAML file.
This became very handy in a situation similar to the one you describe. Also, switching to the Python domain allows for more complex scenarios.
The code and instructions for using it are available at this gist:
https://gist.github.com/washraf/f81153270c80b0b4ecf90a53872abde7
Please try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kdpd00201
  name: frontend
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: frontend
        image: ifccncf/nginx:1.14.2
        ports:
        - containerPort: 8001
        env:
        - name: NGINX_PORT
          value: "8001"
My solution is then
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: frontend
  name: frontend
  namespace: kdpd00201
spec:
  replicas: 4
  selector:
    matchLabels:
      app: frontend
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: frontend
    spec:
      containers:
      - env: # modified level
        - name: NGINX_PORT
          value: "8080"
        image: lfccncf/nginx:1.13.7
        name: nginx
        ports:
        - containerPort: 8080

View all applied configurations of an object

The command:
kubectl apply view-last-applied -f object.yml
displays the latest applied configuration file of an object.
Does a command exist that gives the entire 'applied' history of a given object?
For example, given the created configuration (using kubectl create -f pod.spec --save-config):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
and the applied configurations (using kubectl apply -f pod.spec):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9
revision 2:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9.1
the command should give:
$ kubectl apply log -f pod.spec
applied <later date>:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9.1
applied <earlier date>:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9
No, only the latest applied configuration is persisted in the object
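It is stored in the kubectl.kubernetes.io/last-applied-configuration annotation, so you can read the raw value back directly:
kubectl get pod nginx-pod -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'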