Helm Chart - Camunda - kubernetes-helm

Good Morning.
I'm currently using a Helm chart to deploy Camunda inside an OpenShift namespace/cluster.
For your information, Camunda ships with a default process called "Invoice", and that process is responsible for creating a default user called "demo".
I would like to avoid that user creation, and I was able to do it through Docker with the following command:
docker run -d --name camunda -p 8080:8080 \
  -v /tmp/empty:/camunda/webapps/camunda-invoice \
  camunda/camunda-bpm-platform:latest
But now my Helm chart uses a custom values.yaml that references the Camunda image and then issues a command to start it:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
command: ['./camunda.sh']
So is it possible to get the same behavior as the Docker command shown above, i.e. to empty that webapps directory after calling camunda.sh?
I know that I can pass the "--webapps" argument through args: [ ], but the issue is that it removes the "tasklist" and "cockpit" applications that allow users to access the Camunda UI.
Thank you everyone.
Have a nice day!
EDIT:
While speaking with the Camunda team, I learned that I can pass the "--webapps --swaggerui --rest" arguments in order to start the application without the default BPMN process (Invoice).
So I'm currently trying to use multiple arguments in my Helm chart values.yaml like this:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
command: ['./camunda.sh']
args: ["--webapps", "--rest", "--swaggerui"]
Unfortunately, it's not working this way. If I pass just one argument, like "--webapps", it reads the argument and creates the container. But if I pass multiple arguments, as in the example shown above, the container is simply not created. What am I doing wrong?

The different start arguments for the Camunda 7 Run distribution are documented here: https://docs.camunda.org/manual/7.18/user-guide/camunda-bpm-run/#start-script-arguments
Here is a Helm values file example using these parameters:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
command: ['./camunda.sh']
args: ['--production','--webapps','--rest','--swaggerui']
extraEnvs:
  - name: DB_VALIDATE_ON_BORROW
    value: "false"
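For reference, assuming the chart's templates pass these values straight through (an assumption about this particular chart), the rendered container spec would combine command and args exactly the way Docker combines ENTRYPOINT and CMD:

```yaml
# Sketch of the container spec the chart is assumed to render;
# the field names are standard Kubernetes, the pass-through depends on the chart.
containers:
  - name: camunda
    image: camunda/camunda-bpm-platform:run-latest
    command: ["./camunda.sh"]
    args: ["--production", "--webapps", "--rest", "--swaggerui"]
```

If the pod still does not start, the effective command and args can be inspected with kubectl get pod <name> -o yaml to see what the chart actually rendered.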

Related

How to create a "Dockerfile" to containerize a "Flutter" app to deploy it on a Kubernetes cluster?

I am just wondering how I should create a Dockerfile for a Flutter app and then deploy it on a Kubernetes cluster.
I found the following Dockerfile and server.sh script on this website, but I am not sure whether this is the correct way of doing it:
# Install Operating system and dependencies
FROM ubuntu:22.04
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3
RUN apt-get clean
# download Flutter SDK from Flutter Github repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter environment path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Run flutter doctor
RUN flutter doctor
# Enable flutter web
RUN flutter channel master
RUN flutter upgrade
RUN flutter config --enable-web
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
RUN flutter build web
# Record the exposed port
EXPOSE 5000
# make server startup script executable and start the web server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
And:
#!/bin/bash
# Set the port
PORT=5000
# Stop any program currently running on the set port
echo 'preparing port' $PORT '...'
fuser -k 5000/tcp
# switch directories
cd build/web/
# Start the server
echo 'Server starting on port' $PORT '...'
python3 -m http.server $PORT
I did all the steps and it seems to work fine, but since I use Skaffold I don't know how/where to put the following command to automate this step as well (I have already run this command manually):
docker run -i -p 8080:5000 -td flutter_docker
I would still like to know whether the files above are the proper/official way of doing it, or whether there is a better way.
EDIT: I created the following deployment & service file to deploy the created image on a local Kubernetes Kind cluster, but when I run kubectl get pods I cannot find this image, although I do find it with docker images. Why does this happen, and how can I get it running in a Kubernetes pod instead of only as a local Docker image?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: front
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  ports:
    - name: client
      protocol: TCP
      port: 3000
      targetPort: 3000
The question (title) is misleading.
There are two parts:
How to containerize the app (in this case a Flutter app).
How to deploy the app on the k8s cluster.
To deal with the first part, you have a Dockerfile. There is room for improvement, but I think this Dockerfile should work. Then you need to build a container image; please refer to the official documentation. Finally, you need to push the created container image to some registry. (You could skip the pushing stage, but to keep things simple I am suggesting pushing the image.)
For the second part, you should be familiar with basic Kubernetes concepts. You can run the container from a previously built container image with the help of the k8s Pod object. To access the application, you need one more k8s object, and that is the Service (of LoadBalancer or NodePort type).
I know things are a bit complex at first, but please follow a good course/book. I have gone through the blog post you shared; it covers only the first part, not the second, so you will have a container image at the end of that blog post.
If you don't want to set up a k8s cluster on your own, which is again another learning curve, I suggest going through the free playground offered by Killer Shell. Skip the first tile on that page, which is just a playground; from the second tile on, they have enough material.
Improvements for Edited Question:
server.sh: maintaining a startup script is quite standard practice if you have complex logic to start the process. You could skip this file, but in that case a few steps would move into the Dockerfile.
kubectl get pods does not show images; it shows the running pods in the cluster (in the default namespace). A Kind cluster does not automatically see images from your local Docker daemon: you have to load the image into the cluster nodes (for example with kind load docker-image front) or pull it from a registry. It is not clear how you ran and connected to the cluster, so please add the output of the command.
A few pointers to improve the Dockerfile:
Use a base image with a small footprint. ubuntu:xx has many packages pre-installed that you may not need. Ubuntu also has slim images, or try to find a ready-made Flutter image.
Try to reduce the number of RUN statements. You can combine two or three commands into one, which reduces the number of layers in the image.
Instead of RUN git clone, clone the code before docker build and COPY/ADD it into the container image. This way you control which files end up in the image, and you also don't need the git tool installed in the image.
RUN ["chmod", "+x", "/app/server/server.sh"] and RUN mkdir are both unnecessary if you write the Dockerfile smartly.
A Dockerfile should be clean, crisp, and precise.
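Putting those pointers together, a slimmed-down Dockerfile might look roughly like this (untested sketch; it assumes the Flutter SDK was cloned into the build context beforehand, the package list is illustrative, and COPY --chmod requires BuildKit):

```dockerfile
# Sketch only - package names and paths are illustrative.
FROM ubuntu:22.04

# One RUN per logical step keeps the layer count low.
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl unzip python3 psmisc \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# SDK cloned on the host before `docker build`, then copied in
# (no git tool needed inside the image).
COPY flutter/ /usr/local/flutter/
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"

# WORKDIR creates /app, so no separate `RUN mkdir` is needed.
WORKDIR /app
COPY . .
RUN flutter config --enable-web && flutter build web

EXPOSE 5000
# COPY --chmod removes the need for a separate `RUN chmod`.
COPY --chmod=0755 server/server.sh /app/server/server.sh
ENTRYPOINT ["/app/server/server.sh"]
```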
PS: Sorry, but this is not a classroom. I know this is a bit complex for beginners, but please try to learn from some good sources/books.

Docker container can't download WordPress installation files automatically

I have the following compose file:
version: "3"
services:
  wordpress:
    image: visiblevc/wordpress
    privileged: true
    network_mode: bridge # should help?
    # required for mounting bindfs
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
    # required on certain cloud hosts
    security_opt:
      - apparmor:unconfined
    ports:
      - 8080:80
      - 8081:443
    # ...WP parameters omitted for brevity
When I run it, I get the following error:
When I use the CLI and curl the same URL, I can download the files.
What should I change in the yaml to make it work automatically?
UPDATE:
As curl works manually, I don't think it is a DNS resolution error, but to make sure, I modified the resolv.conf file to point to a valid nameserver address:
Unfortunately, it didn't solve the issue.
Try to put this on the image:
image: visiblevc/wordpress:latest
You need to specify the tag.
We checked the initialization shell script of the Docker image.
We couldn't determine why the resolution fails when wp core download is called, so we created a workaround.
We added a pre-init section to the script that uses the explicit curl commands which worked in the CLI. As a side effect, we download the necessary files up front, which bypasses the download step and the potential resolve timeout.
Note: we needed several curl calls against the API, as the resolution kept failing after only one call.
This is how the new script starts:
h1 Pre-init
curl "https://api.wordpress.org/core/version-check/1.7/?locale=en_US" >/dev/null
curl "https://downloads.wordpress.org/release/wordpress-5.9.3.zip" --output wordpress-5.9.3.zip
curl "https://api.wordpress.org/core/version-check/1.7/?locale=en_US" >/dev/null
curl "https://downloads.wordpress.org/plugin/classic-editor.1.6.2.zip" --output classic-editor.1.6.2.zip
curl "https://downloads.wordpress.org/translation/core/5.9.3/hu_HU.zip" --output hu_HU.zip
curl "https://api.wordpress.org/core/version-check/1.7/?locale=en_US" >/dev/null
h1 'Begin WordPress Installation'
Using this script, the installation succeeded.
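If the intermittent resolution failures persist, the explicit curl calls could also be wrapped in a small retry helper. This is our own sketch; the retry function and the attempt count are not part of the image's script:

```shell
#!/bin/bash
# Sketch only: retry a command up to 3 times with a short pause.
# The function name and the attempt count are our own choices.
retry() {
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge 3 ]; then
      echo "giving up after $n attempts: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Example usage in the pre-init section:
# retry curl -fsS "https://api.wordpress.org/core/version-check/1.7/?locale=en_US" >/dev/null
```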

localstack+ssm how to configure parameters from docker compose

I am trying to run LocalStack with SSM locally; the default port I am getting is 4566.
But when trying to init parameters via docker-compose, I just can't figure out how to do it from the docker-compose file.
This is what I have:
localstack:
  image: 'localstack/localstack'
  ports:
    - '4566:4566'
  environment:
    - SERVICES=lambda,ssm
    - DEBUG=1
    - DATA_DIR=${DATA_DIR- }
    - PORT_WEB_UI=${PORT_WEB_UI- }
    - LAMBDA_EXECUTOR=local
    - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
    - DOCKER_HOST=unix:///var/run/docker.sock
    - HOST_TMP_FOLDER=${TMPDIR}
  volumes:
    - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
    - "/var/run/docker.sock:/var/run/docker.sock"
I am trying to figure out how to pass multiple values from the docker-compose file.
I am aware it can be done afterwards via the AWS CLI:
aws --endpoint-url=http://localhost:4566 ssm put-parameter --name "/dev/some_key" --type String --value "value" --overwrite --region "us-east-1"
any thoughts?
I know this is an old question, but for anyone who is still looking for answers:
LocalStack has a few lifecycle stages which we can hook on to. In your case, since you want to create SSM parameters in LocalStack after LocalStack is ready, you would want to use the init/ready.d hook. That means: create a script with your awslocal command and mount it into /etc/localstack/init/ready.d/. If you watch the logs after LocalStack is up and ready, you will see the script being applied and the SSM parameters being created.
volumes:
  - "/path/to/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh"
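For completeness, the mounted init-aws.sh could look like this (sketch; the parameter name and value are taken from the aws cli command in the question, and awslocal is the aws-cli wrapper bundled with the LocalStack image):

```shell
#!/bin/bash
# Sketch of init-aws.sh; parameter name/value come from the question above.
awslocal ssm put-parameter \
  --name "/dev/some_key" \
  --type String \
  --value "value" \
  --overwrite \
  --region us-east-1
```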

Apply label to docker compose service depending on environment configuration?

Say I have a docker-compose.yml like so:
version: "2.1"
services:
  web:
    image: foo
  cli:
    image: bar
Upon docker-compose up, depending on the value of an environment variable, I would like to add a specific label to either the web service or the cli service, but never both.
What are some solutions for this?
EDIT: An additional stipulation is that the compose file can have an arbitrary set of services in it (i.e. the set of services is not constant, it is variable).
You might want to split your compose.yml file and add some shell scripting around Docker to achieve this.
You could create a bash script that checks your environment variable and passes the appropriate yml files to the docker-compose up command it calls.
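As a sketch of that wrapper idea (the override file names and the LABEL_TARGET variable are illustrative, not from the question; each override file would add the label to exactly one service):

```shell
#!/bin/bash
# Pick a compose override file based on an environment variable.
# File names and the LABEL_TARGET variable are illustrative only.
pick_overlay() {
  if [ "$1" = "web" ]; then
    echo "docker-compose.web-label.yml"
  else
    echo "docker-compose.cli-label.yml"
  fi
}

# Usage: merge the base file with the chosen override:
#   docker-compose -f docker-compose.yml -f "$(pick_overlay "$LABEL_TARGET")" up -d
```

This relies on docker-compose's standard multiple -f merge behavior, so it also works when the base file contains an arbitrary set of services.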

How to run ad hoc docker compose commands in Ansible?

I have to run several docker-compose run commands for my Phoenix web app project. From the terminal, I have to run these:
$ sudo docker-compose run web mix do deps.get, compile
$ sudo docker-compose run web mix ecto.create
$ sudo docker-compose run web mix ecto.migrate
While this works fine, I would like to automate it using Ansible. I'm well aware that there is the docker_service Ansible module, which consumes the docker-compose API, and I'm also aware of the definition option that makes it easy to integrate the configuration inside docker-compose.yml into my playbook.
What I don't know is how to ensure that the commands above are run before starting the containers. Can anyone help me with this issue?
I faced a similar situation and found no way to run docker-compose run commands via the Docker-dedicated modules for Ansible. However, I ended up using Ansible's shell module with success for my purposes. Here are some examples, adapted to your situation.
One by one, explicit way
- name: Run mix deps.get and compile
  shell: docker-compose run web mix do deps.get, compile
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  become: True # because you're using sudo

- name: Run mix ecto.create
  shell: docker-compose run web mix ecto.create
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  become: True

- name: Run mix ecto.migrate
  shell: docker-compose run web mix ecto.migrate
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  become: True
Equivalent way, but shorter
- name: Run mix commands
  shell: docker-compose run web mix "{{ item }}"
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  loop:
    - "do deps.get, compile"
    - "ecto.create"
    - "ecto.migrate"
  become: True
To run those commands before starting the other containers defined in the docker-compose.yml file, maybe a combination of these points can help:
Use docker volumes to persist the results of getting dependencies, compilation and Ecto commands
Use the depends_on configuration option inside the docker-compose.yml file
Use the service parameter of Ansible's docker_service module in your playbook to run only a subset of containers
Use disposable containers with your docker-compose run commands, via the --rm option and possibly with the --no-deps option
In your playbook, execute your docker-compose run commands before the docker_service task
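Combining those points, the ordering in a playbook could look roughly like this (sketch; the placeholder path is the same one used above, and the --rm flag implements the disposable-container point):

```yaml
# Sketch: run the one-off commands first, then bring the stack up.
- name: Run one-off mix commands in disposable containers
  shell: docker-compose run --rm web mix "{{ item }}"
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  loop:
    - "do deps.get, compile"
    - "ecto.create"
    - "ecto.migrate"
  become: True

- name: Bring up the remaining services
  docker_service:
    project_src: /path/to/directory/having/your/docker-compose.yml
  become: True
```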
Some notes:
I'm using Ansible 2.5 at the moment of writing this answer.
I'm assuming that the docker-compose binary is already installed, working fine, and available on the standard system PATH on the managed host.
The docker-compose.yml file already exists and has the path /path/to/directory/having/your/docker-compose.yml, as used in the examples. A variable for that file path could also be used.
That's it!