I've been trying to install a custom pack on a single-node K8s cluster using these links:
https://github.com/StackStorm/st2packs-dockerfiles/
https://github.com/stackstorm/stackstorm-ha
StackStorm installed successfully with the default dashboard, but when I build a custom pack and run helm upgrade, it doesn't work.
Here are my StackStorm pack directory and the image Dockerfile:
/home/manisha.tanwar/st2packs-dockerfiles # ll st2packs-image/packs/st2_chef/
total 28
drwxr-xr-x. 4 manisha.tanwar domain users 4096 Apr 28 16:11 actions
drwxr-xr-x. 2 manisha.tanwar domain users 4096 Apr 28 16:11 aliases
-rwxr-xr-x. 1 manisha.tanwar domain users 211 Apr 28 16:11 pack.yaml
-rwxr-xr-x. 1 manisha.tanwar domain users 65 Apr 28 16:11 README.md
-rwxr-xr-x. 1 manisha.tanwar domain users 293 Apr 28 17:47 requirements.txt
drwxr-xr-x. 2 manisha.tanwar domain users 4096 Apr 28 16:11 rules
drwxr-xr-x. 2 manisha.tanwar domain users 4096 Apr 28 16:11 sensors
/home/manisha.tanwar/st2packs-dockerfiles # cat st2packs-image/Dockerfile
FROM stackstorm/st2packs:builder AS builder
# ARG must be declared inside the build stage; before the first FROM it is
# only visible to FROM lines and ${PACKS} would expand empty in RUN below.
ARG PACKS="file:///tmp/stackstorm-st2"
COPY packs/st2_chef /tmp/stackstorm-st2/
RUN ls -la /tmp/stackstorm-st2
RUN git config --global http.sslVerify false
# Install custom packs
RUN /opt/stackstorm/st2/bin/st2-pack-install ${PACKS}
###########################
# Minimize the image size. Start with alpine:3.8,
# and add only packs and virtualenvs from builder.
FROM stackstorm/st2packs:runtime
The image is created with the command:
docker build -t st2_chef:v0.0.2 st2packs-image
And then I changed values.yaml as below:
packs:
  configs:
    packs.yaml: |
      ---
      # chef pack
  image:
    name: st2_chef
    tag: 0.0.1
    pullPolicy: Always
Then I ran:
helm upgrade <release-name> .
but nothing shows up on the dashboard or in the CLI.
Please help; we are planning to move from standalone StackStorm to StackStorm HA, and I need to get a POC done for that.
Thanks in advance!!
Got it working with the help of the community. Here's the link if anyone wants to follow:
https://github.com/StackStorm/stackstorm-ha/issues/128
I wasn't pushing the image to a Docker registry and referencing it in the Helm configuration.
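For reference, the missing step looks roughly like this. This is a sketch, not the exact commands from the issue; the "manishatanwar" Docker Hub account and the tag are assumptions taken from the values below, so substitute your own registry and names:

```shell
REGISTRY_USER="manishatanwar"   # assumption: your registry account
IMAGE="st2_chef"
TAG="0.0.1"
# Tag the locally built image for the registry and push it so cluster
# nodes can pull it (falls through gracefully if docker is unavailable):
docker tag "${IMAGE}:v0.0.2" "${REGISTRY_USER}/${IMAGE}:${TAG}" \
  && docker push "${REGISTRY_USER}/${IMAGE}:${TAG}" \
  || echo "docker tag/push skipped"
```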
Updated values.yaml as:
packs:
  # Custom StackStorm pack configs. Each record creates a file in '/opt/stackstorm/configs/'
  # https://docs.stackstorm.com/reference/pack_configs.html#configuration-file
  configs:
    core.yaml: |
      ---
  image:
    # Uncomment the following block to make the custom packs image available to the necessary pods
    #repository: your-remote-docker-registry.io
    repository: manishatanwar
    name: st2_nagios
    tag: "0.0.1"
    pullPolicy: Always
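With the image pushed, applying the updated values and verifying might look like the sketch below. The release name, chart reference, and the `<release>-st2client` deployment name are assumptions; adjust them to your install:

```shell
RELEASE="st2ha"   # placeholder release name
# Re-deploy with the updated values, then check that the pack registered:
helm upgrade "$RELEASE" stackstorm/stackstorm-ha -f values.yaml \
  || echo "helm upgrade skipped"
kubectl exec deploy/"${RELEASE}-st2client" -- st2 pack list \
  || echo "verification skipped"
```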
Hi there, I'm having trouble adding a third-party module to a custom Kubernetes NGINX-Ingress image, and I haven't been able to figure out how to do it.
I've seen one other person with this problem; they seem to have compiled it from scratch, but they don't provide details on how they added the file.
I'm installing the ingress controller via Helm, with a simple values.yaml file to make the alterations shown below:
values.yaml
controller:
  image:
    registry: [registry]
    repository: [repo]
    image: [image]
    tag: 1.0.2
    pullPolicy: Always
  config:
    entries:
      main-snippets: "load_module /etc/nginx/modules/ngx_http_redis2_module.so"
prometheus:
  create: true
Helm accepted this on install and pulled the custom image into the ingress controller pod/container. I was able to dynamically compile the module into a .so file, which I'm keeping locally for now, to add into the custom image. But when building a Dockerfile for this custom image, I can't seem to get the module file added.
Dockerfile
FROM nginx/nginx-ingress:2.1.0
#Adds nginx redis2 module
USER root
COPY ngx_http_redis2_module.so /etc/nginx/modules/ngx_http_redis2_module.so
USER www-data
The Dockerfile above attempts to add the file to the proper place, /etc/nginx/modules, alongside the other modules. But after the pod starts, opening a shell in it shows only the following:
-rwxr-xr-x 1 www-data www-data 164256 Sep 26 17:40 ngx_http_auth_digest_module.so
-rwxr-xr-x 1 www-data www-data 5388256 Sep 26 17:40 ngx_http_brotli_filter_module.so
-rwxr-xr-x 1 www-data www-data 78152 Sep 26 17:40 ngx_http_brotli_static_module.so
-rwxr-xr-x 1 www-data www-data 100704 Sep 26 17:40 ngx_http_geoip2_module.so
-rwxr-xr-x 1 www-data www-data 113056 Sep 26 17:40 ngx_http_influxdb_module.so
-rwxr-xr-x 1 www-data www-data 267080 Sep 26 17:40 ngx_http_modsecurity_module.so
-rwxr-xr-x 1 www-data www-data 2468184 Sep 26 17:40 ngx_http_opentracing_module.so
-rwxr-xr-x 1 www-data www-data 74856 Sep 26 17:40 ngx_stream_geoip2_module.so
It seems I'm building the image incorrectly, or something else entirely is wrong; any help would be appreciated.
So I found that I was doing a few things wrong; thanks to @jordanm for helping me out here.
I was able to inspect my image with docker run -it --entrypoint=/bin/sh. The commands adding the file were correct, but I was doing a few things incorrectly. First, I was using the wrong base image; it should have been the controller image. Here is the proper Dockerfile:
Dockerfile
FROM k8s.gcr.io/ingress-nginx/controller:v1.1.1
#Adds nginx redis2 module
USER root
COPY --chmod=744 ngx_http_redis2_module.so /etc/nginx/modules/ngx_http_redis2_module.so
USER www-data
I also had to tweak my values.yml file for Helm a bit:
values.yml
controller:
  image:
    registry: [registry]
    image: [image]
    tag: "1.0.5"
    digest: "sha256:[sha value]"
    pullPolicy: Always
  config:
    main-snippets: "load_module /etc/nginx/modules/ngx_http_redis2_module.so;"
prometheus:
  create: true
I had to add the digest value, which didn't seem to be mentioned in the Helm instructions. To get the SHA256 value, run docker inspect --format='{{index .RepoDigests 0}}' [IMAGE]:[TAG] and use that output as the digest value.
What I hadn't noticed was that the container was failing so fast that a replacement was spun up before I even saw it. So on my test control plane/node I uninstalled NGINX first, then ran sudo docker system prune -af, which removed unused images. That way I could guarantee my image was being pulled and deployed, rather than reverting to a different image.
I don't know why, but the pod description stated it was deploying my image while, I believe, it was using another image under the hood.
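The clean-slate routine described above, sketched for a single test node; the release name and chart reference are placeholders, not the ones from my cluster:

```shell
RELEASE="nginx-ingress"   # placeholder release name
# 1. Remove the running release, 2. prune cached images so the next
# deploy must pull fresh, 3. reinstall with the custom image values:
helm uninstall "$RELEASE"      || echo "uninstall skipped"
sudo docker system prune -af   || echo "prune skipped"
helm install "$RELEASE" ingress-nginx/ingress-nginx -f values.yml \
  || echo "install skipped"
```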
I am trying to configure Fluentd to send logs to Elasticsearch. After configuring it, I could not see any pod logs in Elasticsearch.
While debugging, I noticed there are no logs on the node under /var/log/pods:
cd /var/log/pods
ls -la
drwxr-xr-x. 34 root root 8192 Dec 9 12:26 .
drwxr-xr-x. 14 root root 4096 Dec 9 02:21 ..
drwxr-xr-x. 3 root root 21 Dec 7 03:14 pod1
drwxr-xr-x. 6 root root 119 Dec 7 11:17 pod2
cd pod1/containerName
ls -la
total 0
drwxr-xr-x. 2 root root 6 Dec 7 03:14 .
drwxr-xr-x. 3 root root 21 Dec 7 03:14 ..
But I can see the logs when executing kubectl logs pod1.
According to the documentation, logs should be in this path. Do you have any idea why no logs are stored on the node?
I found what was happening. The problem was related to the log driver; it was configured to send logs to journald:
docker inspect -f '{{.HostConfig.LogConfig.Type}}' ID
journald
I changed it to json-file, and now logs are written to /var/log/pods.
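For anyone hitting the same thing, the switch is made in Docker's daemon.json. The sketch below stages the file in /tmp so it is safe to run as-is; on the node you would copy it into place as root and restart Docker (note that already-running containers keep their old log driver until recreated):

```shell
# Stage the daemon config; copy to /etc/docker/daemon.json on the node:
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
# sudo cp /tmp/daemon.json /etc/docker/daemon.json
# sudo systemctl restart docker
# docker info --format '{{.LoggingDriver}}'   # should report: json-file
```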
Here are the different logging configuration options.
I'm running a WSL2 Ubuntu terminal with Docker for Windows, and every time I run docker-compose up, the permissions of the folder that contains the project get changed.
Before running docker-compose:
drwxr-xr-x 12 cesarvarela cesarvarela 4096 Jun 24 15:37 .
After:
drwxr-xr-x 12 999 cesarvarela 4096 Jun 24 15:37
This prevents me from changing git branches, editing files, etc. I have to chown the folder back to my user, but I would like not to have to do this every time.
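The UID 999 is a user inside a container writing to the bind mount. A common workaround sketch, assuming the image tolerates running as a non-root user ("app" below is a placeholder service name):

```shell
# Export your host UID/GID so compose can run the container as you:
export HOST_UID="$(id -u)" HOST_GID="$(id -g)"
# One-off repair of the already-changed ownership:
# sudo chown -R "${HOST_UID}:${HOST_GID}" .
# In docker-compose.yml, under the affected service ("app"), add:
#   user: "${HOST_UID}:${HOST_GID}"
# then bring it up as usual:
# docker-compose up
```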
I have followed the steps below to monitor Kafka with Prometheus and Grafana, but the JMX port does not get opened:
wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
tar -xzf kafka_*.tgz
cd kafka_*
wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.6/jmx_prometheus_javaagent-0.6.jar
wget https://raw.githubusercontent.com/prometheus/jmx_exporter/master/example_configs/kafka-0-8-2.yml
./bin/zookeeper-server-start.sh config/zookeeper.properties &
KAFKA_OPTS="$KAFKA_OPTS -javaagent:$PWD/jmx_prometheus_javaagent-0.6.jar=7071:$PWD/kafka-0-8-2.yml"
./bin/kafka-server-start.sh config/server.properties &
Then I checked with curl http://localhost:7071/metrics in the terminal; it reports:
curl: (7) Failed to connect to localhost:7071; Connection refused
Currently, all ports on the server are open to my network. When I check with netstat -tupln | grep LISTEN, port 7071 is not listed in the output.
The kafka directory contains the following:
drwxr-xr-x. 3 root root 4096 Aug 23 12:22 bin
drwxr-xr-x. 2 root root 4096 Oct 15 2016 config
-rw-r--r--. 1 root root 20356 Aug 21 10:50 hs_err_pid1496.log
-rw-r--r--. 1 root root 19432 Aug 21 10:55 hs_err_pid2447.log
-rw-r--r--. 1 root root 1225418 Feb 5 2016 jmx_prometheus_javaagent-0.6.jar
-rw-r--r--. 1 root root 2824 Aug 21 10:48 kafka-0-8-2.yml
drwxr-xr-x. 2 root root 4096 Aug 21 10:48 libs
-rw-r--r--. 1 root root 28824 Oct 5 2016 LICENSE
drwxr-xr-x. 2 root root 4096 Oct 11 15:05 logs
-rw-------. 1 root root 8453 Aug 23 12:08 nohup.out
-rw-r--r--. 1 root root 336 Oct 5 2016 NOTICE
drwxr-xr-x. 2 root root 46 Oct 15 2016 site-docs
Kafka is running on port 2181, and ZooKeeper is also running.
If you do not mind opening up the JMX port, you can also do it like this:
export JMX_PORT=9999
export KAFKA_JMX_OPTS='-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=9999'
./bin/kafka-server-start.sh config/server.properties &
java -jar jmx_prometheus_httpserver-0.10-jar-with-dependencies.jar 9300 kafka-0-8-2.yaml &
You build the jar-with-dependencies from source with mvn package.
I had the same problem when setting the KAFKA_OPTS environment variable in bash. The worst case is adding the variable to the ~/.profile file. The problem with this approach is that KAFKA_OPTS is used by both kafka-server-start.sh and zookeeper-server-start.sh, so when you start ZooKeeper, it takes port 7071 for exporting metrics. Then, when you run Kafka, you get a "7071 port is in use" error.
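In other words, scoping the variable to the Kafka command alone avoids the clash. A sketch using the paths from the question:

```shell
AGENT_JAR="$PWD/jmx_prometheus_javaagent-0.6.jar"
AGENT_CONF="$PWD/kafka-0-8-2.yml"
# The assignment prefixes only this one command, so
# zookeeper-server-start.sh never sees the agent and port 7071
# stays free for Kafka:
KAFKA_OPTS="-javaagent:${AGENT_JAR}=7071:${AGENT_CONF}" \
  ./bin/kafka-server-start.sh config/server.properties \
  || echo "kafka start skipped"
```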
I solved the problem by setting the environment in the systemd service file. I described it in my article last week:
[Unit]
...
[Service]
...
Restart=no
Environment=KAFKA_OPTS=-javaagent:/home/morteza/myworks/jmx_prometheus_javaagent-0.9.jar=7071:/home/morteza/myworks/kafka-2_0_0.yml
[Install]
...
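After editing the unit file, reload systemd and restart the service, then probe the endpoint to confirm the agent bound the port. The service name "kafka" is an assumption; the probe falls through with a hint when nothing is listening:

```shell
# sudo systemctl daemon-reload && sudo systemctl restart kafka
METRICS_URL="http://localhost:7071/metrics"
# Probe the exporter; prints a hint if nothing is listening yet:
curl -sf --max-time 2 "$METRICS_URL" \
  || echo "exporter not reachable at $METRICS_URL"
```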
I'm using ThoughtWorks Go (GoCD) for a build pipeline as shown below:
The "Test" stage fetches artefacts from the build stage and runs each of its jobs in parallel (unit tests, integration tests, acceptance tests, package) on different agents. Each of those jobs is a shell script.
When those tasks run on a different agent, they fail because permission is denied. When I SSH into the agent, I can see the scripts do not have executable permissions, as shown below:
drwxrwxr-x 2 go go 4096 Mar 4 09:48 .
drwxrwxr-x 8 go go 4096 Mar 4 09:48 ..
-rw-rw-r-- 1 go go 408 Mar 4 09:48 aa_tests.sh
-rw-rw-r-- 1 go go 443 Mar 4 09:48 Dockerfile
-rw-rw-r-- 1 go go 121 Mar 4 09:48 run.sh
However, in the git repository they have executable permission, and they execute fine on the build agent that clones the repository.
I solved the problem by executing the script with bash, e.g. bash scriptname.sh, as the command for the task.
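For completeness, the underlying behaviour is easy to demo locally: a file fetched without its execute bit still runs under bash, and chmod +x (as a task before the run) restores direct execution. The paths below are scratch files, not the real pipeline scripts:

```shell
# Simulate a fetched artefact that lost its execute bit:
printf '#!/bin/sh\necho ok\n' > /tmp/run_demo.sh
chmod 644 /tmp/run_demo.sh
bash /tmp/run_demo.sh      # works even without the execute bit
chmod +x /tmp/run_demo.sh  # alternative fix: restore the bit in a task
/tmp/run_demo.sh           # now runs directly
```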