How to generate coredump file in alpine container - alpine-linux

I'm trying to work on an open source TSDB, TDengine, and compile it in Alpine to dockerize it. After compiling, simply running the taosd binary causes a segmentation fault (core dumped), but I can't find the core file.
I've searched around and used sysctl to set the core pattern, and ulimit -c is unlimited. But setting the sysctl key failed as shown below.
# ulimit -c
unlimited
# sysctl -w kernel.core_pattern=core-%e.%p.%h.%t
sysctl: error setting key 'kernel.core_pattern': Read-only file system
How to generate the core file in alpine?

I finally found the solution:
docker run -it --rm --ulimit core=-1 --privileged -v $PWD:/coredump <myimage> bash
Inside the container, set the core pattern and run the app:
sysctl -w kernel.core_pattern=/coredump/core-%e.%p.%h.%t
app   # core is dumped to the /coredump/ directory
Since we mounted $PWD to /coredump, the core file shows up in the current directory on the host.
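To double-check the setup inside the container before reproducing the crash, you can verify both settings (expected values match the configuration above):
ulimit -c
# expected output: unlimited
cat /proc/sys/kernel/core_pattern
# expected output: /coredump/core-%e.%p.%h.%t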


How to get access to filesystem with "kubectl debug" (ephemeral containers)?

If I do
POD=$($KUBECTL get pod -lsvc=app,env=production -o jsonpath="{.items[0].metadata.name}")
kubectl debug -it --image=mpen/tinker "$POD" -- zsh -i
I can get into a shell running inside my pod, but I want access to the filesystem for a container I've called "php". I think this should be at /proc/1/root/app but that directory doesn't exist. For reference, my Dockerfile has:
WORKDIR /app
COPY . .
So all the files should be in the root /app directory.
If I add --target=php then I get permission denied:
❯ cd /proc/1/root
cd: permission denied: /proc/1/root
How do I get access to the files?
Reading through the documentation, using kubectl debug won't give you access to the filesystem in another container.
The simplest option may be to use kubectl exec to start a shell inside an existing container. There are some cases in which this isn't an option (for example, some containers contain only a single binary and won't have a shell or other common utilities available), but a php container will typically have a complete filesystem.
In this case, you can simply:
kubectl exec -it $POD -- sh
You can replace sh by bash or zsh depending on what shells are available in the existing image.
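If the pod has more than one container, you can also target the php container from the question explicitly with kubectl's -c/--container flag, for example:
kubectl exec -it "$POD" -c php -- sh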
The linked documentation provides several other debugging options, but all involve working on copies of the pod.

Custom postgres shared_preload_libraries dir - how to configure?

I am trying to install pg_cron extension in postgres in alpine linux docker.
When running
CREATE EXTENSION pg_cron;
in psql console I get:
ERROR: could not open extension control file "/usr/local/share/postgresql/extension/pg_cron.control": No such file or directory
The problem is that the actual pg_cron.control is not under /usr/local/share/... but under /usr/share/..
Where in postgresql.conf can I define the path?
Steps taken:
docker run --name postgres-0 -e POSTGRES_PASSWORD=Password1 -p 5432:5432 -d postgres:10-alpine
docker exec -it postgres-0 /bin/bash
apk update
apk add postgresql-pg_cron --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community
cat <<EOT >> /var/lib/postgresql/data/postgresql.conf
shared_preload_libraries='pg_cron'
EOT
pg_ctl reload
PostgreSQL expects to find the extension files in the SHAREDIR/extension/ directory associated with the installation (execute pg_config --sharedir to confirm the value of SHAREDIR for your particular installation).
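For example, inside the container from the question you can compare the two locations (the first path below is taken from the error message, so treat it as illustrative):
pg_config --sharedir
# prints /usr/local/share/postgresql, i.e. where CREATE EXTENSION looks
find /usr/share -name 'pg_cron.control'
# shows where the apk package actually installed the control file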
There is, however, no facility for specifying an alternative location for extension files; it looks like something is wrong with the packaging.
I'm not familiar with Alpine Linux, but a quick Google search brings up e.g. this issue: Postgres extensions are installed into incorrect path. The suggested solution there is to use a bare Alpine Linux image and install PostgreSQL via the apk command, so you might want to try that.

Locust not generating failures.csv and exceptions.csv in distributed mode without UI in docker

When testing an API using Locust in distributed mode without the UI in Docker, distribution.csv and requests.csv are generated, but failures.csv and exceptions.csv are not, even though requests.csv shows failures as given below.
"Method","Name","# requests","# failures","Median response time","Average response time","Min response time","Max response time","Average Content Size","Requests/s"
"POST","/api/something/something",197009,56,470,559,78,156714,1,436.31
Can you please help?
The problem is that the files need to be written to a folder that Locust has permission to write to, and that is mounted as a volume on your host. If you add the mounted folder before the file name, it should work. For example:
Docker file:
# Set base image
FROM locustio/locust
ADD locustfile.py locustfile.py
Docker build command:
docker build -t mykey/myimage:1.0 .
Docker run command (shown for Windows; on Linux, replace %CD% with $PWD):
docker run --volume "%CD%:/mnt/locust" -e LOCUSTFILE_PATH=/mnt/locust/locustfile.py -e TARGET_URL=https://example.com -e LOCUST_OPTS="--clients=10 --no-web --run-time=600 --csv=/mnt/locust/output" mykey/myimage:1.0
The files will now write to the same folder where locustfile.py is located.

Replacing postgresql.conf in a docker container

I am pulling the postgres:12.0-alpine docker image to build my database. My intention is to replace the postgresql.conf file in the container to reflect the changes I want (changing the data directory, modifying backup options, etc.). I am trying with the following Dockerfile:
FROM postgres:12.0-alpine
# create the custom user
RUN addgroup -S custom && adduser -S custom_admin -G custom
# create the appropriate directories
ENV APP_HOME=/home/data
ENV APP_SETTINGS=/var/lib/postgresql/data
WORKDIR $APP_HOME
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME/entrypoint.sh
# copy postgresql.conf
COPY ./postgresql.conf $APP_HOME/postgresql.conf
RUN chmod +x /home/data/entrypoint.sh
# chown all the files to the app user
RUN chown -R custom_admin:custom $APP_HOME
RUN chown -R custom_admin:custom $APP_SETTINGS
# change to the app user
USER custom_admin
# run entrypoint.sh
ENTRYPOINT ["/home/data/entrypoint.sh"]
CMD ["custom_admin"]
my entrypoint.sh looks like
#!/bin/sh
rm /var/lib/postgresql/data/postgresql.conf
cp ./postgresql.conf /var/lib/postgresql/data/postgresql.conf
echo "replaced .conf file"
exec "$#"
But I get an error saying 'custom_admin: not found' on the exec "$@" line. What am I missing here?
In order to provide a custom configuration, please use the command below:
docker run -d --name some-postgres -v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf postgres -c 'config_file=/etc/postgresql/postgresql.conf'
Here my-postgres.conf is your custom configuration file.
Refer to the Docker Hub page for more information about the postgres image.
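If you are running things through docker-compose instead, a hedged equivalent of the same idea (the service name db and the ./my-postgres.conf path are assumptions) looks like:
services:
  db:
    image: postgres:12.0-alpine
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    volumes:
      - ./my-postgres.conf:/etc/postgresql/postgresql.conf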
Better to use the answer suggested by @Thilak; you do not need a custom image just to use a custom config.
Now, the problem with CMD ["custom_admin"] in the Dockerfile: whatever is passed to CMD is executed at the end of the entrypoint, and such a command normally refers to the main or long-running process of the container. Here custom_admin seems to be a user, not a command; you need to replace it with the process that should run as the main process of the container.
Change CMD to
CMD ["postgres"]
I would suggest modifying the official entrypoint, which does many tasks out of the box; your own entrypoint just starts the container without any DB initialization, etc.
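A minimal sketch of such a wrapper, assuming the official image's docker-entrypoint.sh is on PATH (it ships in /usr/local/bin in the official postgres images) and that the config should only be swapped once a cluster already exists in the data directory:
#!/bin/sh
set -e
# Only replace the config if the cluster has already been initialized,
# otherwise let the official entrypoint run initdb first.
if [ -s /var/lib/postgresql/data/PG_VERSION ]; then
    cp /home/data/postgresql.conf /var/lib/postgresql/data/postgresql.conf
    echo "replaced .conf file"
fi
# Hand control to the official entrypoint so its setup logic still runs.
exec docker-entrypoint.sh "$@"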

Boot2Docker (on Windows) running Mongo with shared folder (This file system is not supported)

I am trying to start a Mongo container using shared folders on Windows with Boot2Docker. When starting it with docker run -it -v /c/Users/310145787/Desktop/mongo:/data/db mongo, I get a warning message inside the container saying:
WARNING: This file system is not supported.
After starting, mongo shuts down immediately.
Any hints or tips on how to solve this?
Apparently, according to this gist and Sev (sevastos), mongo doesn't support volumes mounted through VirtualBox shared folders:
See the MongoDB Production Notes:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this operation.
The easiest solution of all, and a proper way to get data persistence, is data volumes:
Assuming you have a container that has VOLUME ["/data"]
# Create a data volume
docker create -v /data --name yourData busybox true
# and use
docker run --volumes-from yourData ...
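For the mongo image from this question, a hedged version of the same pattern (the mongoData name is just a placeholder; /data/db is mongo's data path from the question) would be:
# create a named data-volume container holding /data/db
docker create -v /data/db --name mongoData busybox true
# run mongo against that volume
docker run --volumes-from mongoData mongo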
This isn't always ideal, though (the following workaround, for Mac, is by Edward Chu (chuyik)):
I don't think it's a good solution, because the data has just moved to another container, right?
It's still inside a container rather than on the local system (the Mac disk).
I found another solution: use sshfs to map data between the boot2docker VM and your Mac, which may be better since the data is not stored inside a Linux container.
Create a directory to store data inside boot2docker:
boot2docker ssh
mkdir -p /mnt/sda1/dev
Use sshfs to link boot2docker and mac:
echo tcuser | sshfs docker@localhost:/mnt/sda1/dev <your mac dir path> -p 2022 -o password_stdin
Run image with mongo installed:
docker run -v /mnt/sda1/dev:/data/db <mongodb-image> mongod
The corresponding boot2docker issue points to docker issue 12590 (Problem with -v shared folders in 1.6 #12590), which mentions the workaround of using a double slash.
Using a double slash seems to work. I checked it locally and it works:
docker run -d -v //c/Users/marco/Desktop/data:/data <image name>
it also works with
docker run -v /$(pwd):/data
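Applied to the path from the original question, the double-slash form would be:
docker run -it -v //c/Users/310145787/Desktop/mongo:/data/db mongo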
As a workaround I just copy from a folder before the mongo daemon starts. Also, in my case I don't care about the journal files, so I only copy the database files.
I've used this command in my docker-compose.yml:
command: bash -c "(rm /data/db/*.lock && cd /prev && cp *.* /data/db) && mongod"
And every time before stopping the container I use:
docker exec <container_name> bash -c 'cd /data/db && cp $(ls *.* | grep -v *.lock) /prev'
Note: /prev is set as a volume: path/to/your/prev:/prev
Another workaround is to use mongodump and mongorestore.
in docker-compose.yml: command: bash -c "(sleep 30; mongorestore --quiet) & mongod"
in terminal: docker exec <container_name> mongodump
Note: I use sleep because I want to make sure that mongo started, and it takes a while.
I know this involves manual work etc, but I am happy that at least I got mongo with existing data running on my Windows 10 machine, and still can work on my Macbook when I want.
It seems like you don't need the data directory for MongoDB; removing those lines from your docker-compose.yml should run without problems.
The data directory is only used by mongo for caching.