s3fs not receiving s3 updates on ECS container - amazon-ecs

I have an ECS container running that is not receiving updates when new files are written to the S3 bucket it mounts.
Meaning, when a new file is written to the S3 bucket, I am unable to see it inside the container where the bucket is mounted.
Image:
FROM cubejs/cube:v0.29.17
RUN apt-get update
RUN apt-get -y install s3fs
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["cubejs", "server"]
entrypoint.sh:
#!/bin/bash
set -e
bucket=muhbucket
[ ! -d /cube/conf/schema ] && mkdir /cube/conf/schema
s3fs ${bucket} /cube/conf/schema -o ecs
echo "Mounted ${bucket} to /cube/conf/schema"
exec "$@"

s3fs 1.87 and later have a stat_cache_expire value of 900 seconds (15 minutes), which can delay updates. You can reduce this value, although it will make operations like readdir slower. s3fs 1.86 and older cached files forever, which made multi-client updates impossible. Some older Linux distributions like Ubuntu 20.04 still ship these older s3fs versions, so you might accidentally be using one of them.
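For example, the mount line in the entrypoint above could pass a shorter cache TTL; the 30-second value below is only an illustration, and the right number is a trade-off between how quickly new files should appear and how much readdir slowdown you can tolerate:
s3fs ${bucket} /cube/conf/schema -o ecs -o stat_cache_expire=30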

Related

Artifactory upgrade fail, postgres 9.5 -> 9.6 upgrade instructions needed

I had planned an upgrade of Artifactory from 6.7.5 to 6.8.1. As part of the upgrade I checked JFrog's repo on GitHub and it looks like they have a new recommended nginx and postgres version.
The current docker-compose is using postgres 9.5 and the new default version is 9.6. Simply pulling down postgres 9.6, however, does not do an in-place upgrade.
FATAL: database files are incompatible with server
DETAIL: The data directory was initialized by PostgreSQL version 9.5, which is not compatible with this version 9.6.11.
The upgrade instructions do not mention anything about how to do the upgrade.
The examples provided in github (https://github.com/jfrog/artifactory-docker-examples) are just examples.
Using them in production could cause issues and backwards compatibility is not guaranteed.
To get past the PostgreSQL issue when upgrading, I would suggest:
$ docker-compose -f yml-file-name.yml stop
edit the yml-file-name.yml and change the docker.bintray.io/postgres:9.6.11 to docker.bintray.io/postgres:9.5.2
$ docker-compose -f yml-file-name.yml up -d
Artifactory should upgrade successfully after following this; however, it will keep using the previous version of the PostgreSQL DB.
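For reference, after that edit the database service in the compose file would contain an image line like the following (the "postgresql" service name is an assumption and may differ in your file):
postgresql:
  image: docker.bintray.io/postgres:9.5.2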
I have been able to upgrade database using following approach:
Dump the entire database to an SQL script using the old database image; store it in a volume for future import:
# Override the PostgreSQL image so the export runs with the old binaries
printf "version: '2.1'\nservices:\n postgresql:\n image: docker.bintray.io/postgres:9.5.2\n" > image_override.yml
started_container=$(docker-compose -f artifactory-pro.yml -f image_override.yml run -d -v sql_dump_volume:/tmp/dump --no-deps postgresql)
# Dump database to a text file in a volume (to make it available for import)
docker exec "${started_container}" bash -c "until pg_isready -q; do sleep 1; done"
docker exec "${started_container}" bash -c "pg_dumpall --clean --if-exists --username=\${POSTGRES_USER} > /tmp/dump/dump.sql"
docker stop "${started_container}"
docker rm --force "${started_container}"
Back up the old database directory and prepare a new one:
mv -fv /data/postgresql /data/postgresql.old
mkdir -p /data/postgresql
chown --reference=/data/postgresql.old /data/postgresql
chmod --reference=/data/postgresql.old /data/postgresql
Run the new database image, mounting the dump script from step 1. The image's entrypoint processes SQL scripts from /docker-entrypoint-initdb.d on startup when it initializes a new database, provided the container command starts with postgres. We don't need to leave the server running afterwards, so I passed --version to make the entrypoint run the initialization, import the data, and then quit:
docker-compose -f artifactory-pro.yml run --rm --no-deps -e POSTGRES_DB=postgres -e POSTGRES_USER=root -v sql_dump_volume:/docker-entrypoint-initdb.d postgresql postgres --version
After all this was done, I was able to start Artifactory with docker-compose -f artifactory-pro.yml up -d, and it came up normally, applying the rest of the schema and file upgrade procedure as usual.
I have also prepared a script that does the above steps along with some additional checks and cleanup. Feel free to use it if you find it useful.

How to Mount Multiple CephFS on Client-Node?

I created three CephFS filesystems and tried to mount them on a client node, but didn't find any way to mount one specific CephFS. I tried
mount -t ceph mon-node:/ /mnt/apachefs/ -o mds_namespace=webfs,secret=ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
But it fails. Is there any other way to mount multiple filesystems on a client node using the kernel driver (mount.ceph) or ceph-fuse?
It is possible to specify which CephFS to mount with the following options:
-o mds_namespace ... kernel driver (mount -t ceph)
--client_mds_namespace ... FUSE client (ceph-fuse)
I am pretty sure -o mds_namespace did not work due to an old kernel version. If you are using CentOS 7, please test it with ceph-fuse 12.2.4 or a later version (with --client_mds_namespace). It worked fine in my environment.
If you are using a Debian-based system, you can install the ceph-fs-common package with apt, like: apt-get install -y ceph-fs-common.
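A minimal sketch of both variants, reusing the webfs name from the question (the monitor address and secret file path are assumptions); note that newer kernels and Ceph releases have renamed the option to fs=:
# kernel driver
mount -t ceph mon-node:/ /mnt/apachefs -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=webfs
# FUSE client
ceph-fuse /mnt/apachefs --client_mds_namespace=webfs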
ceph fs volume create nextcloud [<placement>]
ceph fs volume create okd-admin [<placement>]
#/etc/fstab
### one
10.10.20.6:6789:/folder1 /USERDATA ceph name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud,noatime,_netdev 0 2
### two
10.10.20.5:6789:/folder2 /mnt/cephfs ceph name=okd-admin,secretfile=/etc/ceph/secret-openshift.key,fs=okd-admin,noatime,_netdev 0 2

Docker Lamp Centos7: '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1

I'm starting to work with Docker to automate environments. I'm trying to build a simple LAMP stack, so the Dockerfile is the following:
FROM centos:7
ENV container=docker
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
RUN yum -y update && yum clean all
RUN yum -y install firewalld httpd mariadb-server mariadb php php-mysql php-gd php-pear php-xml php-bcmath php-mbstring php-mcrypt php-php-gettext
#Enable services
RUN systemctl enable httpd.service
RUN systemctl enable mariadb.service
#start services
RUN systemctl start httpd.service
RUN systemctl start mariadb.service
#Open firewall ports
RUN firewall-cmd --permanent --add-service=http
RUN firewall-cmd --permanent --add-service=https
RUN firewall-cmd --reload
EXPOSE 80
CMD ["/usr/sbin/init"]
so when I build the image
docker build -t myimage .
Then the build fails with the following error:
The command '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1
When I enter interactive mode (skipping the commands after RUN systemctl start httpd.service and rebuilding the image):
docker run -t -i myimage /bin/bash
And after trying to start the httpd service manually, I get the following error:
Failed to get D-Bus connection: No connection to service manager.
So, what am I doing wrong?
First of all, welcome to Docker! :-) Loads of Docker tutorials and docs are written around Ubuntu containers, but I like Centos too.
Ok, there are a couple of things to talk about here:
You're running up against a known issue with systemd-based Docker containers: they seem to need extra privileges to run, and even then a lot of extra config is required to get them working. The Red Hat team is experimenting with some fixes (mentioned in the comments), but I'm not sure where that's at.
If you wish to try getting it working, these are the best instructions I've found, but I've played with this several times in the last couple of weeks and haven't got it working yet.
What people might call "the real issue" here is that a Docker container should not be thought of as a mini virtual machine. Docker is designed to run one "root" process per container, and the container system makes it easy to compose multiple containers together - they are small on disk, light on memory usage, and easy to network together.
Here's a blog post from Docker which gives some background on this. There's also the "Docker Fundamentals" docs on Dockerizing applications and Working with containers.
So arguably the best way to proceed with the setup you're attempting to create here (though it might sound more complicated at the beginning) is to break your "stack" up into the services you need, and then use a tool like docker-compose (introduction, documentation) to create single-purpose Docker containers as required.
In your case above, you have two services, a web server and a database server. Therefore two Docker containers should work well, connected together by the database network connection. Here are some examples:
example with Symfony app, nginx and MariaDB
example with MariaDB + NodeJS
If you run one service per Docker container, you don't need to use systemd to manage them, as the Docker daemon manages each container sort of like it is a Unix process. When the process dies, the Docker container dies, and this is important because the Docker server monitors containers and can restart them automatically, or notify you.
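For illustration only, a two-container setup along those lines might look like the compose file below; the image tags, password, and paths are placeholders, not taken from the question:
version: "2"
services:
  web:
    image: php:7-apache              # placeholder web/PHP image
    ports:
      - "80:80"
    volumes:
      - ./www:/var/www/html
    depends_on:
      - db
  db:
    image: mariadb:10                # placeholder database image
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data: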
This looks like a perfect example of where my docker-systemctl-replacement would fit in. It can interpret "systemctl start httpd.service" without an active systemd around. I have done the same for some database services, but not specifically mariadb.service - maybe you could give it a try.
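A rough sketch of how such a replacement is typically wired into a Dockerfile; the script filename and path here are assumptions, so check the project's README for the exact file to copy:
# hypothetical wiring - adjust the script name/path to the project's instructions
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable httpd.service
CMD ["/usr/bin/systemctl"]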

Postgresql raises 'data directory has wrong ownership' when trying to use volume

I'm trying to run PostgreSQL in a Docker container, and of course I need my database data to be persistent, so I'm trying to use a data-only container that exposes a volume to store the database.
So, my data container has such Dockerfile:
FROM ubuntu
# Create data directory
RUN mkdir -p /data/postgresql
# Create /data volume
VOLUME /data/postgresql
Which I run:
docker run --name postgresql_data lyapun/postgresql_data true
In my postgresql.conf I set:
data_directory = '/data/postgresql'
Then I run my postgresql container in such way:
docker run -d --name postgre --volumes-from postgresql_data lyapun/postgresql
And I got:
2014-07-04 07:45:57 GMT FATAL: data directory "/data/postgresql" has wrong ownership
2014-07-04 07:45:57 GMT HINT: The server must be started by the user that owns the data directory.
How do I deal with this issue? I googled a lot for information about using PostgreSQL with Docker volumes, but I didn't find anything.
Thanks!
OK, it seems like I found a workaround for this issue.
Instead of running postgres in such way:
CMD ["/usr/lib/postgresql/9.1/bin/postgres", "-D", "/var/lib/postgresql/9.1/main", "-c", "config_file=/etc/postgresql/9.1/main/postgresql.conf"]
I wrote bash script:
chown -Rf postgres:postgres /data/postgresql
chmod -R 700 /data/postgresql
sudo -u postgres /usr/lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main -c config_file=/etc/postgresql/9.1/main/postgresql.conf
And replaced CMD in postgresql image to:
CMD ["bash", "/run.sh"]
It works!
You have to set the ownership of the /data/postgresql directory to the same user that runs your postgresql binary. For example, on Ubuntu it is usually the postgres user.
Then you have to use this command:
chown postgres:postgres /data/postgresql
A better way to solve this issue, assuming your postgres image is named "postgres" and that your backup is ./backup.tar:
First, add this to your postgres Dockerfile:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
Then run:
docker run -it --name postgres -v $(pwd):/db postgres sh -c "tar xvf /db/backup.tar --no-overwrite-dir" && \
docker run -it --name data --volumes-from postgres busybox true && \
docker rm postgres && \
docker run -it --name postgres --volumes-from=data postgres
You don't have permission issues since the archive is extracted by the postgres user of your postgres image, so it is the owner of the extracted files.
You can then back up your data using the data container. The advantage of this solution is that you don't have to chmod/chown every time you run the image.
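As a sketch of such a backup, reusing the "data" container from the commands above (the busybox image and archive name are arbitrary choices):
docker run --rm --volumes-from data -v $(pwd):/backup busybox tar cvf /backup/postgres-backup.tar /etc/postgresql /var/log/postgresql /var/lib/postgresql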
This type of error is quite common when you mount an NTFS directory into your Docker container. NTFS directories don't support ext3 file and directory access controls.
The only way to make it work is to mount a directory from an ext3 drive into your container.
I got a bit desperate when I was playing around with Apache/PHP containers and mounting the www folder. After I mounted files residing on an ext3 filesystem, the problem disappeared.
I published a short Docker tutorial on youtube, may it helps to understand this problem: https://www.youtube.com/watch?v=eS9O05TTFjM

Can't reliably create a directory on startup after EC2 instance mounts its ephemeral drives

This particular instance type mounts two ephemeral drives, in /dev/ and /mnt/. I have to create working directories for one of my services in these paths at startup, or the services won't launch. Sometimes the script below works, sometimes it does not. I suspect it's a race between the drives being mounted and my rc.local script kicking off. Is there a more reliable place I can create these directories? The last time I booted up, the /mnt/mongodb dir did get created, but the /dev/ one did not. I'm running the 12.04 HVM Ubuntu instance from Amazon.
Here is my rc.local file:
cd /mnt/
sudo mkdir mongodb
sudo chown -R mongodb mongodb
sudo chgrp -R mongodb mongodb
cd /dev/
sudo mkdir mongodb
sudo chown -R mongodb mongodb
sudo chgrp -R mongodb mongodb
sudo service mongodb start
exit 0