Share host timezone with docker container - date

I'm trying to sync the timezone of a docker container with my host's. My host is using IST and the docker container (using a tomcat image) uses UTC by default. I've read that we should mount a volume to share the host's timezone:
$ docker run -t -i -p 8080:8080 -p 8090:8090 -v /etc/localtime:/etc/localtime:ro tomcat:7.0.69-jre8 /bin/bash
After that I can check that the date retrieved is the same as the host:
$ date
Fri Jul 22 13:53:45 IST 2016
When I deploy my application and try to update a date, I can see that the date 22/07/2016 is using my browser's timezone, which is the same as the host where the docker container is running. But debugging the server side of the app, I can see that the date is converted into the UTC timezone. This means that the docker container is not really using the host volume I mounted.
Am I missing anything?
Another approach I tried, which did work, was updating the timezone inside the docker container:
$ dpkg-reconfigure tzdata # selecting the corresponding options afterwards
This way I can see the same timezone on both the client side and the server side of my app.
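If you ever need to do this non-interactively (for example in a Dockerfile), a sketch of the equivalent commands, with Europe/Dublin as a stand-in for your actual zone:
$ ln -snf /usr/share/zoneinfo/Europe/Dublin /etc/localtime
$ echo "Europe/Dublin" > /etc/timezone
$ dpkg-reconfigure -f noninteractive tzdata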

After debugging and reading about date and time handling, I think it makes sense that the backend stores the date and time in UTC/GMT; that way it is independent of the client's timezone when it's saved in the DB. So it wouldn't be good practice to change the tomcat server timezone to match the host (it shouldn't really matter).
The issue I had was that the front end was sending a date and time (UTC/GMT+1) with the time set to 00:00h, and when it reached the back end, the date and time was converted to UTC/GMT, which made it 23:00 of the previous day. The persistence layer was storing just the date, which is wrong because we lose data (the time), and when we try to retrieve that record from the DB we get the previous date without the time, which is not the result we would expect.
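You can reproduce the shift with GNU date alone, assuming a UTC+1 offset like the one my front end was sending:
$ TZ=UTC date -d '2016-07-22 00:00:00 +01:00'
Thu Jul 21 23:00:00 UTC 2016
Truncating that instant to a bare date stores 21/07/2016 even though the user picked 22/07/2016.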
I hope my explanation makes sense.

Related

How does openfaas solve the time zone problem of the container in the pod?

I am currently deploying openfaas on my local virtual machine's kubernetes cluster. I found that the time zone of the container started after publishing a function is inconsistent with that of the host machine. How should I solve this problem?
[root@k8s-node-1 ~]# date
# Host time
Wed Jun 9 11:24:40 CST 2021
[root@k8s-node-1 ~]# docker exec -it 5410c0b41f7a date
# Container time
Wed Jun 9 03:24:40 UTC 2021
As @coderanger pointed out in the comments, the timezone difference is not related to OpenFaaS.
It depends on the image you are using; most images use the UTC timezone by default.
Normally this shouldn't be a problem, but in some special cases you may want to change it.
As described in this article, you can use the TZ environment variable to set the timezone of a container (there are also other ways to change the timezone).
If you have your own Dockerfile, you can use the ENV instruction to set this variable:
NOTE: The tzdata package has to be installed in the container for the TZ variable to take effect.
$ cat Dockerfile
FROM nginx:latest
RUN apt-get update && apt-get install -y tzdata
ENV TZ="Europe/Warsaw"
$ docker build -t mattjcontainerregistry/web-app-1 .
$ docker push mattjcontainerregistry/web-app-1
$ kubectl run time-test --image=mattjcontainerregistry/web-app-1
pod/time-test created
$ kubectl exec -it time-test -- bash
root@time-test:/# date
Wed Jun 9 17:22:03 CEST 2021
root@time-test:/# echo $TZ
Europe/Warsaw
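Since TZ is just an environment variable, you can also override it per container at run time instead of baking it into the image; for example with the image built above (which already has tzdata installed):
$ docker run --rm -e TZ=America/New_York mattjcontainerregistry/web-app-1 date
$ kubectl run time-test-ny --image=mattjcontainerregistry/web-app-1 --env="TZ=America/New_York"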

How to safely stop/start my postgres server when using docker-compose

I stop/start docker quite often when I am releasing new features in my application.
docker-compose up -d
docker-compose stop
I am using pretty much the bare bones postgres docker setup (see below).
I am mapping the /data folder to my host.
Is there anything I should be worried about if I stop/start docker many times in a day in terms of data getting corrupted?
Is calling docker-compose stop the best way to be stopping my postgres instance?
My postgres service in my docker-compose looks like this:
db:
  image: postgres:9.4
  volumes:
    - "/home/deploy/data/pgdata:/var/lib/postgresql/data"
  restart: always
This setup currently is running smoothly in development, but once it goes to production I want to make sure I am following best practices etc.
Use:
docker-compose down -v
What it does is basically remove all the volumes you added. If you don't, those volumes will hang around and eat up your space. It removes only the volumes Docker manages for the container; a bind-mounted directory on your host stays and survives container removal, in case you want that data to outlive the container.
Whenever you create a docker container with docker run, Docker creates a directory to keep the details about that container. After you execute docker run, if you look into /var/lib/docker/containers, you will see one directory for each container you started. If you have not removed the previous containers, you will see many directories under the containers directory, with very long names made of random letters and numbers. If you don't tell docker to remove these directories when you stop the container, they will be there forever. The -v option mentioned above deletes the associated volumes when you take the container down.
Keep in mind, you can view the contents of the directory /var/lib/docker only as a root user. To change to root user, use sudo -i before you attempt to view the contents of the directory.
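To see what -v actually removes, you can list the volumes Docker manages before and after taking the stack down (a hypothetical session):
$ docker volume ls
$ docker-compose down -v
$ docker volume ls
The named and anonymous volumes disappear from the second listing; a bind mount like /home/deploy/data/pgdata is not listed there and is never deleted.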
Databases in particular are usually designed so that it's very hard to lose data, even if the machine loses power in the middle of writing something to disk. (This comes at some performance cost.) So long as you don't have more than one PostgreSQL instance at a time using the same backing data store, I'd expect it to not lose data or otherwise corrupt itself; the worst you should expect to see is a message at startup that it's recovering from a write-ahead log or something along those lines.
docker stop will send a signal to a container that prompts it to shut down cleanly, and PostgreSQL will take this as a cue to shut down. It looks like docker-compose stop, docker-compose down, and sending ^C to docker-compose up all use the same mechanism. So the way you're doing it now should result in a clean shutdown (provided PostgreSQL finishes its cleanup within 10 seconds).
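If PostgreSQL sometimes needs longer than those 10 seconds, you can raise the timeout when stopping; the 60 below is an arbitrary example, and db is the service name from the compose file above:
$ docker-compose stop -t 60 db
Newer Compose file formats also let you set this permanently per service with a stop_grace_period key.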
I believe you can docker-compose restart specific services, or docker-compose up --force-recreate them. This would help if you rebuilt your application container and needed to restart that, but not its database.

Docker container date/time totally different to host PC

When I run a docker container on my PC it has a totally different date/time to the host PC. See commands below. The time on the container recognizerDev is for the previous day, different hour, different minutes to the host. Any idea what is going on?
PS C:\Users\Bobby> date
11 October 2016 19:51:38
PS C:\Users\Bobby> docker exec recognizerDev date
Mon Oct 10 21:43:54 UTC 2016
When I try the same thing running on an AWS linux host the date/time is correct except for a 1 hour difference due to timezones.
Note that the first command returns the correct time/date in UTC+1 (London), as per my PC. The second command says it is in UTC, but this cannot be right: if it were, it would return the same result less one hour.
This is only a partial answer (because it does not necessarily resolve the problem), but may help with diagnosis.
When you are running docker under Linux (as on your AWS host), you are just running processes on the host. That is, there isn't a substantial difference between docker run fedora ls vs running ls, except that the former has a slightly different view of system resources. The time reported in the container will always match the time reported on the host, modulo timezone settings.
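You can check this on a Linux host by printing the same instant inside and outside a container (alpine here is just a convenient small image):
$ date -u
$ docker run --rm alpine date -u
On a Linux host both commands print the same UTC time; any difference you see is purely a timezone-display issue.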
When you run docker anywhere else (e.g., under Windows or MacOS), there is an additional layer in play: docker spawns a Linux virtual machine (historically using VirtualBox, although I think they may take advantage of other options these days) and then runs docker inside that virtual machine.
Because this is effectively a different machine from your host, it is possible for the time to drift. There are various ways of solving this sort of problem, including running ntp inside the virtual machine or running special guest agents that take care of keeping time in sync with the host. I don't know enough about how Docker configures these systems to know how or if they handle this explicitly.
If your docker VM has been running for a long period of time, simply restarting it may resolve the problem. Possibly docker-machine restart is what you need.
For anyone still looking for how to correct the date and time: not sure if this will work for you, but it happened to me on Windows.
Just restart Docker, and once it has booted up it will sync the date and time with the local host machine :)
If you don't want to restart docker, the following script can sync the time by disabling and re-enabling time synchronization through Windows PowerShell:
# Locate the Docker Desktop VM in Hyper-V
$vm = Get-VM -Name DockerDesktopVM
$feature = "Time Synchronization"
# Toggle the integration service to force a resync with the host
Disable-VMIntegrationService -vm $vm -Name $feature
Enable-VMIntegrationService -vm $vm -Name $feature
You can try specifying that you want to get the time settings from the host PC by adding the following lines to the docker compose file:
volumes:
  - /etc/timezone:/etc/timezone:ro
  - /etc/localtime:/etc/localtime:ro
Restarting my PC resolved the problem for me.

Postgres 9.0 File System level backup on Debian Jessie

I'm on Debian 8.2.0 and trying to run a postgres server from a folder I received. Version is 9.0.18. Here is the command I issue:
./postgres -D /home/swapps/project/PostgreSQL/9.0/data/
but the cursor just keeps blinking in the terminal. I'm not sure what is happening.
Thanks
Sounds like it's started, and log_min_messages is set to a high enough value that you don't see any output.
Using another terminal session, connect to the server on the port it's running on. If you don't know the port, check the port value in postgresql.conf inside the data directory.
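For example, assuming the default port 5432 and the postgres superuser:
$ grep port /home/swapps/project/PostgreSQL/9.0/data/postgresql.conf
$ psql -h localhost -p 5432 -U postgres postgres
If psql connects, the server was running all along.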
Generally you should use pg_ctl -D blah -w start rather than postgres directly. See the manual.
Or, for long term use, set it up to run on startup via an init script.

Why doesn't postgres official docker repo start db service at build time?

In the context of https://github.com/docker-library/postgres (github repo) and https://registry.hub.docker.com/_/postgres/ (docker hub):
It can be seen that the database is started by ENTRYPOINT and CMD with the bash script
/docker-entrypoint.sh
with
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
Another script hook provided to change the database is
/docker-entrypoint-initdb.d
which means the database starts (and can be reached with psql) only at run time, when the docker run command is typed in.
This causes a problem: we cannot customize the database at build time, before it runs, for example to add extensions and populate the db with data.
Of course, it could be done at run time. But that has the disadvantage of repeating the operation every time a container is started from the image.
So, what is the logic behind this design from the docker or postgres perspective? How could I add extensions and populate data at build time?
If you were to customize (create, populate data in) a database at build time, that would imply that the database data is written into the docker image filesystem itself (as one cannot mount a volume at build time).
The issue with that is that the docker image filesystem is a special one (AUFS or btrfs, etc.) which doesn't deliver good I/O performance for data-intensive applications such as a database server.
As a consequence, you want your data written to a volume instead of to the docker container filesystem. As you don't know at build time which volume would be used at run time, and as there is no way anyway to mount volumes at build time, no one should create a database at build time.
Furthermore, if you take a close look at the Dockerfile of the official PostgreSQL image, you will see that there is a VOLUME instruction that makes the path at which the data is written a volume. That means that the image is designed so that the data will never hit the docker container filesystem.
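The relevant line in the official postgres Dockerfile looks like this:
VOLUME /var/lib/postgresql/data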
If you take a look at other Dockerfiles for other databases or data-intensive applications, you will notice that they all operate in this manner. Another reason for this is that it is accepted as good practice to make your docker containers immutable.
If you want to install additional modules to your image, it is fine as long as those do not depend on data that would be written on a volume, and as long as you make sure to declare a volume for any path they would write data on.
tl;dr
Application code/binary → docker image filesystem
Application data → docker volume
This is right from the docker page for the postgres image (library/postgres):
If you would like to do additional initialization in an image derived from this one, add a *.sql or *.sh script under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files and source any *.sh script found in that directory to do further initialization before starting the service.
You can also extend the image with a simple Dockerfile to set the locale. The following example will set the default locale to de_DE.utf8:
FROM postgres:9.4
RUN localedef -i de_DE -c -f UTF-8 -A /usr/share/locale/locale.alias de_DE.UTF-8
ENV LANG de_DE.utf8
Since database initialization only happens on container startup, this allows us to set the language before it is created.
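Following that pattern, a minimal sketch for adding an extension and seed data through the initdb.d hook (the file name init.sql and its contents are hypothetical):
$ cat Dockerfile
FROM postgres:9.4
COPY init.sql /docker-entrypoint-initdb.d/
$ cat init.sql
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE seed_check (id serial PRIMARY KEY, note text);
INSERT INTO seed_check (note) VALUES ('populated at first startup');
Note that these scripts run only once, when the container starts with an empty data directory, not on every run.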
You have the ability to extend an image, just as the example from the docs I pasted above shows. You can also use the exec command to execute virtually anything within the container right from your host machine. It took me a little while to get used to it, and I continue to discover things as I play with it more and more.
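For instance, to run a one-off statement against the container started in the update below:
$ docker exec -it some-postgres psql -U postgres -c 'SELECT version();'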
UPDATE: here is an example of running the image with the data directory bind-mounted from the host:
sudo docker run --name some-postgres -v ~/PATH/TO/some-postgres/data:/var/lib/postgresql/data -p 127.0.0.1:5432:5432 -e POSTGRES_PASSWORD=test -d postgres