docker revert changes to container - import

I'm trying to snapshot my docker container so that I can revert back to a single point in time.
I've looked at docker save and docker export but neither of these seems to do what I'm looking for. Am I missing something?

You might want to use docker commit. This command will create a new docker image from one of your docker containers. This way you can easily create a new container later on based on that new image.
Be aware that the docker commit command won't save any data stored in Docker data volumes. For those you need to make backups.
For instance if you are working with the following Dockerfile which declares a volume and will write the date every 5 seconds to two files (one being in the volume, the other not):
FROM base
VOLUME /data
CMD while true; do date >> /data/foo.txt; date >> /tmp/bar.txt; sleep 5; done
Build an image from it:
$ docker build --force-rm -t so-26323286 .
and run a new container from it:
$ docker run -d so-26323286
Wait a bit so that the running container has a chance to write the date to the two files a couple of times.
$ docker ps
CONTAINER ID        IMAGE                COMMAND                CREATED         STATUS         PORTS   NAMES
07b094be1bb2        so-26323286:latest   "/bin/sh -c 'while t   5 seconds ago   Up 5 seconds           agitated_lovelace
Then commit your container into a new image so-26323286:snapshot1:
$ docker commit agitated_lovelace so-26323286:snapshot1
You can now see that you have two images available:
$ docker images | grep so-26323286
so-26323286 snapshot1 03180a816db8 19 seconds ago 175.3 MB
so-26323286 latest 4ffd141d7d6f 9 minutes ago 175.3 MB
Now let's verify that a new container run from so-26323286:snapshot1 would have the /tmp/bar.txt file:
$ docker run --rm so-26323286:snapshot1 cat /tmp/bar.txt
Sun Oct 12 09:00:21 UTC 2014
Sun Oct 12 09:00:26 UTC 2014
Sun Oct 12 09:00:31 UTC 2014
Sun Oct 12 09:00:36 UTC 2014
Sun Oct 12 09:00:41 UTC 2014
Sun Oct 12 09:00:46 UTC 2014
Sun Oct 12 09:00:51 UTC 2014
And observe that such a container does not have any /data/foo.txt file (since /data is a data volume):
$ docker run --rm so-26323286:snapshot1 cat /data/foo.txt
cat: /data/foo.txt: No such file or directory
Finally, if you want to access the /data/foo.txt file which is in the first (still running) container, you can use the docker run --volumes-from option:
$ docker run --rm --volumes-from agitated_lovelace base cat /data/foo.txt
Sun Oct 12 09:00:21 UTC 2014
Sun Oct 12 09:00:26 UTC 2014
Sun Oct 12 09:00:31 UTC 2014
Sun Oct 12 09:00:36 UTC 2014
Sun Oct 12 09:00:41 UTC 2014
Sun Oct 12 09:00:46 UTC 2014
Sun Oct 12 09:00:51 UTC 2014
Sun Oct 12 09:00:56 UTC 2014
Sun Oct 12 09:01:01 UTC 2014
Sun Oct 12 09:01:06 UTC 2014
Sun Oct 12 09:01:11 UTC 2014
Sun Oct 12 09:01:16 UTC 2014
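Since docker commit skips volumes, one common way to back up the /data volume itself is the same --volumes-from trick combined with tar. This is only a sketch: it assumes the base image ships tar, and the archive name and the $(pwd) bind mount are purely illustrative:
$ docker run --rm --volumes-from agitated_lovelace -v $(pwd):/backup base tar cvf /backup/data-volume.tar /data
The resulting data-volume.tar can later be extracted the same way into the volume of a freshly created container.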

Here is an example of how to do it with the hello-world image from Docker Hub.
First, run the hello-world image, which also downloads it:
docker run hello-world
Then get the hash of the image you want to tag:
docker history hello-world
You will see something like:
IMAGE CREATED
fce289e99eb9 15 months ago
fce289e99eb9 is your hash-code.
To tag this image, you run:
docker tag fce289e99eb9 hello-world:SNAPSHOT-1.0
To list all the tags for a repository, use:
docker image ls hello-world
And you will get something like:
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world SNAPSHOT-1.0 fce289e99eb9 15 months ago 1.84kB
hello-world latest fce289e99eb9 15 months ago 1.84kB
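A container started from the tagged snapshot behaves exactly like one started from the original image; for hello-world that simply means printing its greeting again:
docker run hello-world:SNAPSHOT-1.0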

Related

Why can't Linux read hwclock on some month shifts?

We have a Linux system that we are building with Yocto.
We can read our hardware clock after reboots and change both system time and hardware time without any error (most of the time). However, after certain month shifts, every year that we have tried, we run into this error: "hwclock: RTC_RD_TIME: Invalid argument".
Example 1:
root#:~# date
Thu Apr 30 23:59:50 UTC 2020
root#:~# hwclock
Thu Apr 30 23:59:52 2020 0.000000 seconds
root#:~#
root#:~#
root#:~# date
Fri May 1 00:00:10 UTC 2020
root#:~# hwclock
hwclock: RTC_RD_TIME: Invalid argument
root#:~#
This does not happen at every month shift; if I do the same test in January, Linux can read the hwclock without any issues. It also does not matter whether the unit is powered or not. If I set the hwclock to the first of May 00:00:00, it can keep track of the time.
The same error occurs on the following month shifts:
Feb (it does not matter if it is leap year or not) -> Mar
Apr -> May
Jun -> Jul
Sep -> Oct
Nov -> Dec
Dec (not sure whether it is because of the new year or the new month) -> Jan
In my understanding, this is happening because rtc-lib.c cannot verify the time correctly.
I have tried this on multiple different hardware units.
Does anyone have any idea what might cause this?
Solution:
The fault was not in rtc-lib.c. The cause of the error was a faulty RTC implementation. The RTC month value is 1-indexed, but the kernel assumes it is 0-indexed. Added a patch for this to rtc-[my_rtc_model].c and now it seems to be working.
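If you want to reproduce the problem (or verify a patched driver), a quick sketch along the lines of the commands above should work; it assumes the util-linux/busybox date and hwclock tools, and the exact timestamp is only an example:
root#:~# date -s "2020-04-30 23:59:50"
root#:~# hwclock -w    # copy the system time into the RTC
root#:~# sleep 30
root#:~# hwclock -r    # on an unpatched driver this read fails with RTC_RD_TIME: Invalid argument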

Change date inside container

I have a problem with dynamically changing the date inside a container. What I have:
docker run -it --privileged myDocker
The command above runs the container with the privileges needed to change the date.
I can change the date with:
date --set "12-12-12"
but after a few seconds, the date and time are set back to the same as the host:
root@660acd776c6b:/# date --set "12-12-12"
root@660acd776c6b:/# date
Wed Dec 12 00:00:01 UTC 2012
root@660acd776c6b:/# date
Wed Dec 12 00:00:02 UTC 2012
root@660acd776c6b:/# date
Wed Dec 12 00:00:02 UTC 2012
root@660acd776c6b:/# date
Tue Jan 15 10:14:26 UTC 2019
root@660acd776c6b:/# date
Tue Jan 15 10:14:27 UTC 2019
The container doesn't have ntp installed.
I can't use faketime because I use the date in a .NET application which doesn't pick up the time from faketime, and I can't change the system clock using faketime.
How can I disable time synchronization between the container and the host?

systemd: seems like ExecStop script is executed immediately after the start command is run

I am trying to start a docker-compose project as a systemd service on RHEL 7. Here is my systemd script (/etc/systemd/system/wp.service):
[Unit]
Description=wp service with docker compose
Requires=docker.service
After=docker.service
[Service]
EnvironmentFile=/home/ec2-user/projects/wp/project-dir/vars.env
WorkingDirectory=/home/ec2-user/projects/wp/project-dir
# ExecStartPre=/usr/bin/docker-compose down
ExecStart=/usr/bin/docker-compose up -d --build --remove-orphans
# ExecStop=/usr/bin/docker-compose down
[Install]
WantedBy=multi-user.target
When I execute the following command:
sudo systemctl start wp.service
Everything works fine - the containers run and stay running. Here is the output of sudo systemctl status wp.service
Aug 15 03:07:22 ip-172-31-33-87.ec2.internal docker-compose[4185]: ---> Using cache
Aug 15 03:07:22 ip-172-31-33-87.ec2.internal docker-compose[4185]: ---> 7392974149d3
Aug 15 03:07:22 ip-172-31-33-87.ec2.internal docker-compose[4185]: Successfully built 7392974149d3
Aug 15 03:07:22 ip-172-31-33-87.ec2.internal docker-compose[4185]: Successfully tagged foo_wp:latest
Aug 15 03:07:22 ip-172-31-33-87.ec2.internal docker-compose[4185]: Creating mysql ...
Aug 15 03:07:22 ip-172-31-33-87.ec2.internal docker-compose[4185]: [55B blob data]
Aug 15 03:07:23 ip-172-31-33-87.ec2.internal docker-compose[4185]: [37B blob data]
and the containers are up:
[ec2-user@ip-172-31-33-87 ~]$ sudo docker container ls -a
CONTAINER ID   IMAGE                   COMMAND                  CREATED              STATUS              PORTS                  NAMES
579b52c8e3bc   foo_wp                  "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:9101->80/tcp   wp
3c418cfe2b9c   mariadb:10.3.8-bionic   "docker-entrypoint.s…"   About a minute ago   Up About a minute   3306/tcp               mysql
If, however, I uncomment the ExecStop line above (and run docker-compose down and reload the service), then the containers are removed after they are run. The output of the status command is:
Loaded: loaded (/etc/systemd/system/wp.service; disabled; vendor preset: disabled)
Active: deactivating (stop) since Wed 2018-08-15 03:12:12 UTC; 7s ago
Process: 4862 ExecStart=/usr/bin/docker-compose up -d --build --remove-orphans (code=exited, status=0/SUCCESS)
Main PID: 4862 (code=exited, status=0/SUCCESS); : 5165 (docker-compose)
Tasks: 2
Memory: 19.0M
CGroup: /system.slice/wp.service
└─control
└─5165 /usr/bin/python2 /usr/bin/docker-compose down
Aug 15 03:12:11 ip-172-31-33-87.ec2.internal docker-compose[4862]: Step 3/3 : COPY wordpress/ /var/www/html/
Aug 15 03:12:11 ip-172-31-33-87.ec2.internal docker-compose[4862]: ---> Using cache
Aug 15 03:12:11 ip-172-31-33-87.ec2.internal docker-compose[4862]: ---> 7392974149d3
Aug 15 03:12:11 ip-172-31-33-87.ec2.internal docker-compose[4862]: Successfully built 7392974149d3
Aug 15 03:12:11 ip-172-31-33-87.ec2.internal docker-compose[4862]: Successfully tagged foo_wp:latest
Aug 15 03:12:11 ip-172-31-33-87.ec2.internal docker-compose[4862]: Creating mysql ...
Aug 15 03:12:11 ip-172-31-33-87.ec2.internal docker-compose[4862]: [55B blob data]
Aug 15 03:12:12 ip-172-31-33-87.ec2.internal docker-compose[4862]: [37B blob data]
Aug 15 03:12:12 ip-172-31-33-87.ec2.internal docker-compose[5165]: Stopping wp ...
Aug 15 03:12:12 ip-172-31-33-87.ec2.internal docker-compose[5165]: Stopping mysql ...
and the containers have been removed:
sudo docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[ec2-user@ip-172-31-33-87 foo]$
It seems as though the systemd service is executing the ExecStop script immediately after the ExecStart script. What could be the cause?
You are running docker-compose in detached mode (option -d). After starting the containers, docker-compose will daemonise the containers and exit.
Systemd monitors the PID of docker-compose, and when it exits, assumes that your program has stopped and will invoke the ExecStop commands.
Try running it without the -d option.
The reason systemd does this is because you haven't specified the type of your unit and by default it reverts to Type=simple.
See the official documentation for Type and ExecStop.
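As a minimal sketch (reusing the paths from the unit above), you can either drop -d so that docker-compose stays in the foreground and the default Type=simple keeps working, or keep -d and declare the unit a oneshot that remains "active" after ExecStart exits:
[Service]
EnvironmentFile=/home/ec2-user/projects/wp/project-dir/vars.env
WorkingDirectory=/home/ec2-user/projects/wp/project-dir
# Keep -d, but tell systemd the service counts as up once ExecStart has exited
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker-compose up -d --build --remove-orphans
ExecStop=/usr/bin/docker-compose down
With this, ExecStop only runs when you actually stop the service (systemctl stop wp.service).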

Get output of mongo shell script

As part of my MongoDB maintenance I'm running the mongo shell and making it load 2 scripts. The command I'm running looks as follows:
$MONGO_HOME/bin/mongo --verbose --port 27017 replSetConfig.js initializeReplicaSet.js
The output I got is:
MongoDB shell version: 2.2.3
Thu Mar 7 03:00:00 versionCmpTest passed
Thu Mar 7 03:00:00 versionArrayTest passed connecting to: 127.0.0.1:27017/test
Thu Mar 7 03:00:01 creating new connection to:127.0.0.1:27017
Thu Mar 7 03:00:01 BackgroundJob starting: ConnectBG
Thu Mar 7 03:00:01 connected connection!
loading file: js/replSet.config.js
loading file: js/initializeReplicaSet.js
I'm redirecting the output to a log file, but I would like to see some output of the loaded scripts as well, i.e. the output I see in the shell if I start it and call load("...") for the very same scripts. Is there a way to capture that output?
Thanks
To get output from your scripts you must use print() or printjson() statements; otherwise MongoDB will remain quiet about any output of a script.
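As a quick sketch of the idea (the log file name here is just a placeholder): anything passed to print() or printjson() goes to stdout, so it is captured by the same redirection you already use, and the same applies to statements you add inside replSetConfig.js or initializeReplicaSet.js:
$MONGO_HOME/bin/mongo --port 27017 --eval 'print("maintenance run"); printjson(db.stats())' >> maintenance.log 2>&1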

mongorestore doesn't seem to restore all documents

The dump/enron directory contains the files messages.bson and messages.metadata.json. It should restore 120,477 documents.
I want to restore the data from it.
I enter the command:
mongorestore -v --db enron --drop dump/enron
After the command finishes I get a message:
120477 objects found
don't know what to do with file [dump/enron/messages.metadata.json]
But in the messages collection I see only 112,196 documents when I run:
db.messages.count()
Could you please tell me what the problem is?
The output of the command:
c:\mongodb\mongodb-win32-i386-2.0.5\bin>mongorestore -v dump/enron/
Tue Dec 11 14:17:39 creating new connection to:127.0.0.1
Tue Dec 11 14:17:39 BackgroundJob starting: ConnectBG
Tue Dec 11 14:17:39 connected connection!
connected to: 127.0.0.1
Tue Dec 11 14:17:39 dump/enron/messages.bson
Tue Dec 11 14:17:39 going into namespace [enron.messages]
Tue Dec 11 14:17:39 file size: 396236668
126878231/396236668 32%
270206614/396236668 68%
375698921/396236668 94%
381433738/396236668 96%
387378348/396236668 97%
394626836/396236668 99%
120477 objects found
don't know what to do with file [dump/enron/messages.metadata.json]
What does the message: "don't know what to do with file [dump/enron/messages.metadata.json]" mean?
This should work:
ssharma$ mongorestore -d enron --collection messages /dump/enron/messages.bson
connected to: 127.0.0.1
Thu Mar 7 13:05:05 /dump/enron/messages.bson
Thu Mar 7 13:05:05 going into namespace [enron.messages]
Thu Mar 7 13:05:09 74234213/396236668 18% (bytes)
Thu Mar 7 13:05:12 126614885/396236668 31% (bytes)
Thu Mar 7 13:05:15 192098158/396236668 48% (bytes)
Thu Mar 7 13:05:18 208083274/396236668 52% (bytes)
Thu Mar 7 13:05:21 231816712/396236668 58% (bytes)
Thu Mar 7 13:05:24 293564538/396236668 74% (bytes)
Thu Mar 7 13:05:27 356071219/396236668 89% (bytes)
Thu Mar 7 13:05:30 379387449/396236668 95% (bytes)
120477 objects found
Thu Mar 7 13:05:32 Creating index: { key: { _id: 1 }, ns: "enron.messages", name: "_id_" }
ssharma$ ./mongo
MongoDB shell version: 2.2.2
connecting to: test
> use enron
switched to db enron
> db.messages.count()
120477
You need to specify the database and the collection when restoring the bson file.
It works for me like so:
$ mongodump -d mark --collection coll
connected to: 127.0.0.1
DATABASE: mark to dump/mark
mark.coll to dump/mark/coll.bson
1000 objects
and
$ mongorestore -d mark --collection newcoll dump/mark/
connected to: 127.0.0.1
Wed Aug 29 11:48:39 dump/mark/coll.bson
Wed Aug 29 11:48:39 going into namespace [mark.newcoll]
1000 objects found
Can you try -
mongorestore -d enron --collection messages /dump/enron/