Dockerfile COPY: keep subdirectory structure

I'm trying to copy a number of files and folders from my local host into a Docker image at build time.
The files are laid out like this:
folder1/
    file1
    file2
folder2/
    file1
    file2
I'm trying to make the copy like this:
COPY files/* /files/
However, all of the files from folder1/ and folder2/ end up directly in /files/, without their parent folders:
files/
    file1
    file2
Is there a way in Docker to keep the subdirectory structure while copying the files into their directories? Like this:
files/
    folder1/
        file1
        file2
    folder2/
        file1
        file2

Remove the star from COPY. With this Dockerfile:
FROM ubuntu
COPY files/ /files/
RUN ls -la /files/*
The structure is preserved:
$ docker build .
Sending build context to Docker daemon 5.632 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
---> d0955f21bf24
Step 1 : COPY files/ /files/
---> 5cc4ae8708a6
Removing intermediate container c6f7f7ec8ccf
Step 2 : RUN ls -la /files/*
---> Running in 08ab9a1e042f
/files/folder1:
total 8
drwxr-xr-x 2 root root 4096 May 13 16:04 .
drwxr-xr-x 4 root root 4096 May 13 16:05 ..
-rw-r--r-- 1 root root 0 May 13 16:04 file1
-rw-r--r-- 1 root root 0 May 13 16:04 file2
/files/folder2:
total 8
drwxr-xr-x 2 root root 4096 May 13 16:04 .
drwxr-xr-x 4 root root 4096 May 13 16:05 ..
-rw-r--r-- 1 root root 0 May 13 16:04 file1
-rw-r--r-- 1 root root 0 May 13 16:04 file2
---> 03ff0a5d0e4b
Removing intermediate container 08ab9a1e042f
Successfully built 03ff0a5d0e4b

Alternatively, you can use "." instead of *: this takes everything in the build context, including folders and subfolders:
FROM ubuntu
COPY . /
RUN ls -la /

To merge a local directory into a directory within an image, do this.
It will not delete files already present in the image. It only adds files that are present locally, overwriting files in the image when a file of the same name already exists.
COPY ./local-path/. /image-path/
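For example, a minimal sketch (the local ./config/ directory, the /etc/app/ path, and the file names are made up for illustration):
FROM ubuntu
# bake a default config into the image
RUN mkdir -p /etc/app && echo "default" > /etc/app/defaults.conf
# merge the local ./config/ directory on top; defaults.conf survives
# unless ./config/ also contains a file with the same name
COPY ./config/. /etc/app/
RUN ls -la /etc/app/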

I could not get any of these answers to work for me. I had to add a dot for the current directory, so that the working Dockerfile looks like:
FROM ubuntu
WORKDIR /usr/local
COPY files/ ./files/
Also, using RUN ls to verify wasn't working for me, and getting it to work looked really involved. A much easier way to verify what ends up in the image is to run an interactive shell and look around, using docker run -it <tagname> sh.
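For example (the copytest tag is arbitrary):
$ docker build -t copytest .
$ docker run -it copytest sh
and then, inside the container, run ls -la /usr/local/files to inspect the result.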

If you want to copy a source directory entirely, with the same directory structure, then don't use a star (*). Write the COPY command in the Dockerfile as below.
COPY . destination-directory/

Related

Create empty directories with cloud_init

I am trying to configure an user account using one cloud-init yaml file that include a call to write_files module, like this:
write_files:
#passwd file for vncserver
- path: /home/ubuntu/.vnc/passwd
owner: ubuntu:ubuntu
permissions: '0600'
defer: true
encoding: b64
content: bmtzZGN1eQo=
The file is created as expected, but the problem is that the parent directory is owned by root, and not by ubuntu user.
$ ls -la .vnc/
total 12
drwxr-xr-x 2 root root 4096 Dec 20 16:24 .
drwxr-x--- 5 ubuntu ubuntu 4096 Dec 20 16:24 ..
-rw------- 1 ubuntu ubuntu 8 Dec 20 16:24 passwd
I tried to manually create the /home/ubuntu/.vnc/ directory prior to creating the passwd file, to be able to set the ownership of the directory, only to find that the documentation of write_files does not explain how to create (empty) directories.
I know that I could do this using runcmd module to insert a command like this:
runcmd:
- mkdir --mode 0600 --parents /home/ubuntu/.vnc
- echo bmtzZGN1eQo | base64 -d > /home/ubuntu/.vnc/passwd
- chmod 0600 /home/ubuntu/.vnc/passwd
but this seems too complex for such a small task.
Is it possible to use the write_files module to create directories, or to change the ownership/permissions of existing directories?
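For reference, a corrected version of that runcmd workaround might look like this (a sketch only: the directory needs mode 0700 rather than 0600, since a directory without the execute bit cannot be entered, and a chown fixes the root ownership):
#cloud-config
runcmd:
  - mkdir --mode 0700 --parents /home/ubuntu/.vnc
  - echo bmtzZGN1eQo= | base64 -d > /home/ubuntu/.vnc/passwd
  - chmod 0600 /home/ubuntu/.vnc/passwd
  - chown -R ubuntu:ubuntu /home/ubuntu/.vnc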

Why does wget not download included videos?

I use wget to download an entire website with all included assets; the problem is that wget does not download the included videos.
For example, with this website, if I execute the following command:
wget -q -r --page-requisites http://videohtml5.byethost11.com/index.html
it downloads almost everything, but if you open the web page you'll see that the video is not downloaded.
I have tried the following options without results:
-r : for recursion
--page-requisites : to download all included assets
However, if I pass the link to the video directly to wget, it works:
wget -q -r --page-requisites http://videohtml5.byethost11.com/movie.mp4
But I would like to download everything in one command. I have read the wget manual but didn't see any other option that could do that. That's why I am asking for your help.
EDIT: I changed the URL to really match my need.
SOLUTION: Because I am using Windows, I didn't have the latest release, which has the fix for the bug. Do not download wget from http://gnuwin32.sourceforge.net/packages/wget.htm; use https://eternallybored.org/misc/wget/ instead.
The video is hosted at a different domain: you need the -H parameter.
See the manpage section about spanning hosts: https://www.gnu.org/software/wget/manual/wget.html#Spanning-Hosts
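For example, something along these lines (the domain list is illustrative; replace example-video-host.com with the host that actually serves the video):
wget -q -r -H -D byethost11.com,example-video-host.com --page-requisites http://videohtml5.byethost11.com/index.html
The -D/--domains list keeps the recursion from wandering across the whole web once -H is enabled.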
== Update ==
It seems wget has a bug that prevents it from downloading the <source> of the <video> tag. See https://lists.gnu.org/archive/html/bug-wget/2013-06/msg00070.html
This works as you expect:
wget -H -r --level=1 -k -p http://camendesign.com/code/video_for_everybody/
...
drwxr-xr-x 24 root root 4096 Apr 17 10:08 camendesign.com
drwxr-xr-x 2 root root 4096 Apr 17 10:08 clips.vorwaerts-gmbh.de
drwxr-xr-x 2 root root 4096 Apr 17 10:08 forum.camendesign.com
-rw-r--r-- 1 root root 13700 May 12 2013 test.html
drwxr-xr-x 2 root root 4096 Apr 17 10:08 www.youtube.com
root@test /tmp/test# cd clips.vorwaerts-gmbh.de/
root@test /tmp/test/clips.vorwaerts-gmbh.de# ll
total 5396
-rw-r--r-- 1 root root 5510872 Feb 9 2010 big_buck_bunny.mp4

Cannot remove file or Directory

I have root on the server in question.
OS: Solaris 10 sparc
When I ls the audit_old directory I get:
root@z10801 audit_old # ls
qm2_ora_24871_1c.aud.gz
ls -al results in:
root@z10801 audit_old # ls -al
total 250658
drwxr-x--- 2 oraqm2 dba 128261632 Mar 6 21:55 .
drwxr-x--- 17 oraqm2 dba 512 Mar 6 20:55 ..
rm gives me:
root@z10801 audit_old # rm qm2_ora_24871_1c.aud.gz
qm2_ora_24871_1c.aud.gz: No such file or directory
rm -rf the dir gives me:
root@z10801 rdbms # rm -rf audit_old/
rm: Unable to remove directory audit_old/: File exists
Any help would be great!
Thanks!
This behavior may be due to a file that is currently open in a separate process.
Even though you have removed it, the file is not truly removed by the OS until the process closes it.
Try to find out which process has the file open by running:
$ fuser .
in the directory which has the problem.
This command will print the process IDs that currently have files in use there.
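For example (the path and PID are placeholders):
$ cd /path/to/audit_old
$ fuser .
$ ps -fp <pid-reported-by-fuser>
Once the offending process closes the file or is killed, the directory should become removable.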

gitweb not seeing repository because it can't see the HEAD file?

I'm trying to get gitweb set up on a CentOS 6.2 server with git/gitweb 1.7.1 and httpd 2.2.15 installed.
gitweb's default project root (verified in the CGI script) is /var/lib/git, so I've created that and a bare git repository in there:
$ ls -laF /var/lib/git
total 12
drwxrwxr-x. 3 git git 4096 Feb 8 16:37 ./
drwxr-xr-x. 15 root root 4096 Feb 8 14:20 ../
drwxrwxr-x. 7 git git 4096 Feb 8 15:37 foo/
$ git init --bare --shared foo
Initialized empty shared Git repository in /var/lib/git/foo/
$ ls -lF foo
total 32
drwxrwsr-x. 2 git git 4096 Feb 8 17:16 branches/
-rw-rw-r--. 1 git git 126 Feb 8 17:16 config
-rw-rw-r--. 1 git git 73 Feb 8 17:16 description
-rw-rw-r--. 1 git git 23 Feb 8 17:16 HEAD
drwxrwsr-x. 2 git git 4096 Feb 8 17:16 hooks/
drwxrwsr-x. 2 git git 4096 Feb 8 17:16 info/
drwxrwsr-x. 4 git git 4096 Feb 8 17:16 objects/
drwxrwsr-x. 4 git git 4096 Feb 8 17:16 refs/
$ cat foo/HEAD
ref: refs/heads/master
However on viewing http://localhost/git/, I see "404 No projects found".
I've debugged through the script and can see that it's finding /var/lib/git/foo, but Perl's -e operator fails on /var/lib/git/foo/HEAD. At the same place in the file, a backticked call to ls shows that the file is visible there, but I cannot make Perl -e see the file.
Any idea what might be making this fail? This makes no sense to me.
EDIT: note that SELinux extensions on this CentOS box appear to be disabled:
$ sudo sestatus
SELinux status: disabled
EDIT: moving everything from /var/lib/git to /git hasn't helped. I've changed the apache user to have a real shell, logged in as that user, and verified that it has access to all the directories and files in question.
It was in fact SELinux. Even though SELinux reported that it was disabled, it was somehow preventing access to some files for CGI scripts running under httpd. After enabling SELinux and setting it to permissive mode, it started working.
This seems highly non-intuitive and frustrates me, but at least it's working.
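For anyone hitting the same thing, a sketch of the sequence on CentOS 6 (adjust paths to your setup):
$ sudo sestatus
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
$ sudo reboot
$ sudo restorecon -Rv /var/lib/git
The restorecon relabels the repository tree in case the SELinux contexts are wrong for httpd.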
I'm still thinking it's an issue with permissions... but I could be wrong. Have you ensured that all of the parent directories leading up to your /var/lib/git directory are traversable by the httpd user?
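One quick way to check the whole chain at once is namei from util-linux (assuming it's available on your box):
$ namei -m /var/lib/git/foo/HEAD
Every directory component shown needs at least the execute (x) bit for the apache user, or Perl's file tests will fail even though root can see the file.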
Someone else had a similar problem here and it might be worth trying a completely different directory... maybe even /opt.

Install MongoDB PHP driver on MediaTemple DV 4.0

By following the official instructions http://www.mongodb.org/display/DOCS/Quickstart+Unix and this post http://blog.phy5ics.com/2010/03/27/installing-mongodb-on-mediatemple-dv/ I've just about managed to get MongoDB installed on MediaTemple's DV 4.0 server (I think).
I am, however, having problems installing the PHP driver: http://www.mongodb.org/display/DOCS/PHP+Language+Center
In SSH I get this:
[root@xxx]# cd /var/tmp
[root@xxx]# pecl install mongo
downloading mongo-1.1.4.tgz ...
Starting to download mongo-1.1.4.tgz (68,924 bytes)
.................done: 68,924 bytes
18 source files, building
running: phpize
Configuring for:
PHP Api Version: 20090626
Zend Module Api No: 20090626
Zend Extension Api No: 220090626
/usr/bin/phpize: /var/tmp/mongo/build/shtool: /bin/sh: bad interpreter: Permission denied
Cannot find autoconf. Please check your autoconf installation and the $PHP_AUTOCONF environment variable. Then, rerun this script.
ERROR: `phpize' failed
I am logged in as the root user. I don't understand why it's failing, or what steps I need to take to install the PHP driver.
Thanks
Run the following commands on your server's command line:
$ mkdir /root/tmp
$ mount --bind /root/tmp /tmp
$ umount /tmp; umount /var/tmp
$ pecl install mongo
A few things:
/root/tmp is just an arbitrary temp directory. You can use whatever you want, provided it exists.
Some instructions say to use --host instead of --bind. On RHEL/CentOS mount says --host is an unrecognized option.
If you're on a VM, it's likely that you'll have to do this each time you restart your VM/Container (see the fstab sketch below).
For Media Temple customers, I can confirm that this works on both (dv) and (ve) servers with CentOS 5 and 6.
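If you want the bind mount to survive a reboot, an /etc/fstab entry along these lines should work (a sketch; test it on your own setup):
/root/tmp  /tmp  none  bind  0 0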
From Media Temple support: you need to create a temporary directory (/root/tmpz):
$ mkdir /root/tmpz
$ mount --host /root/tmpz /tmp
$ umount /tmp; umount /var/tmp
$ pecl install mongo
Build complete.
Don't forget to run 'make test'.
running: make INSTALL_ROOT="/var/tmp/pear-build-root/install-mongo-1.1.4" install
Installing shared extensions: /var/tmp/pear-build-root/install-mongo-1.1.4/usr/lib64/php/modules/
running: find "/var/tmp/pear-build-root/install-mongo-1.1.4" | xargs ls -dils
69094140 4 drwxr-xr-x 3 root root 4096 Feb 22 13:40 /var/tmp/pear-build-root/install-mongo-1.1.4
69275176 4 drwxr-xr-x 3 root root 4096 Feb 22 13:40 /var/tmp/pear-build-root/install-mongo-1.1.4/usr
69275177 4 drwxr-xr-x 3 root root 4096 Feb 22 13:40 /var/tmp/pear-build-root/install-mongo-1.1.4/usr/lib64
69290445 4 drwxr-xr-x 3 root root 4096 Feb 22 13:40 /var/tmp/pear-build-root/install-mongo-1.1.4/usr/lib64/php
69290447 4 drwxr-xr-x 2 root root 4096 Feb 22 13:40 /var/tmp/pear-build-root/install-mongo-1.1.4/usr/lib64/php/modules
69290448 676 -rwxr-xr-x 1 root root 684126 Feb 22 13:40 /var/tmp/pear-build-root/install-mongo-1.1.4/usr/lib64/php/modules/mongo.so
Build process completed successfully
Installing '/usr/lib64/php/modules/mongo.so'
install ok: channel://pecl.php.net/mongo-1.1.4
configuration option "php_ini" is not set to php.ini location
You should add "extension=mongo.so" to php.ini
Do you have php-devel (php-dev on some distros) installed? phpize is basically "compiling" the MongoDB driver, and unless you have the dev package installed, this may not work.
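For example, on CentOS something like this should pull in phpize and the build tools (package names are an assumption and vary by distro/PHP build):
$ yum install php-devel gcc make autoconf
$ pecl install mongo
Then add extension=mongo.so to your php.ini, as the pecl output suggests.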