When I enter the command in PowerShell I get this error:
"invalid argument "Dockerfile2**" for "-t, --tag" flag: invalid reference format: repository name must be lowercase
See docker build --help.
The Dockerfile I created was written in Word, but I saved it as plain text.
This is what I typed in my Dockerfile:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
You are tagging your Docker image as "Dockerfile2".
You can't use uppercase letters when tagging your Docker image.
Change the -t parameter from "Dockerfile2" to "dockerfile2" when building the Docker image.
Based on the error message, tag names have to be lowercase.
Try changing "Dockerfile2" in your command to all lowercase: "dockerfile2".
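For example (the tag name itself is up to you, it just has to be lowercase):
docker build -t Dockerfile2 .   # fails: repository name must be lowercase
docker build -t dockerfile2 .   # works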
Please do the following in order to build it successfully using PowerShell:
First, check that Docker is installed on your system by entering the "docker --version" command in PowerShell. If you see your Docker version, you are good to go; otherwise, install Docker properly.
Create a simple text file (not a Word document etc.) called Dockerfile (if you use another file name you will have to specify it with the -f option).
Paste your Dockerfile entries into it and save the file.
In PowerShell, go to the path that contains your Dockerfile and run "docker build -t dockerfile2 ." (the tag just has to be lowercase).
Check your new image by running "docker image ls".
In my environment, your file built successfully, but there was a warning regarding one of the commands in your Dockerfile:
[WARNING]: Empty continuation line found in:
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); rm -f /lib/systemd/system/multi-user.target.wants/*; rm -f /etc/systemd/system/*.wants/*; rm -f /lib/systemd/system/local-fs.target.wants/*; rm -f /lib/systemd/system/sockets.target.wants/*udev*; rm -f /lib/systemd/system/sockets.target.wants/*initctl*; rm -f /lib/systemd/system/basic.target.wants/*; rm -f /lib/systemd/system/anaconda.target.wants/*;
[WARNING]: Empty continuation lines will become errors in a future release.
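For reference, the full sequence from the steps above looks like this in PowerShell (the directory path is hypothetical and dockerfile2 is just the lowercase tag suggested earlier):
docker --version
cd C:\path\to\your\build\context    # hypothetical folder containing the Dockerfile
docker build -t dockerfile2 .
docker image ls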
Suppose I've got a zip file available under some URL. I need to get its hash, which should be identical to the one output by nix-prefetch-url --unpack <URL>, but without a working Nix installation. How can one do it?
It seems there is no easy way, as nix-prefetch-url adds the file to the store. More details here: https://discourse.nixos.org/t/generate-a-file-hash-similar-to-the-one-output-by-nix-prefetch-url/19907 (many thanks to a prompt and thorough response from a community member).
Use Docker.
Demo:
$ nix-prefetch-url --unpack https://github.com/hraban/git-hly/archive/06ff628d5f2b02d1a883c94b01d58187d117f4f3.tar.gz
path is '/nix/store/gxx1pfp19s3a39j6gl0xw197b4409cmp-06ff628d5f2b02d1a883c94b01d58187d117f4f3.tar.gz'
164gyvpdm6l6rdvn2rwjz95j1jz0w2igcbk9shy862sdx2rdw9hn
$ # Or .zip: it's the same, because of --unpack:
$ nix-prefetch-url --unpack https://github.com/hraban/git-hly/archive/06ff628d5f2b02d1a883c94b01d58187d117f4f3.zip
path is '/nix/store/1bpjlzknnmq1x3hq213r44jwag1xkaqs-06ff628d5f2b02d1a883c94b01d58187d117f4f3.zip'
164gyvpdm6l6rdvn2rwjz95j1jz0w2igcbk9shy862sdx2rdw9hn
Download to a local directory
$ cd "$(mktemp -d)"
$ curl -sSL --fail https://github.com/hraban/git-hly/archive/06ff628d5f2b02d1a883c94b01d58187d117f4f3.tar.gz | tar xz
$ cd *
And test it:
$ # Using the modern nix command:
$ nix hash path --base32 .
164gyvpdm6l6rdvn2rwjz95j1jz0w2igcbk9shy862sdx2rdw9hn
$ # Or the same, using nix-hash:
$ nix-hash --type sha256 --base32 .
164gyvpdm6l6rdvn2rwjz95j1jz0w2igcbk9shy862sdx2rdw9hn
Same in Docker:
$ docker run --rm -v "$PWD":/data nixos/nix nix --extra-experimental-features nix-command hash path --base32 /data
164gyvpdm6l6rdvn2rwjz95j1jz0w2igcbk9shy862sdx2rdw9hn
$ docker run --rm -v "$PWD":/data nixos/nix nix-hash --type sha256 --base32 /data
164gyvpdm6l6rdvn2rwjz95j1jz0w2igcbk9shy862sdx2rdw9hn
P.S.: I'm not a huge fan of nix-prefetch-url's default output (base32). The default output of nix hash path is better, if you can use it:
$ nix hash path .
sha256-FibesuhNC4M81Gku9qLg4MsgS/qSZ2F3y4aa2u72j5g=
$ # Sanity check:
$ nix-hash --type sha256 --to-base32 $(<<<"FibesuhNC4M81Gku9qLg4MsgS/qSZ2F3y4aa2u72j5g=" base64 -d | hexdump -v -e '/1 "%02x"' )
164gyvpdm6l6rdvn2rwjz95j1jz0w2igcbk9shy862sdx2rdw9hn
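If you'd rather not unpack anything locally, the same nixos/nix image can also run nix-prefetch-url for you (assuming the container has network access); it should print the same base32 hash as above:
$ docker run --rm nixos/nix nix-prefetch-url --unpack https://github.com/hraban/git-hly/archive/06ff628d5f2b02d1a883c94b01d58187d117f4f3.tar.gz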
I am trying to copy files from the pod to local using the following command:
kubectl cp /namespace/pod_name:/path/in/pod /path/in/local
But the command terminates with exit code 126 and the copy doesn't take place.
Similarly, while trying to copy from local to the pod using the following command:
kubectl cp /path/in/local /namespace/pod_name:/path/in/pod
It throws the following error:
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "tar": executable file not found in $PATH: unknown
Please help me with this.
kubectl cp is actually a very small wrapper that runs tar c inside the container via kubectl exec and pipes it into tar x (or the other way around). A side effect of this is that you need a working tar executable in the target container, which you do not appear to have.
In general, kubectl cp is best avoided; it's usually only good for odd debugging tasks.
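Conceptually it does something like this (a sketch with placeholder names, not the exact command kubectl runs), which is exactly why it breaks when the image has no tar:
kubectl exec -n <namespace> <pod_name> -- tar cf - /path/in/pod | tar xf -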
kubectl cp requires tar to be present in your container, as the help says:
!!!Important Note!!!
Requires that the 'tar' binary is present in your container
image. If 'tar' is not present, 'kubectl cp' will fail.
Make sure your container contains the tar binary in its $PATH
An alternative way to copy a file from the local filesystem into a container:
cat [local file path] | kubectl exec -i -n [namespace] [pod] -c [container] "--" sh -c "cat > [remote file path]"
A useful command to copy a file from the pod to local:
kubectl exec -n <namespace> <pod> -- cat <filename with path> > <filename>
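For example (namespace, pod and file names below are placeholders of my own, not from the question):
kubectl exec -n default my-pod -- cat /var/log/app.log > ./app.log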
For me, the cat approach worked like this:
cat <file name> | kubectl exec -i <pod-id> -- sh -c "cat > <filename>"
Example:
cat file.json | kubectl exec -i server-77b7976cc7-x25s8 -- sh -c "cat > /tmp/file.json"
I didn't need to specify the namespace since I ran the command from a specific project, and since we have only one container, I didn't need to specify it either.
What is the significance of -- on the command line of commands like lxc-create or lxc-start?
I tried to use Google to find an answer, but without success.
// Example 1
lxc-create -t download -n u1 -- -d ubuntu -r DISTRO-SHORT-CODENAME -a amd64
// Example 2
application="/root/app.out"
start="/root/lxc-app/lxc-start"
$start -n LXC_app -d -f /etc/lxc/lxc-app/lxc-app.conf -- $application &
As explained in the references provided in the comments, the "--" marks the end of the options passed to the command itself. The parameters/options that follow will eventually be used by a sub-command called by the command.
In your example:
lxc-create -t download -n u1 -- -d ubuntu -r DISTRO-SHORT-CODENAME -a amd64
The lxc-create command will interpret "-t download -n u1", and the remaining "-d ubuntu -r DISTRO-SHORT-CODENAME -a amd64" will be passed to the template script, which will configure/populate the container.
In this specific example, "-t download" makes lxc-create run a template script named something like "/usr/share/lxc/templates/lxc-download", to which it will pass "-d ubuntu -r DISTRO-SHORT-CODENAME -a amd64".
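The same "end of options" convention is used by most getopt-style tools, not just lxc; a quick illustration with grep (my own example, not from the question):
grep -- -v logfile.txt    # after --, "-v" is treated as the search pattern, not as grep's own -v option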
I am pulling the postgres:12.0-alpine Docker image to build my database. My intention is to replace the postgresql.conf file in the container to reflect the changes I want (changing the data directory, modifying backup options, etc.). I am trying with the following Dockerfile:
FROM postgres:12.0-alpine
# create the custom user
RUN addgroup -S custom && adduser -S custom_admin -G custom
# create the appropriate directories
ENV APP_HOME=/home/data
ENV APP_SETTINGS=/var/lib/postgresql/data
WORKDIR $APP_HOME
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME/entrypoint.sh
# copy postgresql.conf
COPY ./postgresql.conf $APP_HOME/postgresql.conf
RUN chmod +x /home/data/entrypoint.sh
# chown all the files to the app user
RUN chown -R custom_admin:custom $APP_HOME
RUN chown -R custom_admin:custom $APP_SETTINGS
# change to the app user
USER custom_admin
# run entrypoint.sh
ENTRYPOINT ["/home/data/entrypoint.sh"]
CMD ["custom_admin"]
My entrypoint.sh looks like this:
#!/bin/sh
rm /var/lib/postgresql/data/postgresql.conf
cp ./postgresql.conf /var/lib/postgresql/data/postgresql.conf
echo "replaced .conf file"
exec "$#"
But I get an exec error saying 'custom_admin: not found' on the 'exec "$@"' line. What am I missing here?
In order to provide a custom configuration, please use the command below:
docker run -d --name some-postgres -v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf postgres -c 'config_file=/etc/postgresql/postgresql.conf'
Here my-postgres.conf is your custom configuration file.
Refer to the Docker Hub page for more information about the postgres image.
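If you don't already have a postgresql.conf to start from, the Docker Hub page also documents how to copy the sample config out of the image (the path below is the one documented for the Debian-based image; it may differ on alpine):
docker run -i --rm postgres cat /usr/share/postgresql/postgresql.conf.sample > my-postgres.conf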
Better to use the answer suggested by @Thilak; you do not need a custom image just to use a custom config.
Now, the problem with CMD ["custom_admin"] in the Dockerfile: whatever command is passed to CMD is executed at the end of the entrypoint, and that command normally refers to the main, long-running process of the container. custom_admin looks like a user, not a command. You need to replace it with the process that should run as the main process of the container.
Change CMD to
CMD ["postgres"]
I would suggest modifying the official entrypoint, which does many tasks out of the box; your own entrypoint just starts the container with no DB initialization, etc.
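As a rough sketch of what this answer describes (not a drop-in fix; as noted above, bypassing the official docker-entrypoint.sh means the usual database initialization does not happen):
# Dockerfile (tail): keep the custom entrypoint, make CMD the real server process
ENTRYPOINT ["/home/data/entrypoint.sh"]
CMD ["postgres"]

#!/bin/sh
# entrypoint.sh: adjust the config, then exec whatever CMD supplied (here: postgres)
set -e
cp /home/data/postgresql.conf /var/lib/postgresql/data/postgresql.conf
echo "replaced .conf file"
exec "$@"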
I am creating a backup script to dump MongoDB inside a container. I need to copy the dump folder outside the container, but docker cp doesn't seem to work with wildcards:
docker cp mongodb:mongo_dump_* .
The following is thrown in the terminal :
Error response from daemon: lstat /var/lib/docker/aufs/mnt/SomeHash/mongo_dump_*: no such file or directory
Is there any workaround to use wildcards with the cp command?
I had a similar problem, and had to solve it in two steps:
$ docker exec <id> bash -c "mkdir -p /extract; cp -f /path/to/fileset* /extract"
$ docker cp <id>:/extract/. .
It seems there is no way yet to use wildcards with the docker cp command: https://github.com/docker/docker/issues/7710.
You can create the mongo dump files in a folder inside the container and then copy that folder, as detailed in the other answer here.
If you have a large dataset and/or need to do the operation often, the best way to handle it is to use Docker volumes, so you can access the files from the container directly in a host folder without any extra command: https://docs.docker.com/engine/userguide/containers/dockervolumes/
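For example, a bind-mounted backup directory lets mongodump write straight to the host (the container name and paths below are illustrative, not from the question):
docker run -d --name mongodb -v "$PWD/backup":/backup mongo
docker exec mongodb mongodump --out /backup
ls ./backup    # the dump is already on the host; no docker cp needed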
Today I faced the same problem and solved it like this:
docker exec container /bin/sh -c 'tar -cf - /some/path/*' | tar -xvf -
Hope this helps.
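Adapted to the mongo_dump_* folders from the question, that would be something along these lines (adjust the in-container path to wherever the dumps actually live):
docker exec mongodb /bin/sh -c 'cd /path/to/dumps && tar -cf - mongo_dump_*' | tar -xvf -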