docker postgres failed to start with specified port - postgresql

I'm new to Docker and I'm trying to run a Postgres database container with the following command:
docker run --name rva-db -e POSTGRES_PASSWORD=rva -e POSTGRES_DB=rva-db -d postgres -p 5432:5432
If I run it without the -p option it seems to work fine, but I can't reach it from my local pgAdmin, so I thought I needed to add the port mapping to reach it.
Anyway, the container always crashes after a few seconds, and when I try to start it with the start command I get the following output:
docker start -a rva-db
FATAL: invalid value for parameter "port": "5432:5432"
What did I miss?
FYI, I'm running it on macOS with the following Docker version:
$ docker version
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.7.1
Git commit: 6f9534c
Built: Thu Sep 8 10:31:18 2016
OS/Arch: darwin/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 17:52:38 2016
OS/Arch: linux/amd64

Run the container with the -p option before the image name:
docker run --name rva-db -e POSTGRES_PASSWORD=rva -e POSTGRES_DB=rva-db -d -p 5432:5432 postgres
As per the Docker run reference, docker run has this format:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Options must come before the image name. After the image name you can set the command and its arguments (when they differ from the defaults in the Dockerfile); the entrypoint itself is overridden with the --entrypoint option, which also goes before the image name.
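To see why the original command failed, it helps to trace how the arguments split. A plain-shell sketch (no docker needed; the two strings are just the broken and the fixed command lines):

```shell
# docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
bad="docker run --name rva-db -d postgres -p 5432:5432"
good="docker run --name rva-db -d -p 5432:5432 postgres"
# In $bad, option parsing stops at the first non-option word ("postgres"),
# so "-p 5432:5432" is handed to the postgres binary itself, which reads -p
# as its own "port" parameter and rejects the value "5432:5432".
# In $good, -p reaches docker and the image name comes last:
echo "${bad##* }"    # → 5432:5432 (an argument passed to postgres)
echo "${good##* }"   # → postgres  (the image name)
```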

Related

How to install chrony on redhat 8 minimal

I'm using the Keycloak Docker image and need to synchronize time with chrony. However, I cannot install chrony, because it's not in the repository, I assume.
I use the image from https://hub.docker.com/r/jboss/keycloak
It's based on registry.access.redhat.com/ubi8-minimal
Steps to reproduce:
~$ docker run -d --rm -p 8080:8080 --name keycloak jboss/keycloak
~$ docker exec -it -u root keycloak bash
[root@707c136d9c8a /]# microdnf install chrony
error: No package matches 'chrony'
I'm not able to find a working repo which provides chrony for Red Hat 8 minimal.
Apparently I need to synchronize time on the host system; it's nothing to do with the container itself. Silly me, I need a break.
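That matches how containers work: they share the host kernel's clock, so fixing drift on the host fixes it inside the container too. A quick host-side sanity check (a sketch; timedatectl is only present on systemd hosts):

```shell
# The container reads the same kernel clock as the host, so check the host:
date -u
# On systemd hosts this reports whether NTP synchronization is active:
command -v timedatectl >/dev/null 2>&1 && timedatectl show -p NTPSynchronized || true
```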

Error starting Docker container (WSL, docker-ce, Ubuntu 16.04)

Microsoft Windows [Version 10.0.17134.285],
Ubuntu 16.04 (WSL),
docker-ce (stable)
I am following the instructions here - https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly. I opted for "stable" rather than "edge". I mounted the c drive mapping manually with
sudo mkdir /c
sudo mount --bind /mnt/c /c
rather than the WSL config file way, because I wasn't sure if I wanted it for ALL my WSL instances. Other than that, I followed the instructions.
I have started the Docker daemon with
sudo cgroupfs-mount
sudo dockerd -H tcp://0.0.0.0:2375 --tls=false
When I try to start my container with
docker run -d -p 27017:27017 --name testDB mongo:3.4
I get
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:46: preparing rootfs caused \\\"invalid argument\\\"\"": unknown.
and I cannot connect to the MongoDB on the container using localhost:27017.
docker ps -a
shows
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e115d1c409a3 mongo:3.4 "docker-entrypoint.s…" 6 seconds ago Created 0.0.0.0:27017->27017/tcp testDB
and
docker info
shows
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 1
Server Version: 18.06.1-ce
Storage Driver: overlay2
Backing Filesystem: <unknown>
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Kernel Version: 4.4.0-17134-Microsoft
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.9GiB
Name: DESKTOP-4F100D9
ID: EFH2:O3RT:3OO4:27P5:ZNK7:N5JW:WE5M:4VSK:QREN:YCV4:GSYG:ZDTR
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
Any ideas what I did wrong and how to fix it?
(I need to run Docker under Linux(WSL) - I cannot use Docker for Windows because we are using VirtualBox, and Hyper-V is disabled)
Currently, you cannot run the Docker daemon directly inside WSL. There are several issues, mostly with networking; it works only for simple images like hello-world (see the Reddit topic).
What you can do is connect from WSL to the Docker daemon in Windows. Following the tutorial you mentioned is fine, but if you're running it with VirtualBox you have to either start the default machine or create and start a new one. This docker machine will be your daemon.
By default the docker-machine command does not work correctly in WSL, but you can make it work by putting this code into e.g. your ~/.bashrc file:
# Ability to run docker-machine command properly from WSL
function docker-machine()
{
if [ "$1" == "env" ]; then
docker-machine.exe $1 $2 --shell bash | sed 's/C:/\/c/' | sed 's/\\/\//g' | sed 's:#.*$::g' | sed 's/"//g'
printf "# Run this command to configure your shell:\n"
printf "# eval \"\$(docker-machine $1 $2)\"\n"
else
docker-machine.exe "$@"
fi
}
export -f docker-machine
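The sed chain in the function only rewrites the Windows-flavoured output of docker-machine.exe env into something bash can eval. A sketch of the transformation on a made-up sample line (the path is hypothetical, but the format mirrors typical docker-machine env output):

```shell
# A sample line as docker-machine.exe env --shell bash might print it:
sample='export DOCKER_CERT_PATH="C:\Users\me\.docker\machine\certs"'
# C:\ becomes /c/, backslashes become slashes, comments and quotes are stripped:
converted=$(printf '%s\n' "$sample" \
  | sed 's/C:/\/c/' | sed 's/\\/\//g' | sed 's:#.*$::g' | sed 's/"//g')
printf '%s\n' "$converted"   # → export DOCKER_CERT_PATH=/c/Users/me/.docker/machine/certs
```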
After running source ~/.bashrc or reopening the bash you can run:
docker-machine start default - will start machine
eval $(docker-machine env default) - will connect your bash session to the machine
and then you should be able to run all the docker stuff like
docker ps
docker run -it alpine sh
docker build
etc
The docker machine will run until you either stop it or you shut down your PC. If you open a new bash session (window), you have to run just eval $(docker-machine env default) in order to connect your new session to the machine.
Hope it helps. :)
A simpler solution is to use Docker for Windows from within WSL instead.
Just add the following to your WSL .bashrc file.
export PATH="$HOME/bin:$HOME/.local/bin:$PATH"
export PATH="$PATH:/mnt/c/Program Files/Docker/Docker/resources/bin"   # the quotes already handle the space, no backslash needed
alias docker=docker.exe
alias docker-compose=docker-compose.exe
Reference: https://blog.jayway.com/2017/04/19/running-docker-on-bash-on-windows/

Unable to perform docker run using Invoke-command

I'm new to Docker. I run Docker "natively" on Windows Server 2016 with a Windows container; there is no intermediate VM (no docker-machine) in between and no Docker Toolbox, so the "host" is the actual Windows Server that I run Docker on.
Docker version:
PS C:> docker version
Client:
Version: 17.03.1-ee-3
API version: 1.27
Go version: go1.7.5
Git commit: 3fcee33
Built: Thu Mar 30 19:31:22 2017
OS/Arch: windows/amd64
Server:
Version: 17.03.1-ee-3
API version: 1.27 (minimum version 1.24)
Go version: go1.7.5
Git commit: 3fcee33
Built: Thu Mar 30 19:31:22 2017
OS/Arch: windows/amd64
Experimental: false
PS C:>
I pulled the image from Docker Hub. I need to replace the files inside the Docker image while it is running and commit the changes to the image.
Let's say I have Sample.java and datafile.properties inside the Docker image which I pulled from Docker Hub.
I want to replace them with Hello.java and data.properties (I pulled these files from GitHub).
How would I do that in an automated way? Any advice and some examples on this would be helpful. Thanks in advance.
The best way to build an image in an automated fashion is to use a Dockerfile. Some information can be found in the documentation, for example: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
If you have your Hello.java and datafile.properties in a directory, create a Dockerfile in the same directory, e.g.:
FROM the-base-image-on-docker-hub
RUN rm /path/to/Sample.java
COPY ./Hello.java /path/to/
COPY ./datafile.properties /path/to/
You can then build your image, and "tag" it as myimage:latest, with:
docker image build -t myimage:latest .
(the period at the end (.) means: use the current directory as the "build context" - the build context is uploaded to the docker daemon, and everything in it is accessible to add to your docker image using the COPY or ADD Dockerfile instructions)
This is a very naive example, just to illustrate the concept; I suggest reading the documentation, to understand the concept, and searching for more examples.
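To make the build-context idea concrete, here is a sketch that lays out such a directory with plain shell; the file contents and paths are placeholders, not the real project files:

```shell
# Create a throwaway build context: the Dockerfile sits next to the files it COPYs.
ctx=$(mktemp -d)
cat > "$ctx/Dockerfile" <<'EOF'
FROM the-base-image-on-docker-hub
RUN rm /path/to/Sample.java
COPY ./Hello.java /path/to/
COPY ./datafile.properties /path/to/
EOF
printf 'class Hello {}\n' > "$ctx/Hello.java"
printf 'key=value\n' > "$ctx/datafile.properties"
ls "$ctx"
# On a docker host you would then build from that directory:
#   docker image build -t myimage:latest "$ctx"
```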

Transition PostgreSQL persistent storage on docker to modern docker storage only

With the advent of
docker volume create
for storage-only containers, I'm still using the old way of running Postgres on my machine for small applications, without a Dockerfile:
# MAKE MY DATA STORE
STORAGE_DIR=/home/username/mydockerdata/pgdata
docker create -v $STORAGE_DIR:/var/lib/postgresql/data --name mypgdata ubuntu true
# CREATE THE PG
docker run --name mypg -e POSTGRES_PASSWORD=password123 -d -p 5432:5432 --volumes-from mypgdata library/postgres:9.5.4
# RUN IT
docker start mypg
# docker stop mypg
I have 4 questions:
How can I move from the old way of storing my data in a local, persistent container to modern volumes?
The permissions my way produces have always seemed wacky:
$ ls -lah $STORAGE_DIR/..
drwx------ 19 999 root 4.0K Aug 28 10:04 pgdata
Should I do this differently?
Does my networking look correct here? Will this be visible only on the machine hosting Docker, or is it also published to all machines on my Wi-Fi network?
Besides the weak password, standard port, and default username used here, are there other security concerns in doing this for personal use only that I should be aware of?
Create a new volume and copy the data over. Then run your container with the new volume definition.
docker volume create --name mypgdata
docker run --rm \
-v $STORAGE_DIR:/data \
-v mypgdata:/datanew ubuntu \
sh -c 'tar -C /data -cf - . | tar -C /datanew -xvf -'
docker run --rm -v mypgdata:/data ubuntu ls -l /data
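The middle command is just a tar pipe between two mounts; the same trick works between any two directories, so you can rehearse it locally before pointing it at real volumes (the directories below are throwaway temp dirs):

```shell
# Replicate a directory tree with a tar pipe, as the volume copy above does.
src=$(mktemp -d); dst=$(mktemp -d)
echo 'hello' > "$src/file.txt"
mkdir "$src/sub"; echo 'nested' > "$src/sub/inner.txt"
# -C switches directory first; "-cf - ." writes an archive of "." to stdout,
# and "-xf -" unpacks from stdin, preserving the tree, owners and modes.
tar -C "$src" -cf - . | tar -C "$dst" -xf -
ls -R "$dst"
```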
The permissions are normal. UID 999 is the postgres user that the postgres image creates.
Port 5432 will be accessible on all of your docker host's interfaces. If you only want it to be available on localhost, use -p 127.0.0.1:5432:5432
Moving to listening on localhost mitigates most security issues, until someone gains access to your docker host. General security is a bit too broad a topic for a dot point.

Docker: Mongo exits on run

Using:
https://registry.hub.docker.com/_/mongo/
I did this to pull in all tags:
docker pull mongo
Then, when I try to run it with
docker run -v /data:/data --name mongodb -p 4000:27017 mongo:2.6.6
The status shows
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5959d3f79243 mongo:2.6.6 "/entrypoint.sh mong 4 seconds ago Exited (1) 3 seconds ago mongodb
Logs show:
numactl: This system does not support NUMA policy
How do I keep mongo running while using docker? I am using Docker 1.4.1 on OSX (boot2docker).
Indeed, the boot2docker VM doesn't support NUMA, and the current Dockerfile executes mongod through numactl. A possible workaround:
$ docker run -v /data:/data --name mongodb -p 4000:27017 --entrypoint=mongod mongo:2.6.6
This uses --entrypoint to override the image-defined ENTRYPOINT and execute mongod directly, bypassing the numactl wrapper.