Cannot connect to docker container mapped port - mongodb

I'm using the mongo image in Docker, but I cannot connect to port 20217.
docker@default:~$ docker ps
The port info shows: 0.0.0.0:20217->20217/tcp, 27017/tcp
but,
gilbertdeMacBook-Pro:~ gilbert$ lsof -i tcp:20217
there is no output (no PID),
gilbertdeMacBook-Pro:~ gilbert$ docker info
Containers: 3
Images: 43
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 50
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.13-boot2docker
Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
CPUs: 1
Total Memory: 1.956 GiB
Name: default
ID: MRAZ:ZG5E:HDMY:EJNQ:HFL4:PW6Y:AXIS:6JFL:PFI5:GBAY:5SMF:NYQR
Debug mode (server): true
File Descriptors: 25
Goroutines: 44
System Time: 2016-01-27T14:53:52.005531869Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: gilbertgan
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox

I found that this is because, on macOS, docker-machine runs Docker inside a VM, so we need to use the VM's IP when connecting to the container.
The IP can be shown with: docker-machine ls

Your docker container maps port 20217, which isn't the MongoDB default port. The correct port is 27017. And gilbert_gan is right as well: when running docker via docker-machine, the docker host is not localhost but rather the virtual machine under docker-machine's control.
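Putting both points together, a minimal sketch (assuming the docker-machine VM is named default, as shown in the docker info output above):
docker run -d -p 27017:27017 --name mongo mongo    # publish MongoDB's actual port
docker-machine ip default                          # prints the VM's IP, e.g. 192.168.99.100
mongo --host "$(docker-machine ip default)" --port 27017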

Related

Docker container can't start up with a 166G (nfs/efs) volume

My docker container can't start up with a 166G (nfs/efs) volume. I mounted the EFS onto the EC2 instance with the amazon-efs-utils tool and then passed it to docker as a volume. Below is my docker-compose file:
services:
  app:
    image: xxxx
    restart: always
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
    ports:
      - "8002:8002"
    volumes:
      - "./BinaryObjects/Static:/app/App_Data/BinaryObjects/Static"
      - "./BinaryObjects/Temp:/app/App_Data/BinaryObjects/Temp"
./BinaryObjects is the root the EFS is mounted to; the Static folder holds 166G of files and the Temp folder holds 10M. If I remove the Static entry from volumes, the container starts up, so I don't think it is a permission issue. I did the same thing in our staging environment and it works well; the difference between staging and production is that the Static folder in staging holds only 2G of files. I have also tried creating the NFS volumes first:
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=xxxx.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  --opt device=:/Static \
  efs-static
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=xxxx.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  --opt device=:/Temp \
  efs-temp
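(With the named-volume approach, the compose file would reference the volumes instead of the bind paths; a sketch, assuming the volumes were created as above:)
services:
  app:
    image: xxxx
    restart: always
    ports:
      - "8002:8002"
    volumes:
      - "efs-static:/app/App_Data/BinaryObjects/Static"
      - "efs-temp:/app/App_Data/BinaryObjects/Temp"
volumes:
  efs-static:
    external: true
  efs-temp:
    external: true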
But no luck; the container can't start up that way either, and again it works if I remove the efs-static volume. It just hangs with no error message.
Does anyone know why? I want the container to start up with my 166G EFS volume. Thanks.
My docker info:
Client:
Context: default
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 31
Server Version: 20.10.7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version:
runc version:
init version:
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.4.0-1059-aws
Operating System: Ubuntu 18.04.6 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.18GiB
Name: ip-172-31-50-32
ID: XWJL:SUKJ:DVZL:SVII:JFHJ:W7ND:LZJ5:2LKU:ASBA:ILEB:HF54:KLCV
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

Docker container communication with other container on different host/server

I have two servers (CentOS 8).
On server1 I have a mysql-server container, and on server2 I have the zabbix front-end, i.e. zabbix-web-apache-mysql (container name zabbixfrontend).
I am trying to connect to mysql-server from the zabbixfrontend container and getting this error:
bash-4.4$ mysql -h <MYSQL_SERVER_IP> -P 3306 -uroot -p
Enter password:
ERROR 2002 (HY000): Can't connect to MySQL server on '<MYSQL_SERVER_IP>' (115)
When I nc from the zabbixfrontend container to my mysql-server IP, I get a "No route to host." error message.
bash-4.4$ nc -zv <MYSQL_SERVER_IP> 3306
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: No route to host.
NOTE: I can successfully nc from the host machine (server2) to the mysql-server container.
docker-compose.yml:
version: '3.5'
services:
  zabbix-web-apache-mysql:
    image: zabbix/zabbix-web-apache-mysql:centos-8.0-latest
    container_name: zabbixfrontend
    #network_mode: host
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zbx_env/etc/ssl/apache2:/etc/ssl/apache2:ro
      - ./usr/share/zabbix/:/usr/share/zabbix/
    env_file:
      - .env_db_mysql
      - .env_web
    secrets:
      - MYSQL_USER
      - MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD
    # zbx_net_frontend:
    sysctls:
      - net.core.somaxconn=65535
secrets:
  MYSQL_USER:
    file: ./.MYSQL_USER
  MYSQL_PASSWORD:
    file: ./.MYSQL_PASSWORD
  MYSQL_ROOT_PASSWORD:
    file: ./.MYSQL_ROOT_PASSWORD
The docker logs zabbixfrontend output is as below:
** Deploying Zabbix web-interface (Apache) with MySQL database
** Using MYSQL_USER variable from ENV
** Using MYSQL_PASSWORD variable from ENV
********************
* DB_SERVER_HOST: <MYSQL_SERVER_IP>
* DB_SERVER_PORT: 3306
* DB_SERVER_DBNAME: zabbix
********************
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
The nc message is telling the truth: No route to host.
This happens because when you deploy your front-end container on the docker bridge network, its IP address belongs to the 172.18.0.0/16 subnet, and you are trying to reach the database via an IP address that belongs to a different subnet (10.0.0.0/16).
On the other hand, when you deploy your front-end container on the host network, you no longer face that problem, because the container now literally uses the IP address of the host machine, 10.0.0.2, and no route needs to be explicitly created to reach 10.0.0.3.
The problem you then face is that you can no longer access the web UI via the browser. This happens because, I assume, you kept the ports: option in your docker-compose.yml and tried to access the service on localhost:80/443. Source and destination ports do not need to be specified if you run the container on the host network; the container just listens directly on the host, on the port that is opened inside the container.
Try to run the front-end container with this config and then access it on localhost:8080 and localhost:8443:
...
network_mode: host
# ports:
#   - "80:8080"
#   - "443:8443"
volumes:
...
Running containers on the host network is not something I would usually recommend, but since your setup is quite special, with one container running on one docker host and another container running on a second, independent docker host, I assume you don't want to create an overlay network and register the two docker hosts to a swarm.
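For completeness, the overlay-network route would look roughly like this (a sketch; it assumes the two hosts can reach each other on the swarm ports 2377/tcp, 7946/tcp+udp and 4789/udp, and the network name zbx_net is made up):
# on server1
docker swarm init
# on server2, using the join token printed by the init step
docker swarm join --token <token> <server1-ip>:2377
# back on server1, create an attachable overlay network
docker network create --driver overlay --attachable zbx_net
# then start each container with --network zbx_net and reach MySQL by container name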

File permission issue with mongo docker image

I'm trying to instantiate a mongodb docker image using the following command:
docker run -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password mongo
The command fails instantly with a Permission denied error:
2019-11-12T20:16:29.503+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/docker-entrypoint-temp-mongod.pid: Permission denied
The weird thing is that the same command works fine on some other machines that have the same users, groups, and so on. The only thing that differs is the docker version.
I don't understand why the mongo instance does not run, as I do not have any volumes or user specified on the command line.
Here is my docker info
Client:
Debug Mode: false
Server:
Containers: 29
Running: 1
Paused: 0
Stopped: 28
Images: 87
Server Version: 19.03.4
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.0-11-amd64
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 25.52GiB
Name: jenkins-vm
ID: YIGQ:YOVJ:2Y7F:LM77:VHK6:ICMY:QDGA:5EFD:ZYDD:EQM5:DR77:DANT
Docker Root Dir: /data/var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
And as suggested by @jan-garaj, here is the result of docker run -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password mongo id:
uid=0(root) gid=0(root) groups=0(root)
What could be the reason for this failure?
You may have a problem with some security configuration. Check and compare the docker info outputs of the two machines. You may have user namespaces enabled (userns-remap), a special seccomp profile, SELinux policies, a weird storage driver, a full disk, ...
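A few quick checks covering most of those suspects (a sketch; the Docker root dir path is taken from the docker info output above):
docker info --format '{{.SecurityOptions}}'    # look for userns entries next to seccomp
getenforce                                     # SELinux mode, where SELinux is installed
df -h /data/var/lib/docker                     # free space under the Docker root dir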
It could also be because of the SELinux policy:
Edit the config file at /etc/selinux/config:
SELINUX=disabled
Reboot your system and try to run the image afterwards.

Failure to connect to configured mongo instance (Connection refused)

Based on this guide:
https://docs.opsmanager.mongodb.com/current/tutorial/install-simple-test-deployment/
I am installing MongoDB and MongoDB Manager. I have created a docker image for each application and started them on the same virtual network:
docker network create --driver bridge mongo-network
with:
MongoDB:
docker run -ti -d --network mongo-network -p 27017:27017 --name mongodb-container mongodb-image
docker exec -ti -u mongod mongodb-container "mongod --port 27017 --dbpath /data/appdb --logpath /data/appdb/mongodb.log --wiredTigerCacheSizeGB 1 --fork"
And verified that it's up and running with:
$ docker exec -ti -u mongod mongodb-container tail -f /data/appdb/mongodb.log
2019-04-21T15:26:05.208+0000 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
2019-04-21T15:26:05.208+0000 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
2019-04-21T15:26:05.208+0000 I CONTROL [initandlisten] ** Start the server with --bind_ip <address> to specify which IP
2019-04-21T15:26:05.208+0000 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
2019-04-21T15:26:05.208+0000 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
2019-04-21T15:26:05.208+0000 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
2019-04-18T06:23:35.268+0000 I CONTROL [initandlisten]
2019-04-18T06:23:35.269+0000 I STORAGE [initandlisten] createCollection: admin.system.version with provided UUID: c0736278-72ec-4dfc-893c-8105eefa0ba8
2019-04-18T06:23:35.320+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 4.0
2019-04-18T06:23:35.341+0000 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 397c17a3-3c5e-4605-b4dc-8a936dd9e40e
2019-04-18T06:23:35.394+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/appdb/diagnostic.data'
2019-04-18T06:23:35.396+0000 I NETWORK [initandlisten] waiting for connections on port 27017
2019-04-18T06:23:35.397+0000 I STORAGE [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: ac7bdb6e-4a60-430f-b1a4-34b09012e6da
2019-04-18T06:23:35.475+0000 I INDEX [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
2019-04-18T06:23:35.475+0000 I INDEX [LogicalSessionCacheRefresh] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2019-04-18T06:23:35.477+0000 I INDEX [LogicalSessionCacheRefresh] build index done. scanned 0 total records. 0 secs
MongoDB Manager:
docker run -ti -d --network mongo-network -p 8080:8080 --name mongodb-manager-container mongodb-manager-image
docker exec -ti -u root mongodb-manager-container "/opt/mongodb/mms/bin/mongodb-mms start"
Below is the error message:
Generating new Ops Manager private key...
Starting pre-flight checks
Failure to connect to configured mongo instance:Config{
loadBalance=false,
encryptedCredentials=false,
ssl='false',
dbNames=' [
mmsdb,
mmsdbprovisionlog,
mmsdbautomation,
mmsdbserverlog,
mmsdbpings,
mmsdbprofile,
mmsdbrrd,
mmsdbconfig,
mmsdblogcollection,
mmsdbjobs,
mmsdbagentlog,
mmsdbbilling,
backuplogs,
automationcore,
monitoringstatus,
mmsdbautomationlog,
automationstatus,
ndsstatus,
cloudconf,
backupdb,
mmsdbprovisioning,
mmsdbqueues
] ',
uri=mongodb://mongodb-container:27017/?maxPoolSize=150
}Error:Timed out after 30000 ms while waiting to connect. Client view of cluster state is{
type=UNKNOWN,
servers= [
{
address=mongodb-container:27017,
type=UNKNOWN,
state=CONNECTING,
exception= {
com.mongodb.MongoSocketOpenException:Exception opening socket
},
caused by {
java.net.ConnectException:Connection refused (Connection refused)
}
}
]
And for mongodb - based on suggestions below - I am now using:
/etc/mongod.conf:
# network interfaces
net:
  port: 27017
  #bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  bindIp: 0.0.0.0,::
and for MongoDB manager I am specifying the name of the mongodb container in:
/opt/mongodb/mms/conf/conf-mms.properties
#mongo.mongoUri=mongodb://127.0.0.1:27017/?maxPoolSize=150
#mongo.mongoUri=mongodb://0.0.0.0:27017/?maxPoolSize=150
mongo.mongoUri=mongodb://mongodb-container:27017/?maxPoolSize=150
I have verified that I can ping mongodb-container from mongodb-manager-container with:
docker exec -it -u root mongodb-manager-container bash
[root@e23a34bf2161 /]# ping mongodb-container
PING mongodb-container (172.18.0.2) 56(84) bytes of data.
64 bytes from mongodb-container.mongo-network (172.18.0.2): icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from mongodb-container.mongo-network (172.18.0.2): icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from mongodb-container.mongo-network (172.18.0.2): icmp_seq=3 ttl=64 time=0.052 ms
^C
--- mongodb-container ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2042ms
rtt min/avg/max/mdev = 0.052/0.062/0.077/0.013 ms
[root@e23a34bf2161 /]#
What am I missing?
EDIT:
Based on below suggestions I have now tried:
docker network create --driver bridge mongo-network
docker run -ti -d --network mongo-network -p 27017:27017 --name mongodb-container mongodb-image
# Copy modified version of mongod.conf file to container before starting mongodb
docker cp mongod.conf mongodb-container:/etc/mongod.conf
docker exec -ti -u mongod mongodb-container "mongod --port 27017 --dbpath /data/appdb --logpath /data/appdb/mongodb.log --wiredTigerCacheSizeGB 1 --fork"
docker run -it --rm --net container:mongodb-container nicolaka/netshoot ss -lnt
Which gives:
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.11:36001 0.0.0.0:*
LISTEN 0 128 127.0.0.1:27017 0.0.0.0:*
I'm not sure if this is expected/good output, nor why I needed to spin up a container from the nicolaka/netshoot image...
EDIT 2:
As suggested below, if I pass --bind_ip_all on the command line when starting mongod, it works:
docker exec -ti -u mongod mongodb-container "mongod --bind_ip_all --port 27017 --dbpath /data/appdb --logpath /data/appdb/mongodb.log --wiredTigerCacheSizeGB 1 --fork"
So it seems that when running as a docker container it completely ignores the /etc/mongod.conf file, and you need to specify all the options in the docker exec command instead :-(
DNS on the container name resolves to the container IP. To connect to mongo by that name, even from inside the container, you need to have mongo listening on all interfaces:
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0,::
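To verify the bind address took effect, the netshoot trick from the question works well (a sketch):
docker run -it --rm --net container:mongodb-container nicolaka/netshoot ss -lnt
# mongod should now show LISTEN on 0.0.0.0:27017 (and [::]:27017) instead of 127.0.0.1:27017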
The problem was that it was ignoring the configuration in the /etc/mongod.conf file. After googling, e.g.:
https://jira.mongodb.org/browse/SERVER-36572
I found that you need to pass the --config parameter to mongod to get it to read the mongod.conf file, e.g.:
mongod --config /etc/mongod.conf
and with docker:
docker exec -ti -u mongod mongodb-container "mongod --config /etc/mongod.conf"
After doing the above I can now get it to listen on all interfaces with the below configuration in /etc/mongod.conf:
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
You are running two separate containers and expect them to talk to each other over localhost? That is never going to work. You have to add --link mongodb-container:mongo to the second docker run command and then use the address mongodb://mongo:27017 in the manager container.
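A sketch of that suggestion (note: on a user-defined network like mongo-network above, the name mongodb-container already resolves via Docker's embedded DNS, so --link mainly matters on the default bridge):
docker run -ti -d --link mongodb-container:mongo -p 8080:8080 --name mongodb-manager-container mongodb-manager-image
# and in /opt/mongodb/mms/conf/conf-mms.properties:
# mongo.mongoUri=mongodb://mongo:27017/?maxPoolSize=150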

Docker [for mac] file system became read-only which breaks almost all features of docker

My Docker ran into an error state where I cannot use it anymore.
Output of docker system info:
Containers: 14
Running: 2
Paused: 0
Stopped: 12
Images: 61
Server Version: 18.03.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: error
NodeID:
Error: open /var/lib/docker/swarm/worker/tasks.db: read-only file system
Is Manager: false
Node Address: 192.168.65.3
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.87-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.952GiB
Name: linuxkit-025000000001
ID: MCSC:SFXH:R3JC:NU4D:OJ5V:K4B5:LPMJ:2BFL:LHT3:LYCI:XKY2:DTE6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: docker.for.mac.http.internal:3128
HTTPS Proxy: docker.for.mac.http.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
This behaviour occurred after I built the following Dockerfile:
FROM perl:5.20
RUN apt-get update && apt-get install -y libsoap-lite-perl \
&& rm -rf /var/lib/apt/lists/*
RUN cpan SOAP::LITE
The error message when I try to build an image, run a container, or remove an image is always similar to this:
Error: open /var/lib/docker/swarm/worker/tasks.db: read-only file system
For example, if I try to execute this command:
docker container run -it perl:5.20 bash
I get this error:
docker: Error response from daemon: mkdir /var/lib/docker/overlay2/1b966e163e500a8c78a64e8d0f14984b091c1c5fe188a60b8bd030672d3138d9-init: read-only file system.
How can I reset my docker so these errors go away?
Go to your Docker for Mac icon in the top right, click on it, and then click Restart.
After that, Docker works as expected.
This seems to be a temporary issue, since I cannot reproduce it after restarting Docker. My guess is that I had a network communication breakdown while Docker tried to download and install the packages from the Dockerfile.
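If you prefer the command line, the same restart can be done like this (a sketch for macOS):
osascript -e 'quit app "Docker"'   # ask Docker Desktop to quit cleanly
open -a Docker                     # start it again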