not showing kubernetes pods in podman - kubernetes

After installing Kubernetes 1.24, I would like to manage the containers using CRI-O and Podman.
I installed Podman so I could use the podman commit function. I can see the images, but I can't see the containers. As far as I know, I need to change the runtime Podman uses; how can I change it?
Additionally, if I check the containers using the crictl command, I do see the Kubernetes containers.
My OS is Ubuntu 20.04.
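As far as I understand, the OCI runtime Podman uses is configured in /etc/containers/containers.conf (or ~/.config/containers/containers.conf for rootless use). This is only a sketch of my understanding of the default layout, not my actual file:

# /etc/containers/containers.conf (sketch, assuming default paths)
[engine]
# OCI runtime Podman calls; must have an entry in [engine.runtimes]
runtime = "crun"

[engine.runtimes]
crun = ["/usr/bin/crun"]
runc = ["/usr/bin/runc", "/usr/sbin/runc"]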
crictl ps -a command result
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
c798099462c9b 295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369 33 minutes ago Running nginx 0 e1bd39fd2b241 nginx-deployment-6595874d85-wksxq
e2da3de955397 295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369 33 minutes ago Running nginx 0 15bf749dc406f nginx-deployment-6595874d85-8zbpx
a58db13acc2bf 295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369 33 minutes ago Running nginx 0 f5d775b0b78c0 nginx-deployment-6595874d85-pspvg
75573509b4b1e docker.io/kubernetesui/dashboard@sha256:1b020f566b5d65a0273c3f4ed16f37dedcb95ee2c9fa6f1c42424ec10b2fd2d7 3 days ago Running kubernetes-dashboard 0 bfb3ade50b952 kubernetes-dashboard-f8bb6d75-9g6p7
2ab98c33f2713 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9 3 days ago Running dashboard-metrics-scraper 0 23da9b06873c9 dashboard-metrics-scraper-7bfdf779ff-qrdcj
ac41ff1d3a335 docker.io/calico/kube-controllers@sha256:57c40fdfb86dce269a8f93b4f5545b23b7ee9ba36d62e67e7ce367df8d753887 3 days ago Running calico-kube-controllers 0 fac206793f1d5 calico-kube-controllers-6766647d54-sqjqx
0ec5b4d17ed3b docker.io/calico/node@sha256:0a430186f9267218aed3e419a541e306eba3d0bbe5cf4f6a8b700d35d7a4f7fa 3 days ago Running calico-node 0 1a63a00117c7f calico-node-mkvpg
6ec33c0ece470 a87d3f6f1b8fdc077e74ad6874301f57490f5b38cc731d5d6f5803f36837b4b1 3 days ago Exited install-cni 0 1a63a00117c7f calico-node-mkvpg
8dd11d084b8f3 docker.io/calico/cni@sha256:7c7bd52f3c72917c28a6715740b4710e9a345a26a7d90556a471a3eb977d8cf7 3 days ago Exited upgrade-ipam 0 1a63a00117c7f calico-node-mkvpg
0ce4eaeb4d27b k8s.gcr.io/kube-proxy@sha256:058460abb91c7c867f61f5a8a4c29d2cda605d8ff1dd389b1f7bad6c1db60a5b 3 days ago Running kube-proxy 0
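For context, crictl is pointed at CRI-O through its socket; as far as I know this lives in /etc/crictl.yaml and looks roughly like this (assuming CRI-O's default socket path):

# /etc/crictl.yaml (sketch)
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
timeout: 10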
podman ps -a command result
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
podman system info command result
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.1.2, commit: '
  cpus: 4
  distribution:
    codename: focal
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: realworker
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.4.0-122-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 4012826624
  memTotal: 16728518656
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version UNKNOWN
      commit: ea1fe3938eefa14eb707f1d22adff4db670645d6
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.8
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 0
  swapTotal: 0
  uptime: 95h 1m 37.73s (Approximately 3.96 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 11
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.4.2
  Built: 0
  BuiltTime: Thu Jan 1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.4.2

Related

could not write to mounted volume when using non-root user (same when using fsGroup)

I hope I am missing something, but I could not find an answer anywhere. I am using a Rook storage class resource to provision PVs that are later attached to a Pod. In the example below I am adding several volumes (an emptyDir, an ephemeral volume, and PVC-backed volumes from the Rook cluster).
test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
  namespace: misc
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    # runAsNonRoot: true
  volumes:
  - name: keymanager-keys
    persistentVolumeClaim:
      claimName: keymanager-pvc
      readOnly: false
  - name: keymanager-keyblock
    persistentVolumeClaim:
      claimName: keymanager-block-pvc
      readOnly: false
  - name: keymanager-key-n
    persistentVolumeClaim:
      claimName: keymanager-ext4x-pvc
      readOnly: false
  - name: local-keys
    emptyDir: {}
  - name: ephemeral-storage
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteMany"]
          storageClassName: "ceph-filesystem"
          resources:
            requests:
              storage: 10Mi
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    # securityContext:
    #   privileged: true
    volumeMounts:
    - name: keymanager-keys
      mountPath: /data/pvc-filesystem-preconfigured
    - name: local-keys
      mountPath: /data/emptydir
    - name: ephemeral-storage
      mountPath: /data/pvc-filesystem-ephemeral-storage
    - name: keymanager-keyblock
      mountPath: /data/pvc-block-storage
    - name: keymanager-key-n
      mountPath: /data/testing-with-fstypes
Pod information:
kubectl get pods -n misc
NAME READY STATUS RESTARTS AGE
security-context-demo 1/1 Running 15 (30m ago) 15h
kubectl exec -it -n misc security-context-demo -- sh
/ $ ls -l /data/
total 5
drwxrwsrwx 2 root 2000 4096 Feb 13 17:31 emptydir
drwxrwsr-x 3 root 2000 1024 Feb 13 07:08 pvc-block-storage
drwxr-xr-x 2 root root 0 Feb 13 17:31 pvc-filesystem-ephemeral-storage
drwxr-xr-x 2 root root 0 Feb 9 11:29 pvc-filesystem-preconfigured
drwxr-xr-x 2 root root 0 Feb 13 08:28 testing-with-fstypes
Only the /data/emptydir and /data/pvc-block-storage directories are mounted with group 2000 as expected.
The volumes that use Rook CephFS ignore the fsGroup field in the PodSecurityContext. If the Rook filesystem supported fsGroup, this should not be happening.
Expected behavior:
I would like to see the mounted volumes owned by a specific group (which will be used by unprivileged users). I don't know whether I am doing something wrong or what is happening. I expect to be able to write to the mounted volumes as the application user.
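One thing I still plan to check (not sure whether it is relevant) is whether the CephFS CSI driver advertises fsGroup support at all, since fsGroup handling depends on the driver's fsGroupPolicy. The exact driver names depend on the Rook/Ceph CSI installation:

kubectl get csidriver
kubectl get csidriver -o custom-columns=NAME:.metadata.name,FSGROUPPOLICY:.spec.fsGroupPolicy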
Environment:
OS (e.g. from /etc/os-release): Debian GNU/Linux 11 (bullseye)
Kernel (e.g. uname -a): Linux worker1 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64 GNU/Linux
Rook version (use rook version inside of a Rook Pod): v1.10.6
Storage backend version (e.g. for ceph do ceph -v): ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
Kubernetes version (use kubectl version): v1.24.1
cluster:
  id: 25e0134a-515e-4f62-8eec-ae5d8cb3e650
  health: HEALTH_OK
services:
  mon: 3 daemons, quorum a,c,d (age 7w)
  mgr: a(active, since 10w)
  mds: 1/1 daemons up, 1 hot standby
  osd: 3 osds: 3 up (since 11w), 3 in (since 11w)
  rgw: 1 daemon active (1 hosts, 1 zones)
data:
  volumes: 1/1 healthy
  pools: 12 pools, 185 pgs
  objects: 421 objects, 1.2 MiB
  usage: 1.5 GiB used, 249 GiB / 250 GiB avail
  pgs: 185 active+clean
If more information is needed, let me know and I will update this post. If it helps, I can also add the PV and PVC configs that are created after Pod creation.

File permission issue with mongo docker image

I'm trying to instantiate a MongoDB Docker image using the following command:
docker run -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password mongo
The command fails instantly because of a Permission denied error:
2019-11-12T20:16:29.503+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/docker-entrypoint-temp-mongod.pid: Permission denied
The weird thing is that the same command works fine on some other machines that have the same users and groups. The only thing that differs is the Docker version.
I don't understand why the mongo instance does not run as I do not have any volumes or user specified on the command line.
Here is my docker info
Client:
Debug Mode: false
Server:
Containers: 29
Running: 1
Paused: 0
Stopped: 28
Images: 87
Server Version: 19.03.4
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.0-11-amd64
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 25.52GiB
Name: jenkins-vm
ID: YIGQ:YOVJ:2Y7F:LM77:VHK6:ICMY:QDGA:5EFD:ZYDD:EQM5:DR77:DANT
Docker Root Dir: /data/var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
And as suggested by @jan-garaj, here is the result of docker run -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password mongo id: uid=0(root) gid=0(root) groups=0(root)
What could be the reason for this failure?
You may have a problem with some security configuration. Check and compare the docker info outputs. You may have enabled user namespaces (userns-remap), a special seccomp profile, SELinux policies, an unusual storage driver, a full disk, ...
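For example, a quick way to compare the relevant settings on the working and failing machines (assuming default file locations) is:

docker info --format '{{.SecurityOptions}}'
cat /etc/docker/daemon.json    # look for "userns-remap" or a custom "seccomp-profile"
df -h /data/var/lib/docker     # the Docker root dir from your docker info output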
It could be because of the SELinux policy.
Edit the config file at /etc/selinux/config and set:
SELINUX=disabled
Then reboot your system and try to run the image again.
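If you want to test this theory without a reboot first, you can temporarily switch SELinux to permissive mode (assuming it is installed and enforcing at all):

getenforce          # prints Enforcing, Permissive, or Disabled
sudo setenforce 0   # permissive until the next reboot
docker run -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password mongo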

Postgres not starting on swarm server reboot

I'm trying to run an app using Docker Swarm. The app is designed to run completely locally on a single computer using Docker Swarm.
If I SSH into the server and run a docker stack deploy everything works, as seen here running docker service ls:
When this deployment works, the services generally go live in this order:
Registry (a private registry)
Main (an Nginx service) and Postgres
All other services in random order (all Node apps)
The problem I am having is on reboot. When I reboot the server, I pretty consistently have the issue of the services failing with this result:
I am getting some errors that could be helpful.
In Postgres: docker service logs APP_NAME_postgres -f:
In Docker logs: sudo journalctl -fu docker.service
Update: June 5th, 2019
Also, by request from a GitHub issue, here is the docker version output:
Client:
Version: 18.09.5
API version: 1.39
Go version: go1.10.8
Git commit: e8ff056
Built: Thu Apr 11 04:43:57 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.5
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: e8ff056
Built: Thu Apr 11 04:10:53 2019
OS/Arch: linux/amd64
Experimental: false
And docker info output:
Containers: 28
Running: 9
Paused: 0
Stopped: 19
Images: 14
Server Version: 18.09.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: pbouae9n1qnezcq2y09m7yn43
Is Manager: true
ClusterID: nq9095ldyeq5ydbsqvwpgdw1z
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 1
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 192.168.0.47
Manager Addresses:
192.168.0.47:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-50-generic
Operating System: Ubuntu 18.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.68GiB
Name: oeemaster
ID: 76LH:BH65:CFLT:FJOZ:NCZT:VJBM:2T57:UMAL:3PVC:OOXO:EBSZ:OIVH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: No swap limit support
And finally, my Docker Swarm stack/compose file:
secrets:
  jwt-secret:
    external: true
  pg-db:
    external: true
  pg-host:
    external: true
  pg-pass:
    external: true
  pg-user:
    external: true
  ssl_dhparam:
    external: true
services:
  accounts:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      JWT_SECRET_FILE: /run/secrets/jwt-secret
      PG_DB_FILE: /run/secrets/pg-db
      PG_HOST_FILE: /run/secrets/pg-host
      PG_PASS_FILE: /run/secrets/pg-pass
      PG_USER_FILE: /run/secrets/pg-user
    image: 127.0.0.1:5000/local-oee-master-accounts:v0.8.0
    secrets:
      - source: jwt-secret
      - source: pg-db
      - source: pg-host
      - source: pg-pass
      - source: pg-user
  graphs:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      PG_DB_FILE: /run/secrets/pg-db
      PG_HOST_FILE: /run/secrets/pg-host
      PG_PASS_FILE: /run/secrets/pg-pass
      PG_USER_FILE: /run/secrets/pg-user
    image: 127.0.0.1:5000/local-oee-master-graphs:v0.8.0
    secrets:
      - source: pg-db
      - source: pg-host
      - source: pg-pass
      - source: pg-user
  health:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      PG_DB_FILE: /run/secrets/pg-db
      PG_HOST_FILE: /run/secrets/pg-host
      PG_PASS_FILE: /run/secrets/pg-pass
      PG_USER_FILE: /run/secrets/pg-user
    image: 127.0.0.1:5000/local-oee-master-health:v0.8.0
    secrets:
      - source: pg-db
      - source: pg-host
      - source: pg-pass
      - source: pg-user
  live-data:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    image: 127.0.0.1:5000/local-oee-master-live-data:v0.8.0
    ports:
      - published: 32000
        target: 80
  main:
    depends_on:
      - accounts
      - graphs
      - health
      - live-data
      - point-logs
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      MAIN_CONFIG_FILE: nginx.local.conf
    image: 127.0.0.1:5000/local-oee-master-nginx:v0.8.0
    ports:
      - published: 80
        target: 80
      - published: 443
        target: 443
  modbus-logger:
    depends_on:
      - point-logs
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      CONTROLLER_ADDRESS: 192.168.2.100
      SERVER_ADDRESS: http://point-logs
    image: 127.0.0.1:5000/local-oee-master-modbus-logger:v0.8.0
  point-logs:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      ENV_TYPE: local
      PG_DB_FILE: /run/secrets/pg-db
      PG_HOST_FILE: /run/secrets/pg-host
      PG_PASS_FILE: /run/secrets/pg-pass
      PG_USER_FILE: /run/secrets/pg-user
    image: 127.0.0.1:5000/local-oee-master-point-logs:v0.8.0
    secrets:
      - source: pg-db
      - source: pg-host
      - source: pg-pass
      - source: pg-user
  postgres:
    depends_on:
      - registry
    deploy:
      restart_policy:
        condition: on-failure
        window: 120s
    environment:
      POSTGRES_PASSWORD: password
    image: 127.0.0.1:5000/local-oee-master-postgres:v0.8.0
    ports:
      - published: 5432
        target: 5432
    volumes:
      - /media/db_main/postgres_oee_master:/var/lib/postgresql/data:rw
  registry:
    deploy:
      restart_policy:
        condition: on-failure
    image: registry:2
    ports:
      - mode: host
        published: 5000
        target: 5000
    volumes:
      - /mnt/registry:/var/lib/registry:rw
version: '3.2'
Things I've tried
Action: Added restart_policy > window: 120s
Result: No Effect
Action: Postgres restart_policy > condition: none & crontab @reboot redeploy
Result: No Effect
Action: Set all containers stop_grace_period: 2m
Result: No Effect
Current Workaround
Currently, I have hacked together a solution that is working just so I can move on to the next thing. I wrote a shell script called recreate.sh that kills the failed first-boot version of the stack, waits for it to tear down, and then "manually" runs docker stack deploy again. I then set the script to run at boot with crontab @reboot. This works for shutdowns and reboots, but I don't accept it as the proper answer, so I won't add it as one. A rough sketch of it is below.
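Roughly, the script looks like this; the stack name and stack file path are placeholders, not my real values:

#!/bin/sh
# recreate.sh - workaround run at boot via crontab @reboot
STACK=oee-master                  # placeholder stack name
STACK_FILE=/home/user/stack.yml   # placeholder path to the stack file

docker stack rm "$STACK"
# give the failed first-boot deployment time to tear down
sleep 60
docker stack deploy -c "$STACK_FILE" "$STACK"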
It looks to me like you need to check who/what kills the postgres service. From the logs you posted, it seems that postgres receives a smart shutdown signal and then stops gracefully. Your stack file has the restart policy set to "on-failure", and since the postgres process stops gracefully (exit code 0), Docker does not consider this a failure and, as instructed, does not restart it.
In conclusion, I'd recommend changing the restart policy from "on-failure" to "any".
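For the postgres service that would look something like this (only the relevant fragment of your stack file shown):

postgres:
  deploy:
    restart_policy:
      condition: any
      window: 120s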
Also, keep in mind that the "depends_on" settings you use are ignored in swarm mode; your services/images need their own way of ensuring the proper startup order, or must be able to cope while dependent services are not up yet.
There's one more thing you could try: healthchecks. Perhaps your postgres base image has a healthcheck defined and it's terminating the container by sending a kill signal to it. As written earlier, postgres then shuts down gracefully, there's no error exit code, and the restart policy does not trigger. Try disabling the healthcheck in the YAML, or check the Dockerfiles for a HEALTHCHECK directive and figure out why it triggers.
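In the stack file, disabling an inherited healthcheck would look roughly like this:

postgres:
  image: 127.0.0.1:5000/local-oee-master-postgres:v0.8.0
  healthcheck:
    disable: true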

scaling a service with docker compose

I am facing issues with scaling a service using docker compose and need help.
Below is what I have:
My docker-compose.yml
web:
  image: nginx
The command that I use to run:
docker-compose -f compose/nginx-docker-compose.yml scale web=3 up -d
The output of the command:
Creating and starting compose_web_1 ... done
Creating and starting compose_web_2 ... done
Creating and starting compose_web_3 ... done
ERROR: Arguments to scale should be in the form service=num
The output of docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fead93372574 nginx "nginx -g 'daemon off" 6 seconds ago Up 4 seconds 80/tcp, 443/tcp compose_web_3
de110ae9606d nginx "nginx -g 'daemon off" 6 seconds ago Up 4 seconds 80/tcp, 443/tcp compose_web_1
4d7f8fd39ccd nginx "nginx -g 'daemon off" 6 seconds ago Up 4 seconds 80/tcp, 443/tcp compose_web_2
I should also mention that when I do not use the scale web=3 option, the service comes up just fine.
docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:51:19 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:51:19 2016
OS/Arch: linux/amd64
docker-compose version
docker-compose version 1.8.0, build f3628c7
docker-py version: 1.9.0
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
Let me know if anybody else has faced this and has found a solution.
Thanks.
You should remove the "up" if you use the scale option. For example: docker-compose scale web=3.
See the scale documentation on the Docker site.
In your case:
docker-compose -f compose/nginx-docker-compose.yml scale web=3 up -d
The command may "think" that up is the name of a service to be scaled (it would have to be up=3), so it throws the error you saw.
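As a side note, newer Compose releases (1.13 and later, if I remember correctly) let you scale as part of up instead of using the separate scale command:

docker-compose -f compose/nginx-docker-compose.yml up -d --scale web=3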

can not connect to docker container mapping port

I use the mongo image in Docker, but I cannot connect to port 20217.
docker@default:~$ docker ps
The port info shows: 0.0.0.0:20217->20217/tcp, 27017/tcp
but,
gilbertdeMacBook-Pro:~ gilbert$ lsof -i tcp:20217
there is no PID,
gilbertdeMacBook-Pro:~ gilbert$ docker info
Containers: 3
Images: 43
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 50
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.13-boot2docker
Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
CPUs: 1
Total Memory: 1.956 GiB
Name: default
ID: MRAZ:ZG5E:HDMY:EJNQ:HFL4:PW6Y:AXIS:6JFL:PFI5:GBAY:5SMF:NYQR
Debug mode (server): true
File Descriptors: 25
Goroutines: 44
System Time: 2016-01-27T14:53:52.005531869Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: gilbertgan
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
I found this is because, on macOS, docker-machine runs inside a VM, so we need to use the VM's IP when connecting to the container.
The IP can be shown with: docker-machine ls
Your docker container maps port 20217, which isn't the MongoDB default port. The correct port is 27017. And gilbert_gan is right as well: when running Docker on docker-machine, the Docker host is not localhost but rather the virtual machine under docker-machine's control.
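For example, assuming you actually want MongoDB on its default port, something like this should work (a sketch; the container name is arbitrary):

docker run -d -p 27017:27017 --name some-mongo mongo
docker-machine ip default                          # IP of the boot2docker VM
mongo --host $(docker-machine ip default) --port 27017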