I have a MongoDB Docker container running on a remote server. What is the correct way to access its command line from my local machine?
If I log in to the remote host, I can access it with:
$ docker exec -it mongo-dev mongo ccc-mongo
but I am unsure how to do this from my local machine.
I tried this:
$ ssh -L 4321:localhost:27017 khine@ccc1 -f -N
Are you sure you want to continue connecting (yes/no)? yes
khine@ccc1's password:
khine@dhegdheer:~/Sandboxes/$ mongo --port 4321
MongoDB shell version: 2.4.9
connecting to: 127.0.0.1:4321/test
channel 2: open failed: connect failed: Connection refused
Wed Sep 9 15:36:44.386 DBClientCursor::init call() failed
Wed Sep 9 15:36:44.388 Error: DBClientBase::findN: transport error: 127.0.0.1:4321 ns: admin.$cmd query: { whatsmyuri: 1 } at src/mongo/shell/mongo.js:147
exception: connect failed
On my remote machine I have 3 mongo instances running:
khine@ccc1 /ccc $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
22a32b4f6a1d redis:2.8 "/entrypoint.sh redi 7 days ago Up 7 days 6379/tcp redis-web
167b022ab793 mongo:2.4 "/entrypoint.sh mong 7 days ago Up 7 days 27017/tcp mongo-web
ab84ea6cb44a redis:2.8 "/entrypoint.sh redi 2 weeks ago Up 2 weeks 6379/tcp redis-www
04dcc306af04 redis:2.8 "/entrypoint.sh redi 2 weeks ago Up 2 weeks 6379/tcp redis-dev
02c0c18307dc mongo:2.4 "/entrypoint.sh mong 2 weeks ago Up 2 weeks 27017/tcp mongo-www
61df69ec7edb mongo:2.4 "/entrypoint.sh mong 2 weeks ago Up 2 weeks 27017/tcp mongo-dev
Running docker inspect, I get this:
khine@ccc1 /ccc $ docker inspect 61df69ec7edb
[{
"AppArmorProfile": "",
"Args": [
"mongod"
],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"mongod"
],
"CpuShares": 0,
"Cpuset": "",
"Domainname": "",
"Entrypoint": [
"/entrypoint.sh"
],
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MONGO_VERSION=2.4.14"
],
"ExposedPorts": {
"27017/tcp": {}
},
"Hostname": "61df69ec7edb",
"Image": "mongo:2.4",
"HostConfig": {
"Binds": [
"/ccc/mongo-data/dev:/data/db"
],
"CapAdd": null,
"CapDrop": null,
"CgroupParent": "",
"Name": "/mongo-dev",
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.34",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "fe80::42:acff:fe11:22",
"LinkLocalIPv6PrefixLen": 64,
"MacAddress": "02:42:ac:11:00:22",
"PortMapping": null,
"Ports": {
"27017/tcp": null
}
},
"Path": "/entrypoint.sh",
"ProcessLabel": "",
"ResolvConfPath": "/var/lib/docker/containers/61df69ec7edb6995f06d797f5b2eed420d0c4daa4cd089c3b9174900d72d0b13/resolv.conf",
"RestartCount": 0,
"State": {
"Dead": false,
"Error": "",
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"OOMKilled": false,
"Paused": false,
"Pid": 15346,
"Restarting": false,
"Running": true,
"StartedAt": "2015-08-26T06:01:55.361817334Z"
},
"Volumes": {
"/data/db": "/ccc/mongo-data/dev"
},
"VolumesRW": {
"/data/db": true
}
}
]
If I add the IP address of the instance instead, I get this warning:
Warning: remote port forwarding failed for listen port 4321
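To be clear about the end goal: ideally I would like a single command from my local machine that drops me straight into the container's mongo shell, something like this sketch (assuming plain SSH access to ccc1 works, so this is only the shape of what I am after):
$ ssh -t khine@ccc1 docker exec -it mongo-dev mongo ccc-mongo
or, failing that, a tunnel that lets a local mongo client reach the mongo-dev container.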
Any advice is much appreciated.
Related
I have the following docker-compose file:
services:
  pgdatabase:
    image: postgres:13
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=ny_taxi
    volumes:
      - "./data:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@admin.com
      - PGADMIN_DEFAULT_PASSWORD=root
    volumes:
      - "./data_pgadmin:/var/lib/pgadmin"
    ports:
      - "8080:80"
I'm trying to connect to postgres using pgadmin but I'm getting the following error:
Unable to connect to server: could not translate host name "pgdatabase" to address: Name does not resolve
Running docker network ls I get:
NAME DRIVER SCOPE
bridge bridge local
docker-sql-pg_default bridge local
host host local
none null local
Then running docker network inspect docker-sql-pg_default I get
[
{
"Name": "docker-sql-pg_default",
"Id": "bfee2f08620b5ffc1f8e10d8bed65c4d03a98a470deb8b987c4e52a9de27c3db",
"Created": "2023-01-24T17:57:27.831702189Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.24.0.0/16",
"Gateway": "172.24.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"8f53be84a95c9c0591df6cc6edb72d4ca070243c3c067ab2fb14c2094b23bcee": {
"Name": "docker-sql-pg-pgdatabase-1",
"EndpointID": "7f3ddb29b000bc4cfda9c54a4f13e0aa30f1e3f8e5cc1a8ba91cee840c16cd60",
"MacAddress": "02:42:ac:18:00:02",
"IPv4Address": "172.24.0.2/16",
"IPv6Address": ""
},
"bf2eb29b73fe9e49f4bef668a1f70ac2c7e9196b13350f42c28337a47fcd71f4": {
"Name": "docker-sql-pg-pgadmin-1",
"EndpointID": "b3a9504d75e11aa0f08f6a2b5c9c2f660438e23f0d1dd5d7cf4023a5316961d2",
"MacAddress": "02:42:ac:18:00:03",
"IPv4Address": "172.24.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "docker-sql-pg",
"com.docker.compose.version": "2.13.0"
}
}
]
I tried to connect to the gateway IP 172.24.0.1 and to the postgres container's IP 172.24.0.2, but I got a timeout error. Why isn't my network working?
Basically, I solved my problem using the following steps (roughly the commands sketched below):
First I accessed the pgadmin container and used ping to verify that pgadmin could reach postgres.
Since pgadmin was reaching postgres, I ran sudo netstat -tulpn | grep LISTEN on my host machine to check which ports were in use. Surprisingly, I had two instances of pgadmin running on 8080 (one of them bugged).
I used docker-compose down to stop the services and docker system prune to delete all images/containers...
I checked the used ports again and one pgadmin was still running on 8080.
I used pidof to find the PID of the running (bugged) pgadmin.
Then I used kill -9 to kill the process.
Last, I ran docker-compose up -d and pgadmin was able to communicate with postgres via the pgadmin interface.
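A rough sketch of those steps as shell commands (the pgadmin container and process names are assumptions based on the network inspect output above):
docker exec -it docker-sql-pg-pgadmin-1 ping -c 3 pgdatabase   # check pgadmin can resolve and reach postgres
sudo netstat -tulpn | grep LISTEN                              # see which host ports are in use
docker-compose down && docker system prune                     # stop the stack and clean up
sudo netstat -tulpn | grep 8080                                # the stale pgadmin still listening on 8080
pidof <stale-pgadmin-process>                                  # find its PID (process name not shown above)
sudo kill -9 <pid>                                             # kill it
docker-compose up -d                                           # bring the stack back up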
I've successfully created 2 containers for testing:
docker container run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=password --hostname postgres --network postgres-net -d -v postgres-vol:/var/lib/postgresql/data postgis/postgis
docker container run --name pgadmin4 -p 5050:80 -v pgadmin4:/var/lib/pgadmin -e PGADMIN_DEFAULT_EMAIL=me@gmail.com -e PGADMIN_DEFAULT_PASSWORD=password --hostname pgadmin4 --network postgres-net --detach dpage/pgadmin4
Both are on the bridge network postgres-net, and a named volume has been created for each of them: pgadmin4 and postgres-vol.
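(For completeness, the network and the named volumes were created beforehand with roughly the following; the exact commands aren't shown above, so treat this as a sketch, and docker run would also auto-create missing named volumes:)
docker network create postgres-net
docker volume create postgres-vol
docker volume create pgadmin4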
❯ docker container inspect postgres --format '{{json .NetworkSettings}}' | jq
{
"Bridge": "",
"SandboxID": "4a06989f7e03c06b89956681e0f3dd4c400cbeba248d0f418ddf03c8b3e5984e",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5432/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5432"
}
]
},
"Networks": {
"postgres-net": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"abbecb6784e7",
"postgres"
],
"NetworkID": "15baa0bcadc284342cfd1afde7e7800d1c7aab1045b4cbbca7692293d88cb75a",
"EndpointID": "7afbf3f5ea0f0bdc3396aee45e0158caf784bb5f371ab67fc847a5fc72e85d56",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "xxxxxx",
"DriverOpts": null
}
}
}
For connecting pgadmin to postgres, I just needed to reference the postgres hostname and it worked.
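For reference, the server registration in pgadmin was roughly the following (the username being the postgres image default is an assumption here):
Host name/address: postgres
Port: 5432
Username: postgres
Password: password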
Then, I decided to move on to compose ...
PGADMIN_EMAIL=me@gmail.com PGADMIN_PASSWORD=password PGPASSWORD=password docker compose -f docker/docker-compose.yml up
...using this compose file:
services:
  postgres:
    image: postgis/postgis:latest
    hostname: postgres
    ports:
      - '5432:5432'
    networks:
      - gw-net
    environment:
      - POSTGRES_PASSWORD=${PGPASSWORD}
    volumes:
      - postgres-vol:/var/lib/postgresql/data
  pgadmin4:
    image: dpage/pgadmin4:latest
    hostname: pgadmin4
    ports:
      - '5050:80'
    networks:
      - postgres-net
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_PASSWORD}
    volumes:
      - pgadmin4:/var/lib/pgadmin
networks:
  postgres-net: {}
volumes:
  pgadmin4:
    external: true
  postgres-vol:
    external: true
For whatever reason, the pgadmin4-1 container is not able to connect to the postgres-1 container.
{
"Bridge": "",
"SandboxID": "7550d2c28fbc07ab33769c2255aa38ee0ec0c713257aa777e4e9986fd69715fc",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5432/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5432"
}
]
},
"SandboxKey": "/var/run/docker/netns/7550d2c28fbc",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"docker_gw-net": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"docker-postgres-1",
"postgres",
"03ea9d1f8933"
],
"NetworkID": "591f1440bf5a4b32e3e348b1eccc9e025cb8da05ed5a6423a7768ae9daf969db",
"EndpointID": "f4e78d0b454c5eee9537837464d68006e3bee0799d366415949591db2ff3ed26",
"Gateway": "172.21.0.1",
"IPAddress": "172.21.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "xxxxxxxx",
"DriverOpts": null
}
}
}
I get this error:
Unable to connect to server:
could not translate host name "docker-postgres-1" to address: Name does not resolve
I also tried using "postgres" without success. I can see that the maintenance database I've specified exists and that the user exists as well.
I am trying to containerize the Atlassian tool set (Jira, Bitbucket, Confluence, Bamboo) and back it with PostgreSQL.
I am a Docker newbie, although I learned a lot today alone...
TL;DR - I cannot get the Bitbucket wizard to accept the connection to PostgreSQL. I also do not have access to Kubernetes.
Note: I did pass --add-host=host.docker.internal:host-gateway to both containers. IS THIS A PROBLEM?!
I created and started the two necessary containers
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bba43f9e9aa6 atlassian/bitbucket "/usr/bin/tini -- /e…" 50 minutes ago Up 50 minutes 0.0.0.0:7990->7990/tcp, :::7990->7990/tcp, 0.0.0.0:7999->7999/tcp, :::7999->7999/tcp bitbucket-docker
050df8a05cce postgres "docker-entrypoint.s…" 54 minutes ago Up 54 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp postgres-docker
I created a Docker network and connected both containers to it.
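That setup would have looked roughly like this (a sketch using the container names from the inspect output below; the containers could equally have been started with --network atlassian-network):
$ sudo docker network create atlassian-network
$ sudo docker network connect atlassian-network postgres-docker
$ sudo docker network connect atlassian-network bitbucket-docker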
[erik@localhost volumes]$ sudo docker network inspect atlassian-network
[
{
"Name": "atlassian-network",
"Id": "eb45fe5a08280ea0aacb9ee107edf26b9df690398d8e7e0db2d188662120d69a",
"Created": "2022-05-13T12:31:30.417700947-04:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"050df8a05cce53044cba9ab9b2aab80088c264228fc9e357077d60209db59e02": {
"Name": "postgres-docker",
"EndpointID": "1250dd25a1dc39faf9f56d2cb70b0b21922068840097c17c213b944b3d1e952b",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"bba43f9e9aa6e677eb970734a2d16e955ceeff2f0e2ae328f50d377f74cd6a53": {
"Name": "bitbucket-docker",
"EndpointID": "ddcb259a4f5f217f144aa09fb99f059e9000a0c7afdcf8c2f0f4979479f7a5c4",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
I used the postgres client to test the connection from the Bitbucket container (bba) to the Postgres container (050):
root@bba43f9e9aa6:/var/atlassian/application-data/bitbucket# pg_isready -d bitbucket -h postgres-docker -p 5432 -U atlassian_user
postgres-docker:5432 - accepting connections
**************************************
The Bitbucket wizard (on 172.18.0.3:7990)
The configuration entered is not valid. A database connection could not be established. Please check your configuration and try again.
**************************************
Database -> External
Type -> PostgreSQL
Host -> postgres-docker
Port -> 5432
DB Name -> bitbucket
DB User -> atlassian_user
DB Pass -> pass
Some more debugging...
root@050df8a05cce:/# psql bitbucket -U atlassian_user
psql (14.2 (Debian 14.2-1.pgdg110+1))
Type "help" for help.
bitbucket=# \conninfo
You are connected to database "bitbucket" as user "atlassian_user" via socket in "/var/run/postgresql" at port "5432".
root@050df8a05cce:/# ping bitbucket-docker
PING bitbucket-docker (172.18.0.3) 56(84) bytes of data.
64 bytes from bitbucket-docker.atlassian-network (172.18.0.3): icmp_seq=1 ttl=64 time=0.519 ms
64 bytes from bitbucket-docker.atlassian-network (172.18.0.3): icmp_seq=2 ttl=64 time=0.188 ms
64 bytes from bitbucket-docker.atlassian-network (172.18.0.3): icmp_seq=3 ttl=64 time=0.267 ms
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bba43f9e9aa6 atlassian/bitbucket "/usr/bin/tini -- /e…" About an hour ago Up About an hour 0.0.0.0:7990->7990/tcp, :::7990->7990/tcp, 0.0.0.0:7999->7999/tcp, :::7999->7999/tcp bitbucket-docker
050df8a05cce postgres "docker-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp postgres-docker
Whatever image I try to run, the behavior is always the same: "Exited (139)".
OS: CentOS 8 with podman running inside an Azure VM. The CentOS image is the one provided by Azure when creating a VM.
VM: Azure B2S Gen 2 | 2vCPU(s) | 4 GiB RAM | 8 GiB SSD
I paste below the exact extract from the terminal:
pull
$ podman pull fedora
Trying to pull registry.access.redhat.com/fedora...
name unknown: Repo not found
Trying to pull registry.redhat.io/fedora...
unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Trying to pull docker.io/library/fedora...
Getting image source signatures
Copying blob ae7b613df528 done
Copying config b3048463dc done
Writing manifest to image destination
Storing signatures
b3048463dcefbe4920ef2ae1af43171c9695e2077f315b2bc12ed0f6f67c86c7
run
$ podman run --rm fedora /bin/echo "Hello Geeks! Welcome to Podman"
ps
$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
feb43e01e777 docker.io/library/ubuntu:latest bash 3 minutes ago Exited (139) 3 minutes ago magical_carson
inspect
$ podman inspect feb43e01e777
[
{
"Id": "feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac",
"Created": "2020-12-10T11:35:16.863809294Z",
"Path": "bash",
"Args": [
"bash"
],
"State": {
"OciVersion": "1.0.2-dev",
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 139,
"Error": "",
"StartedAt": "2020-12-10T11:35:17.280743295Z",
"FinishedAt": "2020-12-10T11:35:17.280874897Z",
"Healthcheck": {
"Status": "",
"FailingStreak": 0,
"Log": null
}
},
"Image": "f643c72bc25212974c16f3348b3a898b1ec1eb13ec1539e10a103e6e217eb2f1",
"ImageName": "docker.io/library/ubuntu:latest",
"Rootfs": "",
"Pod": "",
"ResolvConfPath": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/resolv.conf",
"HostnamePath": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/hostname",
"HostsPath": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/hosts",
"StaticDir": "/home/brais/.local/share/containers/storage/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata",
"OCIConfigPath": "/home/brais/.local/share/containers/storage/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/config.json",
"OCIRuntime": "runc",
"LogPath": "/home/brais/.local/share/containers/storage/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/ctr.log",
"LogTag": "",
"ConmonPidFile": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/conmon.pid",
"Name": "magical_carson",
"RestartCount": 0,
"Driver": "overlay",
"MountLabel": "system_u:object_r:container_file_t:s0:c375,c701",
"ProcessLabel": "system_u:system_r:container_t:s0:c375,c701",
"AppArmorProfile": "",
"EffectiveCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"BoundingCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"ExecIDs": [],
"GraphDriver": {
"Name": "overlay",
"Data": {
"LowerDir": "/home/brais/.local/share/containers/storage/overlay/6581dd55e4fe0935a32a688d74513db86632efb162fd41431e7d69318802dfae/diff:/home/brais/.local/share/containers/storage/overlay/1bd27dc7c1c2e7a36c599becda69d0cd905f4f1a122f2b7a95c81a78abc452ec/diff:/home/brais/.local/share/containers/storage/overlay/bacd3af13903e13a43fe87b6944acd1ff21024132aad6e74b4452d984fb1a99a/diff",
"UpperDir": "/home/brais/.local/share/containers/storage/overlay/ccc5801aaacb05d0ed1e64cee2e38f7b4dd8a29890e6fdf780887d296a1c9696/diff",
"WorkDir": "/home/brais/.local/share/containers/storage/overlay/ccc5801aaacb05d0ed1e64cee2e38f7b4dd8a29890e6fdf780887d296a1c9696/work"
}
},
"Mounts": [],
"Dependencies": [],
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": ""
},
"ExitCommand": [
"/usr/bin/podman",
"--root",
"/home/brais/.local/share/containers/storage",
"--runroot",
"/run/user/1000/containers",
"--log-level",
"error",
"--cgroup-manager",
"cgroupfs",
"--tmpdir",
"/run/user/1000/libpod/tmp",
"--runtime",
"runc",
"--storage-driver",
"overlay",
"--storage-opt",
"overlay.mount_program=/usr/bin/fuse-overlayfs",
"--events-backend",
"file",
"container",
"cleanup",
"feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac"
],
"Namespace": "",
"IsInfra": false,
"Config": {
"Hostname": "feb43e01e777",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": true,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TERM=xterm",
"container=podman",
"HOSTNAME=feb43e01e777",
"HOME=/root"
],
"Cmd": [
"bash"
],
"Image": "docker.io/library/ubuntu:latest",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": "",
"OnBuild": null,
"Labels": null,
"Annotations": {
"io.container.manager": "libpod",
"io.kubernetes.cri-o.Created": "2020-12-10T11:35:16.863809294Z",
"io.kubernetes.cri-o.TTY": "true",
"io.podman.annotations.autoremove": "FALSE",
"io.podman.annotations.init": "FALSE",
"io.podman.annotations.privileged": "FALSE",
"io.podman.annotations.publish-all": "FALSE",
"org.opencontainers.image.stopSignal": "15"
},
"StopSignal": 15,
"CreateCommand": [
"podman",
"run",
"-it",
"ubuntu",
"bash"
]
},
"HostConfig": {
"Binds": [],
"CgroupMode": "host",
"ContainerIDFile": "",
"LogConfig": {
"Type": "k8s-file",
"Config": null
},
"NetworkMode": "slirp4netns",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": [],
"CapDrop": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": [],
"GroupAdd": [],
"IpcMode": "private",
"Cgroup": "",
"Cgroups": "default",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "private",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [],
"Tmpfs": {},
"UTSMode": "private",
"UsernsMode": "",
"ShmSize": 65536000,
"Runtime": "oci",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": 0,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
}
}
]
podman info
$ podman info
host:
arch: amd64
buildahVersion: 1.15.1
cgroupVersion: v1
conmon:
package: conmon-2.0.20-2.module_el8.3.0+475+c50ce30b.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.20, commit: 1019ecdeda3936be22162bb1cca308192145de53'
cpus: 2
distribution:
distribution: '"centos"'
version: "8"
eventLogger: file
hostname: vm-test1
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 4.18.0-193.28.1.el8_2.x86_64
linkmode: dynamic
memFree: 247398400
memTotal: 4129382400
ociRuntime:
name: runc
package: runc-1.0.0-68.rc92.module_el8.3.0+475+c50ce30b.x86_64
path: /usr/bin/runc
version: 'runc version spec: 1.0.2-dev'
os: linux
remoteSocket:
path: /run/user/1000/podman/podman.sock
rootless: true
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.1.4-2.module_el8.3.0+475+c50ce30b.x86_64
version: |-
slirp4netns version 1.1.4
commit: b66ffa8e262507e37fca689822d23430f3357fe8
libslirp: 4.3.1
SLIRP_CONFIG_VERSION_MAX: 3
swapFree: 0
swapTotal: 0
uptime: 17h 48m 18.07s (Approximately 0.71 days)
registries:
search:
- registry.access.redhat.com
- registry.redhat.io
- docker.io
store:
configFile: /home/brais/.config/containers/storage.conf
containerStore:
number: 1
paused: 0
running: 0
stopped: 1
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: fuse-overlayfs-1.1.2-3.module_el8.3.0+507+aa0970ae.x86_64
Version: |-
fuse-overlayfs: version 1.1.0
FUSE library version 3.2.1
using FUSE kernel interface version 7.26
graphRoot: /home/brais/.local/share/containers/storage
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 8
runRoot: /run/user/1000/containers
volumePath: /home/brais/.local/share/containers/storage/volumes
version:
APIVersion: 1
Built: 1600970293
BuiltTime: Thu Sep 24 17:58:13 2020
GitCommit: ""
GoVersion: go1.14.7
OsArch: linux/amd64
Version: 2.0.5
I have a pod running and want to port-forward so I can access the pod from the internal network.
I don't know what port it is listening on though; there is no service yet.
I describe the pod:
$ kubectl describe pod queue-l7wck
Name: queue-l7wck
Namespace: default
Priority: 0
Node: minikube/192.168.64.3
Start Time: Wed, 18 Dec 2019 05:13:56 +0200
Labels: app=work-queue
chapter=jobs
component=queue
Annotations: <none>
Status: Running
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/queue
Containers:
queue:
Container ID: docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63
Image: gcr.io/kuar-demo/kuard-amd64:blue
Image ID: docker-pullable://gcr.io/kuar-demo/kuard-amd64@sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 18 Dec 2019 05:14:02 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mbn5b (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-mbn5b:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mbn5b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/queue-l7wck to minikube
Normal Pulling 31h kubelet, minikube Pulling image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Pulled 31h kubelet, minikube Successfully pulled image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Created 31h kubelet, minikube Created container queue
Normal Started 31h kubelet, minikube Started container queue
Even the JSON has nothing:
$ kubectl get pods queue-l7wck -o json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2019-12-18T03:13:56Z",
"generateName": "queue-",
"labels": {
"app": "work-queue",
"chapter": "jobs",
"component": "queue"
},
"name": "queue-l7wck",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "ReplicaSet",
"name": "queue",
"uid": "a9ec07f7-07a3-4462-9ac4-a72226f54556"
}
],
"resourceVersion": "375402",
"selfLink": "/api/v1/namespaces/default/pods/queue-l7wck",
"uid": "af43027d-8377-4227-b366-bcd4940b8709"
},
"spec": {
"containers": [
{
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imagePullPolicy": "Always",
"name": "queue",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "default-token-mbn5b",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"enableServiceLinks": true,
"nodeName": "minikube",
"priority": 0,
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"effect": "NoExecute",
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"tolerationSeconds": 300
},
{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"tolerationSeconds": 300
}
],
"volumes": [
{
"name": "default-token-mbn5b",
"secret": {
"defaultMode": 420,
"secretName": "default-token-mbn5b"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63",
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imageID": "docker-pullable://gcr.io/kuar-demo/kuard-amd64#sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229",
"lastState": {},
"name": "queue",
"ready": true,
"restartCount": 0,
"started": true,
"state": {
"running": {
"startedAt": "2019-12-18T03:14:02Z"
}
}
}
],
"hostIP": "192.168.64.3",
"phase": "Running",
"podIP": "172.17.0.2",
"podIPs": [
{
"ip": "172.17.0.2"
}
],
"qosClass": "BestEffort",
"startTime": "2019-12-18T03:13:56Z"
}
}
How do you check what port a pod is listening on with kubectl?
Update
If I exec into the pod and run netstat -tulpn as suggested in the comments, I get:
$ kubectl exec -it queue-pfmq2 -- sh
~ $ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 1/kuard
But this method is not using kubectl.
Your container image has a port opened during the build (it looks like port 8080 in your case) using the EXPOSE instruction in the Dockerfile. Since the exposed port is baked into the image, k8s does not keep track of it, because k8s does not need to take any steps to open it.
Since k8s is not responsible for opening the port, you won't be able to find the listening port using kubectl or by checking the pod YAML.
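If you have the image pulled locally, you can also read the exposed port straight off the image metadata; a sketch assuming a local Docker engine and the image reference from the pod spec above:
docker image inspect --format '{{json .Config.ExposedPorts}}' gcr.io/kuar-demo/kuard-amd64:blue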
Try a combination of kubectl and the Linux command to get the port the container is listening on:
kubectl exec <pod name here> -- netstat -tulpn
You can also pipe the result through grep to narrow the findings if required, e.g.
kubectl exec <pod name here> -- netstat -tulpn | grep "search string"
Note: this will only work if your container's base image includes the netstat command, and as per your Update section it seems it does.
The solution above is simply a combined use of the commands you already ran: first exec into the container, then list the listening ports inside it.
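For example, with the pod from your question:
kubectl exec queue-l7wck -- netstat -tulpn | grep LISTEN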
One answer suggested running netstat inside the container.
This only works if netstat is part of the container's image.
As an alternative, you can run netstat on the host, executing it in the container's network namespace.
Get the container's process ID on the host (this is the application running inside the container). Then change to the container's network namespace (run as root on the host):
host# PS1='container# ' nsenter -t <PID> -n
Setting the PS1 environment variable just shows a different prompt while you are in the container's network namespace.
Get the listening ports in the container:
container# netstat -na
....
container# exit
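If the node's container runtime is Docker, one way to find that PID is via docker inspect (a sketch; the container ID prefix is taken from the pod's containerStatuses in the question):
host# PID=$(docker inspect --format '{{.State.Pid}}' 13780475170fa)
host# nsenter -t "$PID" -n netstat -na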
If whoever created the image added the right OpenShift label, then you can use the following command (unfortunately your image does not have the label):
skopeo inspect docker://image-url:tag | grep expose-service
e.g.
skopeo inspect docker://quay.io/redhattraining/loadtest:v1.0 | grep expose-service
output:
"io.openshift.expose-services": "8080:http"
So 8080 is the port exposed by the image
Hope this helps
Normally a container will be able to run curl, so you can use curl to check whether a port is open:
for port in 8080 50000 443 8443; do curl -I --connect-timeout 1 127.0.0.1:$port; done
This can be run with sh.