How can I grab exposed port from inspecting docker container? - sed

Assuming that I start a docker container with the following command
docker run -d --name my-container -p 1234 my-image
and running docker ps shows the port binding for that image is...
80/tcp, 443/tcp, 0.0.0.0:32768->1234/tcp
Is there a way that I can use docker inspect to grab the port that is assigned to be mapped to 1234 (in this case, 32768)?
Similar to parsing and grabbing the IP address using the following command...
IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" my-container)
I want to be able to do something similar to the following
ASSIGNED_PORT=$(docker inspect -f "{{...}}" my-container)
I am not sure if there is a way to do this through Docker, but I would imagine there is some command-line magic (grep, sed, etc.) that would allow me to do something like this.
When I run docker inspect my-container and look at the NetworkSettings...I see the following
"NetworkSettings": {
...
...
...
"Ports": {
"1234/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "32768"
}
],
"443/tcp": null,
"80/tcp": null
},
...
...
},
In this case, I would want it to find HostPort without me telling it anything about port 1234 (it should ignore 443 and 80 below it) and return 32768.

Execute the command: docker inspect --format="{{json .Config.ExposedPorts }}" src_python_1
Result: {"8000/tcp":{}}
Proof (using docker ps):
e5e917b59e15 src_python:latest "start-server" 22 hours ago Up 22 hours 0.0.0.0:8000->8000/tcp src_python_1

It is not as easy as with the IP address, since one container can have multiple ports, some exposed and some not, but this will get it:
sudo docker inspect name | grep HostPort | sort | uniq | grep -o '[0-9]*'
If more than one port is exposed, each one will be displayed on its own line.

There are two good options, depending on your taste:
docker port my-container 1234 | grep -o '[0-9]*$'
or
docker inspect --format='{{(index (index .NetworkSettings.Ports "1234/tcp") 0).HostPort}}' my-container
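Either one can be captured into a variable the way the question asks, e.g. (a sketch using the second form):
ASSIGNED_PORT=$(docker inspect --format='{{(index (index .NetworkSettings.Ports "1234/tcp") 0).HostPort}}' my-container)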

Using jq:
docker inspect --format="{{json .}}" my-container | jq '.NetworkSettings.Ports["1234/tcp"][0].HostPort'
Replace 1234 with the container port you specified in docker run.
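If you want the bare number without JSON quotes, jq's -r flag prints raw output:
docker inspect --format="{{json .}}" my-container | jq -r '.NetworkSettings.Ports["1234/tcp"][0].HostPort'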

I have used both docker inspect <container> on its own and docker inspect <container> | jq to extract ports.
In the example below I am looking at the dsb-server container, and the port I am looking for is 8080/tcp.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
98b4bec33ba9 xxxxx:dsbv3 "./docker-entrypoint." 6 days ago Up 6 days 8009/tcp, 0.0.0.0:4848->4848/tcp, 8181/tcp, 0.0.0.0:9013->8080/tcp dsb-server
To extract the host port:
docker inspect dsb-server| jq -r '.[].NetworkSettings.Ports."8080/tcp"[].HostPort'
9013
docker inspect --format='{{(index (index .NetworkSettings.Ports "8080/tcp") 0).HostPort}}' dsb-server
9013
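If you don't want to name the container port at all (as the question asks), one option is a format string that ranges over .NetworkSettings.Ports and prints the HostPort of every published entry; a sketch, not taken from any of the answers here:
docker inspect -f '{{range $p, $conf := .NetworkSettings.Ports}}{{if $conf}}{{println (index $conf 0).HostPort}}{{end}}{{end}}' my-container
With a single published port, as in the question, this yields exactly one value, which can be captured with ASSIGNED_PORT=$(...).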

The above answers were close and put me on the right track, but I kept getting the following error:
Template parsing error: template: :1: unexpected "/" in operand
I found the answer here: https://github.com/moby/moby/issues/27592
This is what finally worked for me:
docker inspect --format="{{(index (index .NetworkSettings.Ports \"80/tcp\") 0).HostPort}}" $INSTANCE_ID
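Note that the backslash-escaped quotes are only needed because the whole format string is wrapped in double quotes; wrapping it in single quotes, as in the earlier answers, should work as well:
docker inspect --format='{{(index (index .NetworkSettings.Ports "80/tcp") 0).HostPort}}' $INSTANCE_ID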

Is it possible to install curl into busybox in kubernetes pod

I am using busybox to detect my network problem in kubernetes v1.18 pods. I created the busybox like this:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
and logged in to check the cluster's network situation:
kubectl exec -it busybox -- sh
What surprises me is that busybox does not contain curl. Why does the busybox image not include the curl command? I searched the internet and found that the docs do not explain how to add curl to busybox. I tried to install curl, but found no way to do it. Is there any way to add the curl package to busybox?
The short answer is: you cannot.
Why?
Because busybox does not have a package manager like yum, apk, or apt-get.
Actually, you have two solutions:
1. Either use a modified busybox
You can use another busybox image, like progrium/busybox, which provides opkg-install as a package manager.
image: progrium/busybox
Then:
kubectl exec -it busybox -- opkg-install curl
2. Or, if your concern is to use a minimal image, you can use alpine
image: alpine:3.12
then:
kubectl exec -it alpine -- apk --update add curl
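For completeness, a sketch of the Pod from the question switched to alpine (only the name and image change; alpine ships sleep, so the command stays the same):
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - name: alpine
    image: alpine:3.12
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always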
No. Consider instead using alpine, a base image that includes BusyBox plus a package manager, or building (or finding) a custom image that has the tools you need pre-installed.
BusyBox is built as a single binary that contains implementations of many common Linux tools. The BusyBox documentation includes a listing of the included commands. You cannot "install" more commands into it without writing C code.
BusyBox does contain an implementation of wget, which might work for your purposes (wget -O- http://other-service).
BusyBox ships only a stripped-down wget; tools like curl, and the full wget in your OS, support significantly more options than the version that comes with BusyBox.
To clarify what I mean, run the following in your OS:
$ wget --help | wc -l
207
while running wget's help inside a BusyBox container shows only a minimal subset:
$ docker run --rm busybox wget --help 2>&1 | wc -l
20
In K8s, you could run the following:
$ kubectl run -i --tty --rm busybox --image=busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # wget
BusyBox v1.33.1 (2021-06-07 17:33:50 UTC) multi-call binary.
Usage: wget [-cqS] [--spider] [-O FILE] [-o LOGFILE] [--header 'HEADER: VALUE'] [-Y on/off]
[--no-check-certificate] [-P DIR] [-U AGENT] [-T SEC] URL...
Retrieve files via HTTP or FTP
--spider Only check URL existence: $? is 0 if exists
--no-check-certificate Don't validate the server's certificate
-c Continue retrieval of aborted transfer
-q Quiet
-P DIR Save to DIR (default .)
-S Show server response
-T SEC Network read timeout is SEC seconds
-O FILE Save to FILE ('-' for stdout)
-o LOGFILE Log messages to FILE
-U STR Use STR for User-Agent header
-Y on/off
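In practice, that subset is often enough for a quick connectivity check from inside the cluster; a sketch (the service name and port here are hypothetical):
kubectl exec busybox -- wget -qO- http://my-service:8080/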
If curl is required for your use case, I would suggest using Alpine, which is busybox + a minimal package manager and libc implementation, so that you can trivially do apk add --no-cache curl and get real curl (or even apk add --no-cache wget to get the "real" wget instead of BusyBox's wget).
As others said, the answer is no and you need to use another image.
There are:
Official curl alpine based image: https://hub.docker.com/r/curlimages/curl with curlimages/curl
Busyboxplus Images: https://hub.docker.com/r/radial/busyboxplus with radial/busyboxplus:curl
Nixery with nixery.dev/curl
Image sizes:
$ docker images -f "reference=*/*curl"
REPOSITORY TAG IMAGE ID CREATED SIZE
curlimages/curl latest ab35d809acc4 9 days ago 11MB
radial/busyboxplus curl 71fa7369f437 8 years ago 4.23MB
nixery.dev/curl latest aa552b5bd167 N/A 56MB
As @abdennour suggests, I'm no longer sticking with busybox. Alpine is a very lightweight Linux container image, as others suggest here, in which you can install practically any UNIX tool you need for your troubleshooting task. In fact, I use this function in my .bashrc dotfiles to spin up a handy, ephemeral, ready-to-rock Alpine pod:
## This function takes an optional namespace argument to run the pod in; if it's not provided, it falls back to the `default` namespace.
function kalpinepod () { kubectl run -it --rm --restart=Never --image=alpine handytools -n "${1:-default}" -- /bin/ash ; }
❯ kalpinepod kube-system
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
search kube-system.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.245.0.10
options ndots:5
/ # apk --update add curl openssl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
(1/6) Installing ca-certificates (20191127-r5)
(2/6) Installing brotli-libs (1.0.9-r3)
(3/6) Installing nghttp2-libs (1.42.0-r1)
(4/6) Installing libcurl (7.74.0-r1)
(5/6) Installing curl (7.74.0-r1)
(6/6) Installing openssl (1.1.1j-r0)
Executing busybox-1.32.1-r3.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 9 MiB in 20 packages
Or just copy a statically built curl into Busybox:
https://github.com/moparisthebest/static-curl/releases
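A sketch of that approach for the pod from the question, assuming you have already downloaded a static curl binary for your architecture from that releases page and saved it locally as ./curl:
chmod +x ./curl
# kubectl cp needs tar inside the target container, which busybox provides
kubectl cp ./curl default/busybox:/usr/bin/curl
kubectl exec busybox -- curl --version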
Radial maintains an overlay of busybox images that adds cURL: docker pull radial/busyboxplus:curl
They also have a second image with cURL + Git: docker pull radial/busyboxplus:git
Install the curl binary from the source website.
Replace binary-url with the URL of the binary file found on curl.se:
export BINARY_URL="<binary-url>"
wget "$BINARY_URL" -O curl && install curl /bin; rm -f curl
This worked with the busybox:latest image.

run sed in dockerfile to replace text with build arg value

I'm trying to use sed to replace the text "localhost" in my nginx.conf with the IP address of my Docker host (FYI: using Docker Machine locally, which runs Docker on 192.168.99.100).
My nginx Dockerfile looks like this:
FROM nginx:alpine
ARG DOCKER_HOST
COPY ./nginx.conf /etc/nginx/nginx.conf
RUN sed -i 's/localhost/${DOCKER_HOST}/g' /etc/nginx/nginx.conf
EXPOSE 80
My nginx.conf file looks like this (note: majority removed for simplicity)
http {
    sendfile on;
    upstream epd {
        server localhost:8080;
    }
    # ...
}
I'm expecting "localhost" to get replaced with "192.168.99.100", but it actually gets replaced with "${DOCKER_HOST}". This causes an error
host not found in upstream "${DOCKER_HOST}:8080"
I've tried a few other things, but I can't seem to get this working. I can confirm the DOCKER_HOST build arg is getting through to the Dockerfile via my docker compose script, as I can echo this out.
Many thanks for any responses...
Replace the single quotes ' around s/localhost/${DOCKER_HOST}/g with double quotes ". Variables will not be interpolated within single quotes.
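That is, with the Dockerfile above, the RUN line becomes:
RUN sed -i "s/localhost/${DOCKER_HOST}/g" /etc/nginx/nginx.conf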

Docker + Crontab: find container ID from service name for use in crontab

The context is that I'm trying to set up a cron job to back up a database within a postgres docker container. The crontab line I'm using is:
45 1 * * * docker exec e2fa9f0adbe0 pg_dump -Z 9 -U pguser -d pgdb | curl -u ftpuser:ftppwd ftp.mydomain.com/my-db-backups/db-backup_`date '+\%Y-\%m-\%d_\%H-\%M-\%S'`.sql.gz --ftp-create-dirs -T -
It works fine, but I'm trying to refine it because at present the container ID e2fa9f0adbe0 is hard-coded into the crontab, so it will break whenever the service under which the container is placed is restarted and the container re-appears under a new ID. The service name, on the other hand, will always be the same.
So is there a way of altering the above cron command to extract the container ID from the service name (let's say my-postgres-service)?
Well I tried to edit Olmpc's answer to make it more complete (and so that I can mark it as Accepted), but my edit was rejected (thanks). So I'll post my own answer:
To answer the actual question, the cron command can be altered as follows so that it is based on the service name (which is fixed) rather than the container ID (which is subject to change):
45 1 * * * docker exec `docker ps -qf name=my-postgres-service` pg_dump -Z 9 -U pguser -d pgdb | curl -u ftpuser:ftppwd ftp.mydomain.com/my-db-backups/db-backup_`date '+\%Y-\%m-\%d_\%H-\%M-\%S'`.sql.gz --ftp-create-dirs -T -
This is working nicely. Note: it relies on there being only one container associated with the service, so that only a single container ID is returned by docker ps -qf name=my-postgres-service.
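If you want to guard against the name filter matching more than one container (it is a regex match, not an exact one), one option is to anchor the name and take only the first result; a sketch, assuming the container is actually named my-postgres-service:
docker ps -qf "name=^my-postgres-service$" | head -n 1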
You can use the following command to get the id from the name:
docker ps -aqf "name=my-postgres-service"
And the following to have more details about the options used above:
docker ps -h
So, the full crontab line would be:
45 1 * * * docker exec `docker ps -aqf name=my-postgres-service` pg_dump -Z 9 -U pguser -d pgdb | curl -u ftpuser:ftppwd ftp.mydomain.com/my-db-backups/db-backup_`date '+\%Y-\%m-\%d_\%H-\%M-\%S'`.sql.gz --ftp-create-dirs -T -

docker varnish cmd error - no such file or directory

I'm trying to get a Varnish container running as part of a multicontainer Docker environment.
I'm using https://github.com/newsdev/docker-varnish as a base.
My Dockerfile looks like:
FROM newsdev/varnish:4.1.0
COPY start-varnishd.sh /usr/local/bin/start-varnishd
ENV VARNISH_VCL_PATH /etc/varnish/default.vcl
ENV VARNISH_PORT 80
ENV VARNISH_MEMORY 64m
EXPOSE 80
CMD [ "exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384" ]
When I run this as part of a docker-compose setup, I get:
ERROR: for eventsapi_varnish_1 Cannot start service varnish: oci
runtime error: container_linux.go:262: starting container process
caused "exec: \"exec /usr/local/sbin/varnishd -j unix,user=varnishd -F
-f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384\": stat exec
/usr/local/sbin/varnishd -j unix,user=varnishd -F -f
/etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p
http_req_hdr_len=16384 -p http_resp_hdr_len=16384: no such file or
directory"
I get the same if I try
CMD ["start-varnishd"]
(as it is in the base newsdev/docker-varnish)
or
CMD [/usr/local/bin/start-varnishd]
But if I run a bash shell on the container directly:
docker run -t -i eventsapi_varnish /bin/bash
and then run the varnishd command from there, varnish starts up fine (and starts complaining that it can't find the web container, obviously).
What am I doing wrong? What file can't it find? Again, looking around the running container directly, it seems that Varnish is where it thinks it should be, and the VCL file is where it thinks it should be... what's stopping it from running within docker-compose?
Thanks!
I didn't get to the bottom of why I was getting this error, but "fixed" it by using the (more recent?) fork: https://hub.docker.com/r/tripviss/varnish/. My Dockerfile is now just:
FROM tripviss/varnish:5.1
COPY default.vcl /usr/local/etc/varnish/
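For reference, the likely cause of the original error: the exec (JSON array) form of CMD does not go through a shell, so Docker looked for a single executable literally named "exec /usr/local/sbin/varnishd ...", which does not exist. The shell form (or an exec-form array with each argument as its own element) avoids that; a sketch against the original base image:
CMD /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384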

Creating multiple PostgreSQL containers in docker in fedora

I want to create 2 PostgreSQL containers so that one can be used as DEV and the other as DEV_STAGE.
I was able to successfully create one container, and it was assigned port 5432. But when I try to create the second container, it gets created (sometimes showing the status EXITED) but does not start, because of the port number issue.
The below are the commands which I ran.
sudo docker run -v "pwd/data:/var/lib/pgsql/data:Z" -e POSTGRESQL_USER=user1 -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=test_db -d -p 5432:5432 fedora/postgresql
sudo docker run -v "pwd/data_stage:/var/lib/pgsql/data_stage:Z" -e POSTGRESQL_USER=user1 -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=test_db -d -p 5432:5433 fedora/postgresql
I think the port mapping which I'm using is incorrect. But not able to get the correct one.
You have an error in the volume definition of the second container. Don't change the path after the colon; it must stay /var/lib/pgsql/data, since that is the data directory inside the container.
You also flipped the port mapping. The correct command looks like this:
sudo docker run -v "`pwd`/data_stage:/var/lib/pgsql/data:Z" -e POSTGRESQL_USER=user1 -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=test_db -d -p 5433:5432 fedora/postgresql
If anything goes wrong, inspect the container logs with docker logs CONTAINER_ID.
Thanks for the answer. I corrected the path. I don't think flipping the port numbers will work either, because I already have one container mapped to 5432, so I can't map that port again. The command below worked for me: first, I changed the Postgres default port to 5433 by setting the PGPORT=5433 environment variable.
sudo docker run -v "`pwd`/data_stg:/var/lib/pgsql/data:Z" -e PGPORT=5433 -e POSTGRESQL_USER=user1 -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=test_db -d -p 5433:5433 fedora/postgresql
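To verify that both containers are reachable from the host, you can connect to each published port (user and database as defined in the commands above); a sketch:
psql -h localhost -p 5432 -U user1 -d test_db   # DEV
psql -h localhost -p 5433 -U user1 -d test_db   # DEV_STAGE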