docker-compose volumes not working this way

In my docker-compose.yml, I am using the registry:2 image (the v2 registry). As I need to set up my own configuration (to use S3 storage), I tried to mount my config directory in place of the default one:
/usr/share/docker-registry/config/config.yml                     # my own registry config on the local host
/go/src/github.com/docker/distribution/cmd/registry/config.yml   # default in the container
In my docker-compose.yml, I wrote:
backend:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  links:
    - cache
  volumes:
    - /usr/share/docker-registry/config:/go/src/github.com/docker/distribution/cmd/registry
  ...
but when I compose it, my config settings are never taken into account... it's always using the default settings from the container's cmd/registry/config.yml.
What could be wrong?
Thanks for any enlightenment...
If I inspect the running registry:2 container, the config is weird (the S3 info is there, but no volumes, and the CMD is executing the standard config.yml file...):
"Config": {
    "Hostname": "5337012111a5",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "PortSpecs": null,
    "ExposedPorts": {
        "5000/tcp": {}
    },
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
        "SETTINGS_FLAVOR=local",
        "REGISTRY_STORAGE_S3_SECURE=True",
        "REGISTRY_STORAGE_S3_ENCRYPT=True",
        "REGISTRY_STORAGE_S3_ROOTDIRECTORY=/s3/object/name/prefix",
        "CACHE_REDIS_PORT=6379",
        "REGISTRY_STORAGE_S3_V4AUTH=True",
        "REGISTRY_STORAGE_S3_CHUNKSIZE=5242880",
        "REGISTRY_STORAGE_S3_SECRETKEY=yyyyyyyyyyyyyyyyyyyyyyyy",
        "CACHE_LRU_REDIS_PORT=6379",
        "SEARCH_BACKEND=sqlalchemy",
        "CACHE_REDIS_HOST=cache",
        "REGISTRY_STORAGE_S3_ACCESSKEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "CACHE_LRU_REDIS_HOST=cache",
        "REGISTRY_STORAGE_S3_REGION=eu-central-1",
        "REGISTRY_STORAGE_S3_BUCKET=harbor.dufour16.net",
        "PATH=/go/bin:/usr/src/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "GOLANG_VERSION=1.4.2",
        "GOPATH=/go/src/github.com/docker/distribution/Godeps/_workspace:/go",
        "DISTRIBUTION_DIR=/go/src/github.com/docker/distribution"
    ],
    "Cmd": [
        "cmd/registry/config.yml"
    ],
    "Image": "registry:2",
    "Volumes": null,
    "VolumeDriver": "",
    "WorkingDir": "/go/src/github.com/docker/distribution",
    "Entrypoint": [
        "registry"
    ],

It turns out I need to override the settings with environment variables, not by mounting external volumes.
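As a sketch of that approach (assuming the registry:2 image's convention of mapping REGISTRY_<SECTION>_<KEY> environment variables onto config.yml entries; the values here are placeholders), the compose service could carry the S3 settings directly:

```yaml
backend:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  links:
    - cache
  environment:
    # each REGISTRY_* variable overrides the matching config.yml key
    REGISTRY_STORAGE_S3_REGION: eu-central-1
    REGISTRY_STORAGE_S3_BUCKET: my-bucket    # placeholder
    REGISTRY_STORAGE_S3_ACCESSKEY: xxxx      # placeholder
    REGISTRY_STORAGE_S3_SECRETKEY: yyyy      # placeholder
    REGISTRY_STORAGE_S3_V4AUTH: "true"
```

No volume mount is needed in this variant.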

Related

Cannot connect postgres to pgadmin using docker-compose

I have the following docker-compose file:
services:
  pgdatabase:
    image: postgres:13
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=ny_taxi
    volumes:
      - "./data:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@admin.com
      - PGADMIN_DEFAULT_PASSWORD=root
    volumes:
      - "./data_pgadmin:/var/lib/pgadmin"
    ports:
      - "8080:80"
I'm trying to connect to postgres using pgadmin but I'm getting the following error:
Unable to connect to server: could not translate host name "pgdatabase" to address: Name does not resolve
Running docker network ls I get:
NAME                    DRIVER    SCOPE
bridge                  bridge    local
docker-sql-pg_default   bridge    local
host                    host      local
none                    null      local
Then running docker network inspect docker-sql-pg_default I get:
[
    {
        "Name": "docker-sql-pg_default",
        "Id": "bfee2f08620b5ffc1f8e10d8bed65c4d03a98a470deb8b987c4e52a9de27c3db",
        "Created": "2023-01-24T17:57:27.831702189Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.24.0.0/16",
                    "Gateway": "172.24.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "8f53be84a95c9c0591df6cc6edb72d4ca070243c3c067ab2fb14c2094b23bcee": {
                "Name": "docker-sql-pg-pgdatabase-1",
                "EndpointID": "7f3ddb29b000bc4cfda9c54a4f13e0aa30f1e3f8e5cc1a8ba91cee840c16cd60",
                "MacAddress": "02:42:ac:18:00:02",
                "IPv4Address": "172.24.0.2/16",
                "IPv6Address": ""
            },
            "bf2eb29b73fe9e49f4bef668a1f70ac2c7e9196b13350f42c28337a47fcd71f4": {
                "Name": "docker-sql-pg-pgadmin-1",
                "EndpointID": "b3a9504d75e11aa0f08f6a2b5c9c2f660438e23f0d1dd5d7cf4023a5316961d2",
                "MacAddress": "02:42:ac:18:00:03",
                "IPv4Address": "172.24.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "docker-sql-pg",
            "com.docker.compose.version": "2.13.0"
        }
    }
]
I tried to connect to the gateway IP 172.24.0.1 and to the postgres container's IP 172.24.0.2, but I got a timeout error. Why isn't my network working?
Basically I solved my problem using the following steps:
1. First I accessed the pgadmin container and used ping to verify that pgadmin could reach postgres.
2. Since pgadmin was reaching postgres, I used sudo netstat -tulpn | grep LISTEN to check which ports were in use on my host machine. Surprisingly, I had two instances of pgadmin running on 8080 (one bugged).
3. I used docker-compose down to stop the services and docker system prune to delete all images/containers.
4. I checked the ports in use again, and one pgadmin was still running on 8080.
5. I used pidof to find the PID of the running (bugged) pgadmin.
6. Then I used kill -9 to kill the process.
7. Last, I ran docker-compose up -d, and pgadmin was able to communicate with postgres via the pgadmin interface.

Containers on the same network not communicating with each other

I have a mongodb container which I named e-learning, and I have a docker image which should connect to the mongodb container to update my database, but it's not working. I get this error:
Unknown, Last error: connection() error occurred during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }
Here's my Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.18
WORKDIR /go/src/github.com/worker
COPY go.mod go.sum main.go ./
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM jrottenberg/ffmpeg:4-alpine
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
ENV LD_LIBRARY_PATH=/usr/local/lib
COPY --from=jrottenberg/ffmpeg / /
COPY app.env /root
COPY --from=0 /go/src/github.com/worker/app .
CMD ["./app"]
My docker-compose file:
version: "3.9"
services:
  worker:
    image: worker
    environment:
      - MONGO_URI="mongodb://localhost:27017/"
      - MONGO_DATABASE=e-learning
      - RABBITMQ_URI=amqp://user:password@rabbitmq:5672/
      - RABBITMQ_QUEUE=upload
    networks:
      - app_network
    external_links:
      - e-learning
      - rabbitmq
    volumes:
      - worker:/go/src/github.com/worker:rw
networks:
  app_network:
    external: true
volumes:
  worker:
My docker network inspect output:
[
    {
        "Name": "app_network",
        "Id": "f688edf02a194fd3b8a2a66076f834a23fa26cead20e163cde71ef32fc1ab598",
        "Created": "2022-06-27T12:18:00.283531947+03:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2907482267e1f6e42544e5e8d852c0aac109ec523c6461e003572963e299e9b0": {
                "Name": "rabbitmq",
                "EndpointID": "4b46e091e4d5a79782185dce12cb2b3d79131b92d2179ea294a639fe82a1e79a",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            },
            "8afd004a981715b8088af53658812215357e156ede03905fe8fdbe4170e8b13f": {
                "Name": "e-learning",
                "EndpointID": "1c12d592a0ef6866d92e9989f2e5bc3d143602fc1e7ad3d980efffcb87b7e076",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            },
            "ad026f7e10c9c1c41071929239363031ff72ad1b9c6765ef5c977da76f24ea31": {
                "Name": "video-transformation-worker-1",
                "EndpointID": "ce3547014a6856725b6e815181a2c3383d307ae7cf7132e125c58423f335b73f",
                "MacAddress": "02:42:ac:14:00:04",
                "IPv4Address": "172.20.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Change MONGO_URI="mongodb://localhost:27017/" to MONGO_URI="mongodb://e-learning:27017/" (working on the assumption that e-learning is the mongo container).
Within a container attached to a bridge network (the default) localhost (127.0.0.1) is the container itself. So your app container is trying to access the database at port 27017 on itself (not on the host or on the db container). The easiest solution is to use the automatic DNS resolution between containers that docker provides.
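Concretely, a minimal sketch of the corrected service (assuming e-learning is the mongo container's name on the shared app_network; note the surrounding quotes are dropped, since in the list form of environment they would become part of the value):

```yaml
worker:
  image: worker
  environment:
    # Docker's embedded DNS resolves the container name on the shared network
    - MONGO_URI=mongodb://e-learning:27017/
  networks:
    - app_network
```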
I added extra_hosts and changed my Mongo URI to host.docker.internal, and it solved my problem:
version: "3.9"
services:
  worker:
    image: worker
    environment:
      - MONGO_URI="mongodb://host.docker.internal:27017/"
      - MONGO_DATABASE=e-learning
      - RABBITMQ_URI=amqp://user:password@rabbitmq:5672/
      - RABBITMQ_QUEUE=upload
    networks:
      - app_network
    external_links:
      - e-learning
      - rabbitmq
    volumes:
      - worker:/go/src/github.com/worker:rw
    extra_hosts:
      - "host.docker.internal:host-gateway"
networks:
  app_network:
    external: true
volumes:
  worker:

Docker compose rabbitmq create queue on startup

I am trying to set up docker compose with 3 services: 1 rabbitmq service and 2 Java applications (1 producer and 1 consumer). When the rabbitmq service starts, it has no queues by default; the queue is only actually created when the producer sends a message to it. At the same time, the consumer requires the queue it listens to. So here is the problem: when I run my docker compose, rabbitmq starts fine with my producer, but the consumer can't find the queue and fails. Can I somehow set up the docker compose rabbitmq service to create the queue on startup?
version: '3.3'
services:
  rabbitmq:
    image: rabbitmq:management
    container_name: rabbitmq
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: help
      RABBITMQ_DEFAULT_PASS: 51243
    ports:
      - "5672:5672"
      - "15672:15672"
Update:
I found a pretty good way to achieve what I want: I can start rabbitmq, create everything I need (the queue), and export definitions.json. Then I need to put it under /etc/rabbitmq/definitions.json:
version: '3.3'
services:
  rabbitmq:
    image: rabbitmq:management
    container_name: rabbitmq
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: ete
      RABBITMQ_DEFAULT_PASS: 1402
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - ./definitions.json:/etc/rabbitmq/definitions.json
But here is another problem: this definitions file overrides the user that I set via environment variables.
Here is definitions.json:
{
    "rabbit_version": "3.8.9",
    "rabbitmq_version": "3.8.9",
    "product_name": "RabbitMQ",
    "product_version": "3.8.9",
    "users": [
        {
            "name": "eternal",
            "password_hash": "wz1jzbGjNMZ115U7XhEUvF271uImnOfho2jpx2pOvLSY/Ssl",
            "hashing_algorithm": "rabbit_password_hashing_sha256",
            "tags": "administrator"
        }
    ],
    "vhosts": [
        {
            "name": "/"
        }
    ],
    "permissions": [
        {
            "user": "eternal",
            "vhost": "/",
            "configure": ".*",
            "write": ".*",
            "read": ".*"
        }
    ],
    "topic_permissions": [],
    "parameters": [],
    "global_parameters": [
        {
            "name": "cluster_name",
            "value": "rabbit@58cb492cd4ee"
        },
        {
            "name": "internal_cluster_id",
            "value": "rabbitmq-cluster-id-DlZ_FZVpiFx93CVZXneG4A"
        }
    ],
    "policies": [],
    "queues": [
        {
            "name": "qqq",
            "vhost": "/",
            "durable": false,
            "auto_delete": false,
            "arguments": {}
        }
    ],
    "exchanges": [],
    "bindings": []
}
If I remove the users section (and everything else, keeping only the queues section), then after docker-compose up I can't log in to the management page, because my login and password are not accepted.
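One possible way around this (a sketch, assuming RabbitMQ 3.8+): load the definitions explicitly through a mounted rabbitmq.conf instead of relying on the default path, and keep a users entry for your admin user in definitions.json, because RabbitMQ skips creating RABBITMQ_DEFAULT_USER when definitions are imported at boot:

```yaml
rabbitmq:
  image: rabbitmq:management
  container_name: rabbitmq
  restart: always
  ports:
    - "5672:5672"
    - "15672:15672"
  volumes:
    # rabbitmq.conf contains the single line:
    #   load_definitions = /etc/rabbitmq/definitions.json
    - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro
    - ./definitions.json:/etc/rabbitmq/definitions.json:ro
```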

Ghost config not loading correct database file

I'm using a sqlite3 database file with my Ghost blog with docker-compose. However, when I start it, my database and theme are correctly copied over to /content, but they don't load in the browser. Instead it loads the original database and theme from /versions/3.32.1/content.
I can get it working correctly when I cp the db file and theme folder to /versions/3.32.1/content and change the permissions of the db file, but I'd like this to happen automatically. How can I adjust the config file or docker-compose to do this?
Here is my config.development.json:
{
    "url": "http://ghost:2368",
    "server": {
        "port": 2368,
        "host": "0.0.0.0"
    },
    "database": {
        "client": "sqlite3",
        "connection": {
            "filename": "content/data/threadlet.db"
        },
        "debug": false
    },
    "paths": {
        "contentPath": "content/"
    },
    "privacy": {
        "useRpcPing": false,
        "useUpdateCheck": true
    },
    "useMinFiles": false,
    "caching": {
        "theme": {
            "maxAge": 0
        },
        "admin": {
            "maxAge": 0
        }
    }
}
and my ghost setup in docker-compose:
ghost:
  image: ghost:latest
  container_name: ghost
  restart: always
  ports:
    - 2368:2368
  env_file:
    - ".env"
  environment:
    # DATABASE_URL:
    url: "http://ghost:2368"
    NODE_ENV: development
  volumes:
    - ./ghost/config.${NODE_ENV}.json:/var/lib/ghost/config.${NODE_ENV}.json
    - ./ghost/content:/var/lib/ghost/content
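One thing that may be worth trying (an assumption on my part, not from the original post): make the paths in config.development.json absolute, so Ghost resolves them against the bind-mounted /var/lib/ghost/content rather than relative to the installed version directory under /versions:

```json
{
  "database": {
    "client": "sqlite3",
    "connection": {
      "filename": "/var/lib/ghost/content/data/threadlet.db"
    }
  },
  "paths": {
    "contentPath": "/var/lib/ghost/content/"
  }
}
```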

How can I find out why a successful kubernetes MountVolume step doesn't result in any Mounts in a docker container?

I've defined a mount point like so:
- name: dir-graphite
configMap:
name: hub-logstash-grafana
items:
- key: logstash.conf.file
path: config
with a later volume declaration:
volumes:
volumeMounts:
- mountPath: "/opt/blackduck/hub/logs"
name: dir-webapp
- mountPath: "/var/lib/logstash/data"
name: dir-logstash
- mountPath: "/tmp/x"
name: dir-graphite
In kubernetes 1.6.6, I see
Aug 05 02:02:59 ip-10-0-26-84 kubelet[30344]: I0805 02:02:59.912640 30344 operation_generator.go:597]
MountVolume.SetUp succeeded for volume "kubernetes.io/configmap/a47ebff8-7976-11e7-8369-12207729cdd2-dir-graphite" (spec.Name: "dir-graphite") pod "a47ebff8-7976-11e7-8369-12207729cdd2" (UID: "a47ebff8-7976-11e7-8369-12207729cdd2").
That is, I can see the mount setup operation succeeding for my config map. However, when I inspect the actual created container, I see no associated mount:
"Mounts": [
    {
        "Source": "/var/lib/kubelet/pods/a47ebff8-7976-11e7-8369-12207729cdd2/etc-hosts",
        "Destination": "/etc/hosts",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Source": "/var/lib/kubelet/pods/a47ebff8-7976-11e7-8369-12207729cdd2/containers/logstash/86b079de",
        "Destination": "/dev/termination-log",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Source": "/var/lib/kubelet/pods/a47ebff8-7976-11e7-8369-12207729cdd2/volumes/kubernetes.io~secret/default-token-2t0cl",
        "Destination": "/var/run/secrets/kubernetes.io/serviceaccount",
        "Mode": "ro",
        "RW": false,
        "Propagation": "rprivate"
    }
The following snippet is indeed correct:
containers:
- image: blackducksoftware/hub-logstash:4.0.0
  name: logstash
  volumeMounts:
  - name: dir-graphite
    mountPath: /tmp/x
I believe there is a situation where, if your indentation is slightly off, validation in 1.6 isn't very strict, so you get a no-op: the volume is loaded but the mount paths don't necessarily get read into anything.
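For comparison, this is how the two sections are conventionally nested in a pod spec (a sketch assembled from the fragments above: volumeMounts belongs under the container, volumes at the pod spec level):

```yaml
spec:
  containers:
  - image: blackducksoftware/hub-logstash:4.0.0
    name: logstash
    volumeMounts:
    - name: dir-graphite
      mountPath: /tmp/x
  volumes:
  - name: dir-graphite
    configMap:
      name: hub-logstash-grafana
      items:
      - key: logstash.conf.file
        path: config
```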
For the record, if the volume is truly attached, you should see it in your mount points when running docker inspect:
"Mounts": [
    {
        "Source": "/var/lib/kubelet/pods/bf655bb7-7985-11e7-8369-12207729cdd2/volumes/kubernetes.io~configmap/dir-graphite",
        "Destination": "/tmp/x",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
Moral of the story: in the case of ConfigMaps (and likely any other volume that is lazily mounted by a kubelet), a successful MountVolume.SetUp log at the kubelet level doesn't guarantee that your files were mounted into a container. It only means that the kubelet was able to create a volume corresponding to one of your defined ConfigMap volumes.