Exception when running in docker: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "hamzabelmellouki"

I am pretty new to Docker. I have a Spring Boot application that connects to a Postgres database (running in a container). I want to dockerize my Spring Boot application and be able to connect it to the Postgres database container.
What works:
Running the application without Docker works just fine: java -jar product-manager-0.0.1-SNAPSHOT.jar
What doesn't work:
Dockerizing the application and running it with Docker gives me an exception.
Note that I am not using docker-compose here.
My app Dockerfile:
FROM openjdk:11
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Steps I've taken to run the app:
Run and start up the postgres database in a container using these commands:
docker run --rm --name lil-postgres -e POSTGRES_PASSWORD=password -d -v $HOME/srv/postgres:/var/lib/postgresql/data -p 5432:5432 postgres
postgres -D /usr/local/var/postgres
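As a sanity check (not one of the original steps), you can confirm which roles actually exist in the mounted data directory; psql ships inside the postgres image:
# Hedged sketch: list the roles Postgres knows about. "lil-postgres" is the
# container name used above; if the default "postgres" superuser does not
# exist, the data directory was initialized with a different POSTGRES_USER.
docker exec -it lil-postgres psql -U postgres -c '\du'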
I've built the app's image:
docker build -t ensa/product-manager .
This is the output from running the last command:
Sending build context to Docker daemon 38.01MB
Step 1/4 : FROM openjdk:11
---> 612d4d483eee
Step 2/4 : ARG JAR_FILE=target/*.jar
---> Running in 1b8674e959ca
Removing intermediate container 1b8674e959ca
---> d2c2b90680de
Step 3/4 : COPY ${JAR_FILE} app.jar
---> 01295beecd1b
Step 4/4 : ENTRYPOINT ["java","-jar","/app.jar"]
---> Running in 31230e7ff323
Removing intermediate container 31230e7ff323
---> c4487683e7b1
Successfully built c4487683e7b1
Successfully tagged ensa/product-manager:latest
Finally, I've created a running container using:
docker run --net="host" -it ensa/product-manager
The third step resulted in an exception:
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "hamzabelmellouki"
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:520) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:141) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.Driver.makeConnection(Driver.java:458) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.Driver.connect(Driver.java:260) ~[postgresql-42.2.8.jar!/:42.2.8]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:353) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:473) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:562) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-3.4.1.jar!/:na]
at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:158) ~[spring-jdbc-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:116) ~[spring-jdbc-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:79) ~[spring-jdbc-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:324) ~[spring-jdbc-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.boot.jdbc.EmbeddedDatabaseConnection.isEmbedded(EmbeddedDatabaseConnection.java:120) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateDefaultDdlAutoProvider.getDefaultDdlAuto(HibernateDefaultDdlAutoProvider.java:42) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration.lambda$getVendorProperties$1(HibernateJpaConfiguration.java:130) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateSettings.getDdlAuto(HibernateSettings.java:41) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateProperties.determineDdlAuto(HibernateProperties.java:136) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateProperties.getAdditionalProperties(HibernateProperties.java:102) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateProperties.determineHibernateProperties(HibernateProperties.java:94) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration.getVendorProperties(HibernateJpaConfiguration.java:132) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.JpaBaseConfiguration.entityManagerFactory(JpaBaseConfiguration.java:133) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:640) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:625) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1338) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1177) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:557) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1108) ~[spring-context-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:868) ~[spring-context-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550) ~[spring-context-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at com.ensa.productmanager.ProductManagerApplication.main(ProductManagerApplication.java:10) ~[classes!/:0.0.1-SNAPSHOT]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) ~[app.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) ~[app.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:51) ~[app.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:52) ~[app.jar:0.0.1-SNAPSHOT]
I've spent the whole day debugging and reading threads about this error, but I couldn't find a clue. Any help will be much appreciated.
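An aside that may help while debugging: Spring Boot's relaxed binding lets you override spring.datasource.* properties with environment variables at docker run time, without rebuilding the jar. A minimal sketch, assuming Docker Desktop (where host.docker.internal resolves to the host) and placeholder credentials:
docker run --rm -it \
  -e SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:5432/postgres \
  -e SPRING_DATASOURCE_USERNAME=postgres \
  -e SPRING_DATASOURCE_PASSWORD=password \
  ensa/product-manager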
UPDATE
I've run docker inspect on the Postgres container:
[
{
"Id": "426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10",
"Created": "2020-01-16T19:14:14.7290504Z",
"Path": "docker-entrypoint.sh",
"Args": [
"postgres"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8057,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-01-16T19:14:15.5451891Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e2d75d1c1264a777df31dcbd4fd452b238134eb27854c2a173fdbfaa47ce9b87",
"ResolvConfPath": "/var/lib/docker/containers/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10/hostname",
"HostsPath": "/var/lib/docker/containers/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10/hosts",
"LogPath": "/var/lib/docker/containers/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10-json.log",
"Name": "/lil-postgres",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/Users/hamzabelmellouki/srv/postgres:/var/lib/postgresql/data"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"5432/tcp": [
{
"HostIp": "",
"HostPort": "5432"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": true,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/baabe07f985a61979a3752f7e3c43116b980c45682d27c660b2867a6af3da28b-init/diff:/var/lib/docker/overlay2/ac6700580b74ace6c2f4eb9c274ef405124afb8019948c51516b496ed62eae8d/diff:/var/lib/docker/overlay2/35c484f8059617661ae54f749d0b938e7d81bc2089dd81ef81952aed6079c30b/diff:/var/lib/docker/overlay2/1649505946cf497fd95a5b7a118150a09315767464671f93459e3324f8e5ac42/diff:/var/lib/docker/overlay2/69a4818765c800f405e11c7755e184e4fd0837b68b4664821cf9c80daa7b629b/diff:/var/lib/docker/overlay2/177d3596510c3fdb235ec84728b8b6baf23a406dca3e9a3bfb45e98b48fe2395/diff:/var/lib/docker/overlay2/07d15f6f98f43ff88e6bdf9e361fadaaa46f83121278b5dbddac372cb95d99b8/diff:/var/lib/docker/overlay2/5e10c2001ef13459900f124ae825c210e2e001b0aa2f8b5e1e7c8c5c27d9e403/diff:/var/lib/docker/overlay2/b1e7828ebe991a7401746f581fd640bca3ae74f3a9820fd15e0fb2c666a08754/diff:/var/lib/docker/overlay2/e1b274467493537d29e25cce71acf9c3020d8a2e25080c75aa0fca10a5c05789/diff:/var/lib/docker/overlay2/11141fed542b6c519ec7621e0c4f341d2b2822a7c2885980216a2ef27609ffe0/diff:/var/lib/docker/overlay2/c3e4e9634bfe6a9e4b26f3d2d5751b2fa01aae0e8e1679a800accd94179d527d/diff:/var/lib/docker/overlay2/e7da8a834521c827046a2dc64f6f39afa6fded7f1af3241c75c592756d3053d6/diff:/var/lib/docker/overlay2/91d22f9b7d3aa1fae5cabcfe245a18c7dc6ac372696c73be70b65f7c8b42f1f4/diff:/var/lib/docker/overlay2/1a2021467ef42428eda11a9aef133e4b7ee9364266255abe5a0f2df4a9ff2584/diff",
"MergedDir": "/var/lib/docker/overlay2/baabe07f985a61979a3752f7e3c43116b980c45682d27c660b2867a6af3da28b/merged",
"UpperDir": "/var/lib/docker/overlay2/baabe07f985a61979a3752f7e3c43116b980c45682d27c660b2867a6af3da28b/diff",
"WorkDir": "/var/lib/docker/overlay2/baabe07f985a61979a3752f7e3c43116b980c45682d27c660b2867a6af3da28b/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/Users/hamzabelmellouki/srv/postgres",
"Destination": "/var/lib/postgresql/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "426744f8c0a9",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"5432/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"POSTGRES_USER=hamzabelmellouki",
"POSTGRES_PASSWORD=password",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/11/bin",
"GOSU_VERSION=1.11",
"LANG=en_US.utf8",
"PG_MAJOR=11",
"PG_VERSION=11.5-1.pgdg90+1",
"PGDATA=/var/lib/postgresql/data"
],
"Cmd": [
"postgres"
],
"Image": "postgres",
"Volumes": {
"/var/lib/postgresql/data": {}
},
"WorkingDir": "",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2649b309ebde1f690927a1081497c9dc05dc249642b429f25d930243eb81e08f",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5432/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5432"
}
]
},
"SandboxKey": "/var/run/docker/netns/2649b309ebde",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "f01f60128018d8634f26a0c683ee6865e20361548b3a49007804dc913bfc4292",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "045d48c0f1d22cb7c319d6e850f6e9985a08e6d08884a79e823dbd0ac6c8a1b9",
"EndpointID": "f01f60128018d8634f26a0c683ee6865e20361548b3a49007804dc913bfc4292",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]

Maybe you can use docker-compose to define and run a multi-container Docker application (Postgres + Spring Boot), using a YAML file to configure your application's services.
You must also create a new volume with a different path so Postgres initializes fresh data with the new schema and configuration. If you reuse a volume that was already initialized before, you will hit problems like password authentication failed for user "hamzabelmellouki", the role "hamzabelmellouki" does not exist, or the database does not exist, because the data directory still contains the user created earlier (the POSTGRES_* variables only take effect when the data directory is first initialized).
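For example, one way to start over with a fresh data directory (this destroys the old data, so only do it if the data is disposable):
docker rm -f lil-postgres                      # stop and remove the old container
mv $HOME/srv/postgres $HOME/srv/postgres.bak   # or: rm -rf $HOME/srv/postgres
# On the next run, the image's entrypoint re-initializes the cluster and
# applies POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB again.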
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    container_name: lil-postgres
    ports:
      - "5432:5432"
    restart: unless-stopped
    volumes:
      - "./postgres/data:/var/lib/postgresql/data"
    environment:
      POSTGRES_USER: hamzabelmellouki
      POSTGRES_PASSWORD: password
      POSTGRES_DB: hamzabelmellouki
    networks:
      - product-net
  product:
    image: ensa/product-manager
    restart: unless-stopped
    ports:
      - "8080:8080"
    depends_on:
      - db
    networks:
      - product-net
networks:
  product-net:
    driver: bridge
volumes:
  db:
    driver: local
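With this file, something like the following should bring both services up. Inside the product-net network the app reaches the database by its service name, so the datasource URL would look like jdbc:postgresql://db:5432/hamzabelmellouki:
docker-compose up -d             # or: docker compose up -d
docker-compose logs -f product   # watch the app start and connect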

Try the following:
While running the Postgres container, you have not provided any username; it should be something like:
docker run --rm --net="host" --name lil-postgres -e POSTGRES_USER=hamzabelmellouki -e POSTGRES_PASSWORD=password -d -v $HOME/srv/postgres:/var/lib/postgresql/data -p 5432:5432 postgres:9.6
By default the Postgres container runs in the bridge network (the default Docker network), and in your app you've provided --net="host".
So maybe, due to that, they are not able to communicate.
Provide the same network while running both.
You can check it with the command below:
docker inspect container_name (or container_id)
It's not related, but I've also updated the Docker image tag (postgres:9.6), as it's always good to use a specific tag instead of the default one (the latest tag).
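A sketch of the same-network idea with plain docker run: a user-defined bridge network gives containers DNS by name (the network name product-net here is just an example):
docker network create product-net
docker run --rm --name lil-postgres --net product-net \
  -e POSTGRES_USER=hamzabelmellouki -e POSTGRES_PASSWORD=password \
  -d -v $HOME/srv/postgres:/var/lib/postgresql/data postgres:9.6
# The app can now reach the database at hostname "lil-postgres":
docker run --rm --net product-net -it ensa/product-manager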

Try replacing localhost with the IP address 172.17.0.2 in application.properties:
spring.datasource.url=jdbc:postgresql://172.17.0.2:5432/testdb
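If you go this route, you can look the address up instead of hard-coding it; note the container IP can change whenever the container is recreated, which is why a user-defined network with DNS by container name is usually preferred:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' lil-postgres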

Related

Kafka connect schema registry timeout

EDIT: Turns out I turned on a firewall which limited connectivity from containers to host. Adding firewall rules solved the issue.
I am running a kafka JDBC sink connector with the following properties:
{
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"table.name.format": "events",
"connection.password": "******",
"tasks.max": "1",
"topics": "events",
"value.converter.schema.registry.url": "http://IP:PORT",
"db.buffer.size": "8000000",
"connection.user": "postgres",
"name": "cp-sink-events",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"connection.url": "jdbc:postgresql://IP:PORT/postgres?stringtype=unspecified",
"insert.mode": "upsert",
"pk.mode": "record_value",
"pk.fields": "source,timestamp,event,event_type,value"
}
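For reference, a hedged sketch of how a config like this is typically applied, assuming the Connect REST API listens on its default port 8083 and the JSON above is saved as cp-sink-events.json (a hypothetical file name):
curl -X PUT -H 'Content-Type: application/json' \
  --data @cp-sink-events.json \
  http://localhost:8083/connectors/cp-sink-events/config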
It was working fine before, but since this week I have been getting the following errors while trying to sink my data to Postgres:
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro value schema for id 4
Caused by: java.net.SocketTimeoutException: connect timed out
It appears my Kafka Connect cannot access my Schema Registry server anymore. I couldn't figure out why or how. I have tried multiple things but have yet to find the solution.
I did install NGINX on this VM over the last week, and killed apache2, which was running on port 80. But I haven't found any indication that this would cause problems.
When I curl the schema registry address from the VM to retrieve the schemas for the mentioned IDs, it works fine (http://IP:PORT/schemas/ids/4). Any clue how to proceed?
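One useful check is to repeat the same request from inside the Connect container rather than from the VM (this assumes curl is present in the image; the container name is taken from the network listing further down):
docker exec -it kafka-kafka-connect0-1 curl -sv http://IP:PORT/schemas/ids/4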
EDIT:
If I configure the IP to a random value, I get:
Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable).
So my schema registry on the host seems reachable when the right IP is configured; I don't know where the timeout comes from.
I tried to set the timeout limit, but it didn't work:
SCHEMA_REGISTRY_KAFKASTORE_TIMEOUT_MS: 10000
My compose connect config is set as such:
CONNECT_BOOTSTRAP_SERVERS: kafka0:29092
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: _connect_configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_STORAGE_TOPIC: _connect_offset
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: _connect_status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect0
CONNECT_PLUGIN_PATH: /usr/share/java,/usr/share/confluent-hub-components,/usr/share/local-connectors
Docker Kafka network:
[
{
"Name": "kafka_default",
"Id": "89cd2fe68f2ea3923a76ada4dcb89e505c18792e1abe50fa7ad047e10ee6b673",
"Created": "2023-01-16T18:42:35.531539648+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.25.0.0/16",
"Gateway": "172.25.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"41ac45882364494a357c26e14f8e3b2aede4ace7eaab3dea748c9a5f94430529": {
"Name": "kafka-schemaregistry1-1",
"EndpointID": "56612fbe41396799a8249620dc07b0a5c84c65d311959214b955f538225757ac",
"MacAddress": "02:42:ac:19:00:06",
"IPv4Address": "172.25.0.6/16",
"IPv6Address": ""
},
"42c0847ffb7545d35b2d4116fb5c590a869aec87037601d33267a35e7fe0cb2f": {
"Name": "kafka-kafka-connect0-1",
"EndpointID": "68eef87346aed70bc17ab9960daca4b24073961dcd93bc85c8f7bcdb714feac3",
"MacAddress": "02:42:ac:19:00:08",
"IPv4Address": "172.25.0.8/16",
"IPv6Address": ""
},
"46160f183ba8727fde7b4a7d1770b8d747ed596b8e6b7ca7fea28b39c81dcf7f": {
"Name": "kafka-zookeeper0-1",
"EndpointID": "512970666d1c07a632e0f450bef7ceb6aa3281ca648545ef22de4041fe32a845",
"MacAddress": "02:42:ac:19:00:03",
"IPv4Address": "172.25.0.3/16",
"IPv6Address": ""
},
"6804e9d36647971afe95f5882e7651e39ff8f76a9537c9c6183337fe6379ced9": {
"Name": "kafka-ui",
"EndpointID": "9e9a2a7a04644803703f9c8166d80253258ffba621a5990f3c1efca1112a33a6",
"MacAddress": "02:42:ac:19:00:09",
"IPv4Address": "172.25.0.9/16",
"IPv6Address": ""
},
"8b79e3af68df7d405567c896858a863fecf7f2b32d23138fa065327114b7ce83": {
"Name": "kafka-zookeeper1-1",
"EndpointID": "d5055748e626f1e00066642a7ef60b6606c5a11a4210d0df156ce532fab4e753",
"MacAddress": "02:42:ac:19:00:02",
"IPv4Address": "172.25.0.2/16",
"IPv6Address": ""
},
"92a09c7d3dfb684051660e84793b5328216bf5da4e0ce075d5918c55b9d4034b": {
"Name": "kafka-kafka0-1",
"EndpointID": "cbeba237d1f1c752fd9e4875c8694bdd4d85789bcde4d6d3590f4ef95bb82c6f",
"MacAddress": "02:42:ac:19:00:05",
"IPv4Address": "172.25.0.5/16",
"IPv6Address": ""
},
"e8c5aeef7a1a4a2be6ede2e5436a211d87cbe57ca1d8c506d1905d74171c4f6b": {
"Name": "kafka-kafka1-1",
"EndpointID": "e310477b655cfc60c846035896a62d32c0d07533bceea2c7ab3d17385fe9507b",
"MacAddress": "02:42:ac:19:00:04",
"IPv4Address": "172.25.0.4/16",
"IPv6Address": ""
},
"ecebbd73e861ed4e2ef8e476fa16d95b0983aaa0876a51b0d292b503ef5e9e54": {
"Name": "kafka-schemaregistry0-1",
"EndpointID": "844136d5def798c3837db4256b51c7995011f37576f81d4929087d53a2da7273",
"MacAddress": "02:42:ac:19:00:07",
"IPv4Address": "172.25.0.7/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "kafka",
"com.docker.compose.version": "2.12.2"
}
}
]
Curling the schema registry from inside the connect container works:
curl http://kafka-schemaregistry0-1:8085/schemas/ids/5
EDIT 5:
After changing the URL to the Docker container name, I now have access to the schema registry.
"value.converter.schema.registry.url": "http://kafka-schemaregistry0-1:8085"
However, now my Postgres connection fails.
Caused by: org.postgresql.util.PSQLException: The connection attempt failed
I think the conclusion here is that my connect container was previously able to access containers via the host machine's IP, and that it no longer can. I am curious to know how this can be fixed.
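Given the EDIT at the top (a firewall limited container-to-host connectivity), a hedged example of the kind of rule that restores it on a firewalld host; the subnet is the kafka_default one from the network inspect above:
sudo firewall-cmd --permanent --zone=trusted --add-source=172.25.0.0/16
sudo firewall-cmd --reload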

pgadmin can't see postgres when using compose

I've successfully created 2 containers for testing:
docker container run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=password --hostname postgres --network postgres-net -d -v postgres-vol:/var/lib/postgresql/data postgis/postgis
docker container run --name pgadmin4 -p 5050:80 -v pgadmin4:/var/lib/pgadmin -e PGADMIN_DEFAULT_EMAIL=me@gmail.com -e PGADMIN_DEFAULT_PASSWORD=password --hostname pgadmin4 --network postgres-net --detach dpage/pgadmin4
Both are in the bridge network postgres-net, and a named volume has been created for each of them: pgadmin4 and postgres-vol.
❯ docker container inspect postgres --format '{{json .NetworkSettings}}' | jq
{
"Bridge": "",
"SandboxID": "4a06989f7e03c06b89956681e0f3dd4c400cbeba248d0f418ddf03c8b3e5984e",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5432/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5432"
}
]
},
"Networks": {
"postgres-net": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"abbecb6784e7",
"postgres"
],
"NetworkID": "15baa0bcadc284342cfd1afde7e7800d1c7aab1045b4cbbca7692293d88cb75a",
"EndpointID": "7afbf3f5ea0f0bdc3396aee45e0158caf784bb5f371ab67fc847a5fc72e85d56",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "xxxxxx",
"DriverOpts": null
}
}
For connecting pgadmin to postgres, I just needed to reference the postgres hostname to make it work.
Then, I decided to move on to compose ...
PGADMIN_EMAIL=me@gmail.com PGADMIN_PASSWORD=password PGPASSWORD=password docker compose -f docker/docker-compose.yml up
...using this compose file:
services:
  postgres:
    image: postgis/postgis:latest
    hostname: postgres
    ports:
      - '5432:5432'
    networks:
      - gw-net
    environment:
      - POSTGRES_PASSWORD=${PGPASSWORD}
    volumes:
      - postgres-vol:/var/lib/postgresql/data
  pgadmin4:
    image: dpage/pgadmin4:latest
    hostname: pgadmin4
    ports:
      - '5050:80'
    networks:
      - postgres-net
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_PASSWORD}
    volumes:
      - pgadmin4:/var/lib/pgadmin
networks:
  postgres-net: {}
volumes:
  pgadmin4:
    external: true
  postgres-vol:
    external: true
For whatever reason, the pgadmin4-1 container is not able to connect to the postgres-1 container.
{
"Bridge": "",
"SandboxID": "7550d2c28fbc07ab33769c2255aa38ee0ec0c713257aa777e4e9986fd69715fc",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5432/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5432"
}
]
},
"SandboxKey": "/var/run/docker/netns/7550d2c28fbc",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"docker_gw-net": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"docker-postgres-1",
"postgres",
"03ea9d1f8933"
],
"NetworkID": "591f1440bf5a4b32e3e348b1eccc9e025cb8da05ed5a6423a7768ae9daf969db",
"EndpointID": "f4e78d0b454c5eee9537837464d68006e3bee0799d366415949591db2ff3ed26",
"Gateway": "172.21.0.1",
"IPAddress": "172.21.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "xxxxxxxx",
"DriverOpts": null
}
}
}
I get this error:
Unable to connect to server:
could not translate host name "docker-postgres-1" to address: Name does not resolve
I also tried using "postgres" without success. I can see that the maintenance database I've specified exists and that the user exists as well.
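One thing worth checking, going by the compose file above: postgres joins gw-net while pgadmin4 joins postgres-net, so the two services would end up on different networks. Listing each network's members makes this visible (compose prefixes network names with the project name, e.g. docker_gw-net in the inspect output above):
docker network ls | grep -E 'gw-net|postgres-net'
docker network inspect docker_gw-net --format '{{range .Containers}}{{.Name}} {{end}}'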

Centos8 podman exiting all containers (139)

Whatever image I try to run, the behavior is always the same: "Exited (139)".
OS: CentOS 8 with podman, running inside an Azure VM. The CentOS image is the one provided by Azure when creating a VM.
VM: Azure B2S Gen 2 | 2vCPU(s) | 4 GiB RAM | 8 GiB SSD
I paste below the exact extract from the terminal:
pull
$ podman pull fedora
Trying to pull registry.access.redhat.com/fedora...
name unknown: Repo not found
Trying to pull registry.redhat.io/fedora...
unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Trying to pull docker.io/library/fedora...
Getting image source signatures
Copying blob ae7b613df528 done
Copying config b3048463dc done
Writing manifest to image destination
Storing signatures
b3048463dcefbe4920ef2ae1af43171c9695e2077f315b2bc12ed0f6f67c86c7
run
$ podman run --rm fedora /bin/echo "Hello Geeks! Welcome to Podman"
ps
$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
feb43e01e777 docker.io/library/ubuntu:latest bash 3 minutes ago Exited (139) 3 minutes ago magical_carson
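Exit code 139 is 128 + 11, i.e. the container's main process was killed by SIGSEGV. A hedged way to gather more detail is to rerun with debug logging:
podman --log-level=debug run --rm fedora /bin/echo "Hello Geeks! Welcome to Podman"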
inspect
$ podman inspect feb43e01e777
[
{
"Id": "feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac",
"Created": "2020-12-10T11:35:16.863809294Z",
"Path": "bash",
"Args": [
"bash"
],
"State": {
"OciVersion": "1.0.2-dev",
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 139,
"Error": "",
"StartedAt": "2020-12-10T11:35:17.280743295Z",
"FinishedAt": "2020-12-10T11:35:17.280874897Z",
"Healthcheck": {
"Status": "",
"FailingStreak": 0,
"Log": null
}
},
"Image": "f643c72bc25212974c16f3348b3a898b1ec1eb13ec1539e10a103e6e217eb2f1",
"ImageName": "docker.io/library/ubuntu:latest",
"Rootfs": "",
"Pod": "",
"ResolvConfPath": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/resolv.conf",
"HostnamePath": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/hostname",
"HostsPath": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/hosts",
"StaticDir": "/home/brais/.local/share/containers/storage/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata",
"OCIConfigPath": "/home/brais/.local/share/containers/storage/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/config.json",
"OCIRuntime": "runc",
"LogPath": "/home/brais/.local/share/containers/storage/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/ctr.log",
"LogTag": "",
"ConmonPidFile": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/conmon.pid",
"Name": "magical_carson",
"RestartCount": 0,
"Driver": "overlay",
"MountLabel": "system_u:object_r:container_file_t:s0:c375,c701",
"ProcessLabel": "system_u:system_r:container_t:s0:c375,c701",
"AppArmorProfile": "",
"EffectiveCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"BoundingCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"ExecIDs": [],
"GraphDriver": {
"Name": "overlay",
"Data": {
"LowerDir": "/home/brais/.local/share/containers/storage/overlay/6581dd55e4fe0935a32a688d74513db86632efb162fd41431e7d69318802dfae/diff:/home/brais/.local/share/containers/storage/overlay/1bd27dc7c1c2e7a36c599becda69d0cd905f4f1a122f2b7a95c81a78abc452ec/diff:/home/brais/.local/share/containers/storage/overlay/bacd3af13903e13a43fe87b6944acd1ff21024132aad6e74b4452d984fb1a99a/diff",
"UpperDir": "/home/brais/.local/share/containers/storage/overlay/ccc5801aaacb05d0ed1e64cee2e38f7b4dd8a29890e6fdf780887d296a1c9696/diff",
"WorkDir": "/home/brais/.local/share/containers/storage/overlay/ccc5801aaacb05d0ed1e64cee2e38f7b4dd8a29890e6fdf780887d296a1c9696/work"
}
},
"Mounts": [],
"Dependencies": [],
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": ""
},
"ExitCommand": [
"/usr/bin/podman",
"--root",
"/home/brais/.local/share/containers/storage",
"--runroot",
"/run/user/1000/containers",
"--log-level",
"error",
"--cgroup-manager",
"cgroupfs",
"--tmpdir",
"/run/user/1000/libpod/tmp",
"--runtime",
"runc",
"--storage-driver",
"overlay",
"--storage-opt",
"overlay.mount_program=/usr/bin/fuse-overlayfs",
"--events-backend",
"file",
"container",
"cleanup",
"feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac"
],
"Namespace": "",
"IsInfra": false,
"Config": {
"Hostname": "feb43e01e777",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": true,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TERM=xterm",
"container=podman",
"HOSTNAME=feb43e01e777",
"HOME=/root"
],
"Cmd": [
"bash"
],
"Image": "docker.io/library/ubuntu:latest",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": "",
"OnBuild": null,
"Labels": null,
"Annotations": {
"io.container.manager": "libpod",
"io.kubernetes.cri-o.Created": "2020-12-10T11:35:16.863809294Z",
"io.kubernetes.cri-o.TTY": "true",
"io.podman.annotations.autoremove": "FALSE",
"io.podman.annotations.init": "FALSE",
"io.podman.annotations.privileged": "FALSE",
"io.podman.annotations.publish-all": "FALSE",
"org.opencontainers.image.stopSignal": "15"
},
"StopSignal": 15,
"CreateCommand": [
"podman",
"run",
"-it",
"ubuntu",
"bash"
]
},
"HostConfig": {
"Binds": [],
"CgroupMode": "host",
"ContainerIDFile": "",
"LogConfig": {
"Type": "k8s-file",
"Config": null
},
"NetworkMode": "slirp4netns",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": [],
"CapDrop": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": [],
"GroupAdd": [],
"IpcMode": "private",
"Cgroup": "",
"Cgroups": "default",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "private",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [],
"Tmpfs": {},
"UTSMode": "private",
"UsernsMode": "",
"ShmSize": 65536000,
"Runtime": "oci",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": 0,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
}
}
]
podman info
$ podman info
host:
  arch: amd64
  buildahVersion: 1.15.1
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.20-2.module_el8.3.0+475+c50ce30b.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.20, commit: 1019ecdeda3936be22162bb1cca308192145de53'
  cpus: 2
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: file
  hostname: vm-test1
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 4.18.0-193.28.1.el8_2.x86_64
  linkmode: dynamic
  memFree: 247398400
  memTotal: 4129382400
  ociRuntime:
    name: runc
    package: runc-1.0.0-68.rc92.module_el8.3.0+475+c50ce30b.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.4-2.module_el8.3.0+475+c50ce30b.x86_64
    version: |-
      slirp4netns version 1.1.4
      commit: b66ffa8e262507e37fca689822d23430f3357fe8
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
  swapFree: 0
  swapTotal: 0
  uptime: 17h 48m 18.07s (Approximately 0.71 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/brais/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.1.2-3.module_el8.3.0+507+aa0970ae.x86_64
      Version: |-
        fuse-overlayfs: version 1.1.0
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/brais/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 8
  runRoot: /run/user/1000/containers
  volumePath: /home/brais/.local/share/containers/storage/volumes
version:
  APIVersion: 1
  Built: 1600970293
  BuiltTime: Thu Sep 24 17:58:13 2020
  GitCommit: ""
  GoVersion: go1.14.7
  OsArch: linux/amd64
  Version: 2.0.5

Ansible AWX: playbook runs successfully to create a directory on localhost, but when I go and check that location the directory is not available

I am new to Ansible AWX. I wanted to create a directory on my localhost, so I created the playbook given below. When I run the playbook it shows a success message and "changed" on localhost, but when I go to that location the directory isn't available there.
playbook:
---
- hosts: localhost
  tasks:
    - name: Create Directory
      file:
        path: ~/newDir1
        mode: "0755"
        state: directory
output:
TASK [Create Directory] ********************************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Location, as reported in the output:
{
"path": "/var/lib/awx/newDir1",
"changed": true,
"diff": {
"before": {
"path": "/var/lib/awx/newDir1",
"state": "absent"
},
"after": {
"path": "/var/lib/awx/newDir1",
"state": "directory"
}
},
"uid": 975,
"gid": 975,
"owner": "awx",
"group": "awx",
"mode": "0755",
"state": "directory",
"secontext": "system_u:object_r:tmp_t:s0",
"size": 6,
"invocation": {
"module_args": {
"path": "/var/lib/awx/newDir1",
"mode": "0755",
"state": "directory",
"recurse": false,
"force": false,
"follow": true,
"modification_time_format": "%Y%m%d%H%M.%S",
"access_time_format": "%Y%m%d%H%M.%S",
"_original_basename": null,
"_diff_peek": null,
"src": null,
"modification_time": null,
"access_time": null,
"owner": null,
"group": null,
"seuser": null,
"serole": null,
"selevel": null,
"setype": null,
"attributes": null,
"content": null,
"backup": null,
"remote_src": null,
"regexp": null,
"delimiter": null,
"directory_mode": null,
"unsafe_writes": null
}
},
"_ansible_no_log": false
}
If you are using the AWX Docker image, then you need to check for the created directory inside that container.
Get inside the container and search. :)
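For a docker-compose based AWX install, that check might look like the sketch below; awx_task is the conventional task-container name, but verify it with docker ps:
docker ps --format '{{.Names}}' | grep awx
docker exec -it awx_task ls -ld /var/lib/awx/newDir1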

ssh tunnel to remote dockerized mongo server

I have a mongo docker instance running on a remote server, what is the correct way to access the command line from my local machine?
If I log in to the remote host, I can access it with:
$ docker exec -it mongo-dev mongo ccc-mongo
but i am unsure how to do this from my local machine.
I tried this:
$ ssh -L 4321:localhost:27017 khine@ccc1 -f -N
Are you sure you want to continue connecting (yes/no)? yes
khine@ccc1's password:
khine@dhegdheer:~/Sandboxes/$ mongo --port 4321
MongoDB shell version: 2.4.9
connecting to: 127.0.0.1:4321/test
channel 2: open failed: connect failed: Connection refused
Wed Sep 9 15:36:44.386 DBClientCursor::init call() failed
Wed Sep 9 15:36:44.388 Error: DBClientBase::findN: transport error: 127.0.0.1:4321 ns: admin.$cmd query: { whatsmyuri: 1 } at src/mongo/shell/mongo.js:147
exception: connect failed
On my remote machine I have 3 Mongo instances running:
khine@ccc1 /ccc $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
22a32b4f6a1d redis:2.8 "/entrypoint.sh redi 7 days ago Up 7 days 6379/tcp redis-web
167b022ab793 mongo:2.4 "/entrypoint.sh mong 7 days ago Up 7 days 27017/tcp mongo-web
ab84ea6cb44a redis:2.8 "/entrypoint.sh redi 2 weeks ago Up 2 weeks 6379/tcp redis-www
04dcc306af04 redis:2.8 "/entrypoint.sh redi 2 weeks ago Up 2 weeks 6379/tcp redis-dev
02c0c18307dc mongo:2.4 "/entrypoint.sh mong 2 weeks ago Up 2 weeks 27017/tcp mongo-www
61df69ec7edb mongo:2.4 "/entrypoint.sh mong 2 weeks ago Up 2 weeks 27017/tcp mongo-dev
Running docker inspect, I get this:
khine@ccc1 /ccc $ docker inspect 61df69ec7edb
[{
"AppArmorProfile": "",
"Args": [
"mongod"
],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"mongod"
],
"CpuShares": 0,
"Cpuset": "",
"Domainname": "",
"Entrypoint": [
"/entrypoint.sh"
],
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MONGO_VERSION=2.4.14"
],
"ExposedPorts": {
"27017/tcp": {}
},
"Hostname": "61df69ec7edb",
"Image": "mongo:2.4",
"HostConfig": {
"Binds": [
"/ccc/mongo-data/dev:/data/db"
],
"CapAdd": null,
"CapDrop": null,
"CgroupParent": "",
"Name": "/mongo-dev",
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.34",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "fe80::42:acff:fe11:22",
"LinkLocalIPv6PrefixLen": 64,
"MacAddress": "02:42:ac:11:00:22",
"PortMapping": null,
"Ports": {
"27017/tcp": null
}
},
"Path": "/entrypoint.sh",
"ProcessLabel": "",
"ResolvConfPath": "/var/lib/docker/containers/61df69ec7edb6995f06d797f5b2eed420d0c4daa4cd089c3b9174900d72d0b13/resolv.conf",
"RestartCount": 0,
"State": {
"Dead": false,
"Error": "",
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"OOMKilled": false,
"Paused": false,
"Pid": 15346,
"Restarting": false,
"Running": true,
"StartedAt": "2015-08-26T06:01:55.361817334Z"
},
"Volumes": {
"/data/db": "/ccc/mongo-data/dev"
},
"VolumesRW": {
"/data/db": true
}
}
]
If I add the IP address of the instance, I get this warning:
Warning: remote port forwarding failed for listen port 4321
Any advice much appreciated.
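A hedged sketch of what should work, given the inspect output above: none of the Mongo containers publish 27017 to the host, so nothing listens on the remote localhost:27017; tunnel to the container's bridge IP instead (172.17.0.34 for mongo-dev), using a local forward (-L):
ssh -f -N -L 4321:172.17.0.34:27017 khine@ccc1
mongo --port 4321   # now tunnels through to mongo-dev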