I have a MongoDB instance running inside a container on my host server (say host1). The container's port is mapped to a host port, in my case 27017. The configuration is below.
I have a webapp running inside another container. If I put the webapp container on another server, say host2, it can access that MongoDB, but if it's on host1, it can't. Since I want the webapp to always treat MongoDB as an Internet connection, so the webapp container can run on any host independent of the MongoDB host, I don't want to bind the webapp connection to the host via the LAN network.
Telnet from my laptop to that MongoDB works, but not from inside another container on host1. Any idea where I went wrong? Thanks.
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 48
# where to write logging data.
systemLog:
  destination: file
  logAppend: false
  path: /var/log/mongodb/mongod.log
# network interfaces
net:
  port: 27017
  bindIpAll: true
# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
security:
  authorization: enabled
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options:
#auditLog:
#snmp:
The Docker port mapping:
0.0.0.0:27017->27017/tcp, 0.0.0.0:27017->27017/udp
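For completeness, the failing in-container check looks roughly like this, assuming a telnet or nc binary is available in the image (the container name is the one from the network listing below):
docker exec -it clientA_webapp_container sh -c "nc -vz x.x.x.x 27017"   # x.x.x.x = host1's public IP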
Edits:
I use Mongoose with a MongoDB connection string, from both host1 and host2.
The Docker network driver is overlay.
This is the Docker network config of the webapp:
{
"Name": "clientA_webnet",
"Id": "pr4i54oqo6snwc9jev2zs4tq2",
"Created": "2019-07-17T10:00:55.361594601+07:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.6.0/24",
"Gateway": "10.0.6.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"19cf98a368b5fe515eaaba90e7e036d26925af06196beb1027efa1d982778bb6": {
"Name": "clientA_webapp_container",
"EndpointID": "1a326fc3226cb3b9dc85c3426ca9cfdf8e4543366b07e67e83c10b7f924ab538",
"MacAddress": "02:42:0a:00:06:05",
"IPv4Address": "10.0.6.5/24",
"IPv6Address": ""
},
"3bf158bbc7237a98c4607049b8955b7e25b3e0cbde63f7165f7c6d1f86d5f5a2": {
"Name": "clientA_redis_container",
"EndpointID": "4f294d55f27bb02e9adaac27658b2c7d28bf0a1b5d397c8e5a905e896925db10",
"MacAddress": "02:42:0a:00:06:04",
"IPv4Address": "10.0.6.4/24",
"IPv6Address": ""
},
"7d4cfe90ad1d5b29aef135695a517af17c80a5602008fe2125152c0892242e54": {
"Name": "clientA_nginx_container",
"EndpointID": "18a8d5f2c48f7e056585f302c0e1f654e170fac13ef033f2ecd3a0477bd74d57",
"MacAddress": "02:42:0a:00:06:06",
"IPv4Address": "10.0.6.6/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4103"
},
"Labels": {},
"Peers": [
{
"Name": "8d418ecd9f27",
"IP": "x.x.x.x"
}
]
}
Below is the MongoDB network config:
{
"Name": "common_net",
"Id": "e2c0a6261df67d429e24ae3ac4da82dd6604fb7e1a3d788ce81199391e1d0291",
"Created": "2019-03-04T11:12:19.332798004+07:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.27.0.0/16",
"Gateway": "172.27.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"cdca734916358983fd77f4b9b64434d163a67f146c0c152bdb178502db89d37b": {
"Name": "common_mongo_container",
"EndpointID": "253a7ca61a065de90912aaa8b9dc76f0d491aa10bb93ecba4a71d6417f48a3c4",
"MacAddress": "02:42:ac:1b:00:0a",
"IPv4Address": "172.27.0.10/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Below is how I connect to MongoDB, where x.x.x.x is the real Internet IP address of the server (in this case host1):
mongodb: 'mongodb://username:password@x.x.x.x:27017/dashboard'
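For reference, a minimal sketch of how the webapp would open this connection with Mongoose; the option and error handling here are illustrative assumptions, not taken from the actual app:
const mongoose = require('mongoose');
// minimal sketch, assuming Mongoose 5.x; the URI is the same public-IP string as above
mongoose
  .connect('mongodb://username:password@x.x.x.x:27017/dashboard', { useNewUrlParser: true })
  .then(() => console.log('connected to mongodb'))
  .catch((err) => console.error('mongodb connection failed:', err));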
EDIT: It turns out I had turned on a firewall that limited connectivity from containers to the host. Adding firewall rules solved the issue.
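The exact rules depend on the firewall in use, so take this only as a rough sketch of the idea: with an iptables-based firewall, the forwarded traffic to the published MongoDB port has to be allowed, for example via Docker's DOCKER-USER chain:
# rough sketch only; adjust to the firewall actually enabled on host1
iptables -I DOCKER-USER -p tcp --dport 27017 -j ACCEPT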
I am running a Kafka JDBC sink connector with the following properties:
{
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"table.name.format": "events",
"connection.password": "******",
"tasks.max": "1",
"topics": "events",
"value.converter.schema.registry.url": "http://IP:PORT",
"db.buffer.size": "8000000",
"connection.user": "postgres",
"name": "cp-sink-events",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"connection.url": "jdbc:postgresql://IP:PORT/postgres?stringtype=unspecified",
"insert.mode": "upsert",
"pk.mode": "record_value",
"pk.fields": "source,timestamp,event,event_type,value"
}
It was working fine before, but since this week I have been getting the following errors while trying to sink my data to Postgres:
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro value schema for id 4
Caused by: java.net.SocketTimeoutException: connect timed out
It appears my Kafka Connect can no longer access my Schema Registry server. I couldn't figure out why or how. I have tried multiple things but have yet to find the solution.
I did install NGINX on this VM last week and killed apache2, which was running on port 80, but I haven't found any indication that this would cause problems.
When I curl the Schema Registry address from the VM to retrieve the schema for the mentioned ID, it works fine (http://IP:PORT/schemas/ids/4). Any clue how to proceed?
EDIT:
If I configure the IP to a random value I get:
Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable).
So my host Schema Registry seems reachable when the right IP is configured; I don't know where the timeout comes from.
I tried to set a timeout limit, but it didn't work:
SCHEMA_REGISTRY_KAFKASTORE_TIMEOUT_MS: 10000
My Compose config for Connect is set as follows:
CONNECT_BOOTSTRAP_SERVERS: kafka0:29092
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: _connect_configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_STORAGE_TOPIC: _connect_offset
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: _connect_status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schemaregistry0:8085
CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect0
CONNECT_PLUGIN_PATH: /usr/share/java,/usr/share/confluent-hub-components,/usr/share/local-connectors
Docker Kafka network:
[
{
"Name": "kafka_default",
"Id": "89cd2fe68f2ea3923a76ada4dcb89e505c18792e1abe50fa7ad047e10ee6b673",
"Created": "2023-01-16T18:42:35.531539648+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.25.0.0/16",
"Gateway": "172.25.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"41ac45882364494a357c26e14f8e3b2aede4ace7eaab3dea748c9a5f94430529": {
"Name": "kafka-schemaregistry1-1",
"EndpointID": "56612fbe41396799a8249620dc07b0a5c84c65d311959214b955f538225757ac",
"MacAddress": "02:42:ac:19:00:06",
"IPv4Address": "172.25.0.6/16",
"IPv6Address": ""
},
"42c0847ffb7545d35b2d4116fb5c590a869aec87037601d33267a35e7fe0cb2f": {
"Name": "kafka-kafka-connect0-1",
"EndpointID": "68eef87346aed70bc17ab9960daca4b24073961dcd93bc85c8f7bcdb714feac3",
"MacAddress": "02:42:ac:19:00:08",
"IPv4Address": "172.25.0.8/16",
"IPv6Address": ""
},
"46160f183ba8727fde7b4a7d1770b8d747ed596b8e6b7ca7fea28b39c81dcf7f": {
"Name": "kafka-zookeeper0-1",
"EndpointID": "512970666d1c07a632e0f450bef7ceb6aa3281ca648545ef22de4041fe32a845",
"MacAddress": "02:42:ac:19:00:03",
"IPv4Address": "172.25.0.3/16",
"IPv6Address": ""
},
"6804e9d36647971afe95f5882e7651e39ff8f76a9537c9c6183337fe6379ced9": {
"Name": "kafka-ui",
"EndpointID": "9e9a2a7a04644803703f9c8166d80253258ffba621a5990f3c1efca1112a33a6",
"MacAddress": "02:42:ac:19:00:09",
"IPv4Address": "172.25.0.9/16",
"IPv6Address": ""
},
"8b79e3af68df7d405567c896858a863fecf7f2b32d23138fa065327114b7ce83": {
"Name": "kafka-zookeeper1-1",
"EndpointID": "d5055748e626f1e00066642a7ef60b6606c5a11a4210d0df156ce532fab4e753",
"MacAddress": "02:42:ac:19:00:02",
"IPv4Address": "172.25.0.2/16",
"IPv6Address": ""
},
"92a09c7d3dfb684051660e84793b5328216bf5da4e0ce075d5918c55b9d4034b": {
"Name": "kafka-kafka0-1",
"EndpointID": "cbeba237d1f1c752fd9e4875c8694bdd4d85789bcde4d6d3590f4ef95bb82c6f",
"MacAddress": "02:42:ac:19:00:05",
"IPv4Address": "172.25.0.5/16",
"IPv6Address": ""
},
"e8c5aeef7a1a4a2be6ede2e5436a211d87cbe57ca1d8c506d1905d74171c4f6b": {
"Name": "kafka-kafka1-1",
"EndpointID": "e310477b655cfc60c846035896a62d32c0d07533bceea2c7ab3d17385fe9507b",
"MacAddress": "02:42:ac:19:00:04",
"IPv4Address": "172.25.0.4/16",
"IPv6Address": ""
},
"ecebbd73e861ed4e2ef8e476fa16d95b0983aaa0876a51b0d292b503ef5e9e54": {
"Name": "kafka-schemaregistry0-1",
"EndpointID": "844136d5def798c3837db4256b51c7995011f37576f81d4929087d53a2da7273",
"MacAddress": "02:42:ac:19:00:07",
"IPv4Address": "172.25.0.7/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "kafka",
"com.docker.compose.version": "2.12.2"
}
}
]
Curling the Schema Registry container by name works from inside the Connect container:
curl http://kafka-schemaregistry0-1:8085/schemas/ids/5
EDIT 5:
After changing the URL to the Docker container name, I now have access to the Schema Registry:
"value.converter.schema.registry.url": "http://kafka-schemaregistry0-1:8085"
However, now my Postgres connection fails:
Caused by: org.postgresql.util.PSQLException: The connection attempt failed
I think the conclusion here is that my Connect container was previously able to reach other containers via the host machine's IP, and now it no longer can. I am curious how this can be fixed.
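If Postgres also runs as a container on the same host, one way out, sketched here with placeholder names (the container name postgres and its membership in kafka_default are assumptions, not taken from my setup), is to address it by container name over a shared Docker network instead of going through the host IP:
docker network connect kafka_default postgres
and in the connector config:
"connection.url": "jdbc:postgresql://postgres:5432/postgres?stringtype=unspecified"
Alternatively, keep the host IP and open the host firewall for traffic coming from the Docker subnets.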
I am trying to run the wazuh/wazuh Docker container on ECS. I was able to register the task definition and launch the container using Terraform. However, I am facing an issue with the "Volume" (Data Volume) while registering the task definition using an AWS CLI command.
Command: aws ecs --region eu-west-1 register-task-definition --family hids --cli-input-json file://task-definition.json
Error:
ParamValidationError: Parameter validation failed:
Unknown parameter in volumes[0]: "dockerVolumeConfiguration", must be one of: name, host
2019-08-29 07:31:59,195 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
{
"containerDefinitions": [
{
"portMappings": [
{
"hostPort": 514,
"containerPort": 514,
"protocol": "udp"
},
{
"hostPort": 1514,
"containerPort": 1514,
"protocol": "udp"
},
{
"hostPort": 1515,
"containerPort": 1515,
"protocol": "tcp"
},
{
"hostPort": 1516,
"containerPort": 1516,
"protocol": "tcp"
},
{
"hostPort": 55000,
"containerPort": 55000,
"protocol": "tcp"
}
],
"image": "wazuh/wazuh",
"essential": true,
"name": "chids",
"cpu": 1600,
"memory": 1600,
"mountPoints": [
{
"containerPath": "/var/ossec/data",
"sourceVolume": "ossec-data"
},
{
"containerPath": "/etc/filebeat",
"sourceVolume": "filebeat_etc"
},
{
"containerPath": "/var/lib/filebeat",
"sourceVolume": "filebeat_lib"
},
{
"containerPath": "/etc/postfix",
"sourceVolume": "postfix"
}
]
}
],
"volumes": [
{
"name": "ossec-data",
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
},
{
"name": "filebeat_etc",
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
},
{
"name": "filebeat_lib",
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
},
{
"name": "postfix",
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
}
]
}
I tried adding the "host" parameter (although it supports bind mounts only), but got the same error.
"volumes": [
{
"name": "ossec-data",
"host": {
"sourcePath": "/var/ossec/data"
},
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
}
]
ECS should register the task definition with the 4 data volumes and associated mount points.
I found the issue.
Removing the "dockerVolumeConfiguration" parameter from the volume configuration made it work.
"volumes": [
{
"name": "ossec-data",
"host": {
"sourcePath": "/ecs/ossec-data"
}
},
{
"name": "filebeat_etc",
"host": {
"sourcePath": "/ecs/filebeat_etc"
}
},
{
"name": "filebeat_lib",
"host": {
"sourcePath": "/ecs/filebeat_lib"
}
},
{
"name": "postfix",
"host": {
"sourcePath": "/ecs/postfix"
}
}
]
Can you check your version of the AWS CLI?
aws --version
According to all the documentation, your first task definition should work fine and I tested it locally without any issues.
It might be that you are using an older AWS CLI version where the syntax or parameters were different at the time.
Could you try updating your aws cli to the latest version and try again?
--
Some additional info I found:
Checking the aws ecs CLI command, docker volume configuration was added to the CLI in v1.80.
The main aws-cli releases update periodically to pick up command changes, but they don't provide much detail on which version each specific command changed in:
https://github.com/aws/aws-cli/blob/develop/CHANGELOG.rst
If you update your aws-cli version, things should work.
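As a quick sketch, assuming the CLI was installed through pip (the upgrade path differs for the bundled or MSI installers):
aws --version                      # check the current version
pip3 install --upgrade awscli      # upgrade, then re-run register-task-definition
aws --version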
I have created a JPS file using the documentation at https://docs.jelastic.com/application-manifest.
But there is no clear documentation on using PostgreSQL.
Jelastic JPS Node:
{
"nodeType": "postgres9",
"restart": false,
"database": {
"name": "xxxx",
"user": "xxx",
"dump": "xxx.sql"
}
}
Error while configuring the environment:
"data": {
"result": 11005,
"source": "marketplace",
"error": "database query error: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=10.101.3.225)(port=3306)(type=master) : Connection refused (Connection refused)"
}
I have provided the whole JPS file content here. I get the error when importing the database; the other entries in the configs object work fine.
{
"jpsVersion": "0.1",
"jpsType": "install",
"application": {
"id": "xxx",
"name": "xxx",
"version": "0.0.1",
"logo": "http://example.com/img/logo.png",
"type": "php",
"homepage": "http://example.com/",
"description": {
"en": "xxx"
},
"env": {
"topology": {
"ha": false,
"engine": "php7.2",
"ssl": false,
"nodes": [
{
"extip": false,
"count": 1,
"cloudlets": 16,
"nodeType": "nginxphp"
},
{
"extip": false,
"count": 1,
"cloudlets": 16,
"nodeType": "postgres9"
}
]
},
"upload": [
{
"nodeType": "nginxphp",
"sourcePath": "https://example.com/xxx.conf",
"destPath": "${SERVER_CONF_D}/xxx.conf"
}
],
"deployments": [
{
"archive": "https://example.com/xxx.zip",
"name": "xxx.zip",
"context": "ROOT"
}
],
"configs": [
{
"nodeType": "nginxphp",
"restart": true,
"path": "${SERVER_CONF_D}/xxx.conf",
"replacements": [
{
"pattern":"/usr/share/nginx/html",
"replacement":"${SERVER_WEBROOT}"
}
]
},
{
"nodeType": "postgres9",
"restart": false,
"database": {
"name": "xxx",
"user": "xxx",
"dump": "https://example.com/xxx.sql"
}
}, {
"restart": false,
"nodeType": "nginxphp",
"path": "${SERVER_WEBROOT}/ROOT/server/php/config.inc.php",
"replacements": [{
"replacement": "${nodes.postgres9.address}",
"pattern": "localhost"
}, {
"replacement": "${nodes.postgres9.database.password}",
"pattern": "xxx"
}
]
}
]
},
"success": {
"text": "Installation completed. username: admin and password: xxx"
}
}
}
Since database actions are disabled for Postgres so far (the action is executed only for mysql5, mariadb, and mariadb10 containers), we've improved your manifest based on the recent updates. YAML was used because it is clearer to read and understand:
jpsVersion: 0.1
jpsType: install
name: xxx
version: 0.0.1
logo: http://example.com/img/logo.png
engine: php7.2
nodes:
  - cloudlets: 16
    nodeType: nginxphp
  - cloudlets: 16
    nodeType: postgres9
onInstall:
  - upload [nginxphp]:
      sourcePath: https://example.com/xxx.conf
      destPath: ${SERVER_CONF_D}/xxx.conf
  - deploy:
      archive: https://example.com/xxx.zip
      name: xxx.zip
      context: ROOT
  - replaceInFile [nginxphp]:
      path: ${SERVER_CONF_D}/xxx.conf
      replacements:
        - pattern: /usr/share/nginx/html
          replacement: ${SERVER_WEBROOT}
  - restartNodes [nginxphp]
  - replaceInFile [nginxphp]:
      path: ${SERVER_WEBROOT}/ROOT/server/php/config.inc.php
      replacements:
        - pattern: localhost
          replacement: ${nodes.postgres9.address}
        - pattern: xxx
          replacement: ${nodes.postgres9.password}
  - cmd [postgres9]: printf "PGPASSWORD=${nodes.postgres9.password};\nexport PGPASSWORD;\npsql postgres webadmin -c \"CREATE DATABASE Jelastic;\"\n" > /tmp/createDb
  - cmd [postgres9]: chmod +x /tmp/createDb && /tmp/createDb
success: Installation completed. username admin and password xxx
Please note that you can debug every action in the /console tab.
I'm writing a microservice with Spring Boot. The DB is MongoDB. The service works perfectly in my local environment, but after I deployed it to Cloud Foundry it doesn't work. The reason is that connecting to MongoDB times out.
I think the root cause is that the application doesn't know it is running in the cloud, because it still connects to 127.0.0.1:27017 rather than the bound service's port.
How can it know it is running in the cloud? Thank you!
EDIT:
There is a MongoDB instance bound to the service. When I checked the environment information, I got the following:
{
"VCAP_SERVICES": {
"mongodb": [
{
"credentials": {
"hostname": "10.11.241.1",
"ports": {
"27017/tcp": "43417",
"28017/tcp": "43135"
},
"port": "43417",
"username": "xxxxxxxxxx",
"password": "xxxxxxxxxx",
"dbname": "gwkp7glhw9tq9cwp",
"uri": "xxxxxxxxxx"
},
"syslog_drain_url": null,
"volume_mounts": [],
"label": "mongodb",
"provider": null,
"plan": "v3.0-container",
"name": "mongodb-business-configuration",
"tags": [
"mongodb",
"document"
]
}
]
}
}
{
"VCAP_APPLICATION": {
"cf_api": "xxxxxxxxxx",
"limits": {
"fds": 16384,
"mem": 1024,
"disk": 1024
},
"application_name": "mock-service",
"application_uris": [
"xxxxxxxxxx"
],
"name": "mock-service",
"space_name": "xxxxxxxxxx",
"space_id": "xxxxxxxxxx",
"uris": [
"xxxxxxxxxx"
],
"users": null,
"application_id": "xxxxxxxxxx",
"version": "c7569d23-f3ee-49d0-9875-8e595ee76522",
"application_version": "c7569d23-f3ee-49d0-9875-8e595ee76522"
}
}
From my understanding, my Spring Boot service should try to connect to port 43417, not 27017, right? Thank you!
Finally I found that the reason is I didn't specify the profile. After adding the following to my manifest.yml, it works:
env:
  SPRING_PROFILES_ACTIVE: cloud
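For reference, Spring Boot flattens the bound Cloud Foundry services into vcap.* properties, so a cloud profile can point the Mongo URI at the bound credentials; a minimal sketch (the property file itself is an assumption, the service name is the one from VCAP_SERVICES above):
# application-cloud.properties (hypothetical)
spring.data.mongodb.uri=${vcap.services.mongodb-business-configuration.credentials.uri}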
I am trying to connect to my XAMPP server, but the connection is not being made.
Here is my datasources.json:
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "test": {
    "host": "localhost",
    "port": 3306,
    "database": "test",
    "name": "test",
    "debug": false,
    "connector": "mysql",
    "socketPath": "C:/xampp/mysql/bin/my.ini"
  }
}
I do not know whether my socket path is the issue.
Error:
Error:connect ENOTSOCK C:/xampp/mysql/bin/my.ini
In your config file, socketPath should point to the socket file, which commonly has a .sock extension, for example:
C:/xampp/mysql/mysql.sock
You are not providing the password and username of the MySQL database:
"test": {
"host": "127.0.0.1",
"port": 3306,
"database": "mysqltest",
"password": "", ///password
"name": "test",
"connector": "mysql",
"user": "root" ///username
}