I'm using the Fabric 1.1 alpha release and trying to set it up with Docker Swarm. I'm using Docker Compose files with docker stack to deploy the containers.
The issue I'm having is that the chaincode listening port (7052, hardcoded somewhere in the peer container) is not accepting connections under Docker Swarm.
The same compose file, with minor changes, works if I don't use Docker Swarm.
I'm not sure whether the problem is with the peer itself or with Docker Swarm.
This is from my peer container, which is clearly not allowing any connections on port 7052:
root@71c1b8f22052:/opt/gopath/src/github.com/hyperledger/fabric/peer# telnet 10.0.0.6 7051
Trying 10.0.0.6...
Connected to 10.0.0.6.
Escape character is '^]'.
^CConnection closed by foreign host.
root@71c1b8f22052:/opt/gopath/src/github.com/hyperledger/fabric/peer# telnet 10.0.0.6 7052
Trying 10.0.0.6...
telnet: Unable to connect to remote host: Connection refused
root@71c1b8f22052:/opt/gopath/src/github.com/hyperledger/fabric/peer# telnet 10.0.0.6 7053
Trying 10.0.0.6...
Connected to 10.0.0.6.
Escape character is '^]'.
^CConnection closed by foreign host.
But netstat shows the peer is listening on port 7052:
root@71c1b8f22052:/opt/gopath/src/github.com/hyperledger/fabric/peer# netstat -nalp | grep 7052
tcp 0 0 10.0.0.6:7052 0.0.0.0:* LISTEN 7/peer
I'm getting this in my chaincode container logs when I instantiate the chaincode:
2018-02-06 09:45:11.886 UTC [bccsp] initBCCSP -> DEBU 001 Initialize BCCSP [SW]
2018-02-06 09:45:11.906 UTC [grpc] Printf -> DEBU 002 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.0.10:7052: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7052 <nil>}
2018-02-06 09:45:12.905 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.0.10:7052: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7052 <nil>}
2018-02-06 09:45:14.612 UTC [grpc] Printf -> DEBU 004 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.0.10:7052: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7052 <nil>}
2018-02-06 09:45:14.904 UTC [shim] userChaincodeStreamGetter -> ERRO 005 context deadline exceeded
error trying to connect to local peer
github.com/hyperledger/fabric/core/chaincode/shim.userChaincodeStreamGetter
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/chaincode.go:111
github.com/hyperledger/fabric/core/chaincode/shim.Start
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/chaincode.go:150
main.main
/chaincode/input/src/github.com/chaincode/alepomm/alepomm.go:355
runtime.main
/opt/go/src/runtime/proc.go:195
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2337
Error creating new Smart Contract: error trying to connect to local peer: context deadline exceeded
(Ignore the timestamps and IPs here; the logs are from different runs, but the same thing happens every time.)
Here is the compose section for the peer:
peer0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    # the following setting starts chaincode containers on the same
    # bridge network as the peers
    # https://docs.docker.com/compose/networking/
    - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_PROFILE_ENABLED=true
    - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    - CORE_PEER_ID=peer0.org1.example.com
    - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
    - CORE_PEER_LOCALMSPID=Org1MSP
    # - CORE_PEER_ADDRESSAUTODETECT=true
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
  ports:
    - 7051:7051
    - 7053:7053
  expose:
    - 7051
    - 7053
  command: peer node start
  depends_on:
    - couchdb0
  networks:
    fabric:
      aliases:
        - "peer0.org1.example.com"
  deploy:
    placement:
      constraints:
        - node.hostname == ip-172-31-22-132
This is really weird: if I remove the deploy section (which is stack-specific), everything works.
My network is an overlay network with swarm scope.
@yacovm, thank you so much for the help! The issue was that the chaincode container was not launching on the same network as my peer container, and hence it was not able to connect to it. To fix it,
CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
was added to the environment of my peer containers. Now it works like a charm.
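For reference, a minimal sketch of where that variable fits in the peer service shown above (everything else unchanged):

peer0:
  environment:
    # listen for chaincode connections on all of the container's interfaces,
    # not only the single overlay IP the peer would otherwise bind to
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
    # ... rest of the environment as above ...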
Related
I have created a composed Docker image, which is based on the following components:
version: "3"
services:
  db:
    image: kartoza/postgis:14-3.2
    environment:
      - POSTGRES_DB=AAAAAAAA
      - POSTGRES_USER=BBBBBBBBBB
      - POSTGRES_PASS=CCCCCCCCCC
      - POSTGRES_MULTIPLE_EXTENSIONS=postgis,postgis_raster
    ports:
      - "5432:5432"
    restart: on-failure
    healthcheck:
      test: "exit 0"
  shiny:
    container_name: shiny
    build: ./webapp4
    ports:
      - "8787:8787"
    depends_on:
      - "db"
Everything builds well and comes up.
But when the Shiny app tries to connect to the database with the following code:
remote_conn <- dbConnect(RPostgres::Postgres(),
                         dbname = "AAAAAAAA",
                         host = "localhost",
                         port = "5432",
                         user = "BBBBBBBBBB",
                         password = "CCCCCCCCCC")
I get the following output:
Warning: Error in : could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
55: <Anonymous>
54: stop
53: connection_create
52: .local
51: dbConnect
49: server
3: runApp
2: print.shiny.appobj
1: <Anonymous>
Error : could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
Can someone explain how to solve this?
Thanks
Have you tried adding the same network to both containers?
Just define networks, add the network name to both services, and when connecting make sure to use the service name as the host argument; in your case that would be
host="db" instead of host="localhost".
For an example look at https://github.com/Tornike-Skhulukhia/postgres-to-mongo-importer/blob/main/docker-compose.yml
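As a sketch against the compose file above (the network name appnet is a hypothetical choice; any name works as long as both services share it):

version: "3"
services:
  db:
    image: kartoza/postgis:14-3.2
    networks:
      - appnet   # shared network
  shiny:
    build: ./webapp4
    depends_on:
      - db
    networks:
      - appnet   # same network as db
networks:
  appnet:

Inside the Shiny app you would then connect with host = "db"; Docker's embedded DNS resolves the service name on the shared network.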
I'm trying to install TimescaleDB using Docker Compose, but I get the following error when importing data with timescaledb-parallel-copy:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/import_data.sh
timescaledb | panic: could not connect: dial tcp 172.18.0.2:5432: connect: connection refused
timescaledb |
timescaledb | goroutine 6 [running]:
timescaledb | main.processBatches(0xc000016730, 0xc000060780)
timescaledb |   /go/src/github.com/timescale/timescaledb-parallel-copy/cmd/timescaledb-parallel-copy/main.go:238 +0x8bb
timescaledb | created by main.main
timescaledb |   /go/src/github.com/timescale/timescaledb-parallel-copy/cmd/timescaledb-parallel-copy/main.go:148 +0x1d2
timescaledb | panic: could not connect: dial tcp 172.18.0.2:5432: connect: connection refused
Here are my docker-compose file and my Dockerfile.
Docker compose:
version: "3.8"
services:
  timescaledb:
    container_name: timescaledb
    build:
      context: "./timescaledb"
      dockerfile: "docker_file"
    env_file:
      - "./timescaledb/environment.env"
    volumes:
      - "./timescaledb/data:/data"
    ports:
      - "5432:5432/tcp"
    networks:
      - local_network
    restart: on-failure
networks:
  local_network:
Dockerfile:
FROM timescale/timescaledb:latest-pg13
ADD create_tables.sql /docker-entrypoint-initdb.d
ADD import_data.sh /docker-entrypoint-initdb.d
RUN chmod a+r /docker-entrypoint-initdb.d/*
And here's the import_data.sh script that calls timescaledb-parallel-copy:
#!/bin/bash
timescaledb-parallel-copy --connection "host=timescaledb user=postgres password=XXX sslmode=disable" --db-name YYY --table ZZZ --copy-options "CSV" --skip-header --columns "name, unit" --file "/data/data.csv" --reporting-period 30s --workers 4
I also tried using localhost, but I get the same error.
Edit: Problem solved. The host should not be specified in the connection string. According to the official documentation (https://hub.docker.com/_/postgres):
Also, as of docker-library/postgres#440, the temporary daemon started for these initialization scripts listens only on the Unix socket, so any psql usage should drop the hostname portion (see docker-library/postgres#474 (comment) for example).
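Following that note, a sketch of the corrected import_data.sh (drop the host= portion, per the edit above; the user/db/table placeholders are kept from the original command):

#!/bin/bash
# no host= in the connection string, so the init-time connection works
# with the temporary daemon described in the docs above
timescaledb-parallel-copy \
  --connection "user=postgres password=XXX sslmode=disable" \
  --db-name YYY --table ZZZ \
  --copy-options "CSV" --skip-header \
  --columns "name, unit" \
  --file "/data/data.csv" \
  --reporting-period 30s --workers 4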
I have two servers (CentOS 8).
On server1 I have a mysql-server container, and on server2 I have the Zabbix front-end, i.e. zabbix-web-apache-mysql (container name zabbixfrontend).
I am trying to connect to mysql-server from the zabbixfrontend container and getting this error:
bash-4.4$ mysql -h <MYSQL_SERVER_IP> -P 3306 -uroot -p
Enter password:
ERROR 2002 (HY000): Can't connect to MySQL server on '<MYSQL_SERVER_IP>' (115)
When I nc from the zabbixfrontend container to my mysql-server IP, I get a "No route to host" error:
bash-4.4$ nc -zv <MYSQL_SERVER_IP> 3306
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: No route to host.
NOTE: I can successfully nc to the mysql-server container from the host machine (server2).
docker-compose.yml
version: '3.5'
services:
  zabbix-web-apache-mysql:
    image: zabbix/zabbix-web-apache-mysql:centos-8.0-latest
    container_name: zabbixfrontend
    #network_mode: host
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zbx_env/etc/ssl/apache2:/etc/ssl/apache2:ro
      - ./usr/share/zabbix/:/usr/share/zabbix/
    env_file:
      - .env_db_mysql
      - .env_web
    secrets:
      - MYSQL_USER
      - MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD
    # zbx_net_frontend:
    sysctls:
      - net.core.somaxconn=65535
secrets:
  MYSQL_USER:
    file: ./.MYSQL_USER
  MYSQL_PASSWORD:
    file: ./.MYSQL_PASSWORD
  MYSQL_ROOT_PASSWORD:
    file: ./.MYSQL_ROOT_PASSWORD
docker logs zabbixfrontend output is as below:
** Deploying Zabbix web-interface (Apache) with MySQL database
** Using MYSQL_USER variable from ENV
** Using MYSQL_PASSWORD variable from ENV
********************
* DB_SERVER_HOST: <MYSQL_SERVER_IP>
* DB_SERVER_PORT: 3306
* DB_SERVER_DBNAME: zabbix
********************
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
The nc message is telling the truth: there is no route to host.
This happens because when you deploy your front-end container on the Docker bridge network, its IP address belongs to the 172.18.0.0/16 subnet, and you are trying to reach the database via an IP address that belongs to a different subnet (10.0.0.0/16).
On the other hand, when you deploy your front-end container on the host network, you no longer face that problem, because the container is literally using the IP address of the host machine, 10.0.0.2, and no route needs to be explicitly created to reach 10.0.0.3.
The problem you then face is that you can no longer access the web UI via the browser. This happens because (I assume) you kept the ports: option in your docker-compose.yml and tried to access the service on localhost:80/443. Source and destination ports do not need to be specified when the container runs on the host network; the container just listens directly on the host, on the port that is opened inside the container.
Try to run the front-end container with this config and then access it on localhost:8080 and localhost:8443:
...
network_mode: host
# ports:
# - "80:8080"
# - "443:8443"
volumes:
...
Running containers on the host network is not something I would usually recommend, but since your setup is quite special, with one container running on one Docker host and another container running on an independent Docker host, I assume you don't want to create an overlay network and register the two Docker hosts to a swarm.
I have been reading this page for years, and now I need some help.
I'm starting to configure Prometheus to collect metrics from Docker Swarm and Docker containers. It works really well with cAdvisor and Node Exporter, but now I'm having issues collecting metrics from a PostgreSQL Docker container. I'm using this exporter: https://github.com/wrouesnel/postgres_exporter
This is the service in docker-compose.yml:
postgresql-exporter:
  image: wrouesnel/postgres_exporter
  ports:
    - 9187:9187
  networks:
    - backend
  environment:
    - DATA_SOURCE_NAME=postgresql://example:<password>@localhost:5432/example?sslmode=disable
And this is in prometheus.yml
- job_name: 'postgresql-exporter'
  static_configs:
    - targets: ['postgresql-exporter:9187']
We have two stacks: one with the db, and one with the monitoring services.
Logs from the postgresql-exporter service:
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:20Z" level=error msg="Error opening connection to database (postgresql://example:PASSWORD_REMOVED@localhost:5432/example?sslmode=disable): dial tcp 127.0.0.1:5432: connect: connection refused" source="postgres_exporter.go:1403"
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:29Z" level=info msg="Established new database connection to \"localhost:5432\"." source="postgres_exporter.go:814"
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:30Z" level=info msg="Established new database connection to \"localhost:5432\"." source="postgres_exporter.go:814"
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:32Z" level=info msg="Established new database connection to \"localhost:5432\"." source="postgres_exporter.go:814"
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:35Z" level=error msg="Error opening connection to database (postgresql://example:PASSWORD_REMOVED@localhost:5432/example?sslmode=disable): dial tcp 127.0.0.1:5432: connect: connection refused" source="postgres_exporter.go:1403"
But when I look in the Prometheus targets section, the postgresql-exporter endpoint says it is "UP".
And when I check the pg_up metric, it says 0: no connection.
Any idea how I can solve this?
Any help will be appreciated, thanks!
EDIT: Here is the config of the PostgreSQL Docker service db:
pg:
  image: registry.xxx.com:443/pg:201908221000
  environment:
    - POSTGRES_DB=example
    - POSTGRES_USER=example
    - POSTGRES_PASSWORD=example
  volumes:
    - ./postgres/db_data:/var/lib/postgresql/data
  networks:
    - allnet
  deploy:
    mode: replicated
    replicas: 1
    placement:
      constraints: [node.role == manager]
    restart_policy:
      condition: on-failure
      max_attempts: 3
      window: 120s
Thanks
I can't connect to Drone.io with my GitHub account, and I have several problems with the app:
1) drone-agent can't connect to the server:
dodge@comp:$ drone agent
28070:M 15 Nov 22:04:01.906 * connecting to server http://<my_ip>
28070:M 15 Nov 22:04:01.906 # connection failed, retry in 15s. websocket.Dial http://<my_ip>: bad scheme
2) I can't add PostgreSQL to docker-compose.
When I add this text from your site:
DRONE_DATABASE_DRIVER: postgres
DRONE_DATABASE_DATASOURCE: postgres://root:password@1.2.3.4:5432/postgres?sslmode=disable
I have this error
INFO: 2017/11/15 19:42:33 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.18.0.2:9000: getsockopt: connection refused"; Reconnecting to {drone-server:9000 <nil>}
3) When I use only a server and an agent in docker-compose, I get this error:
dodge@comp:$ drone server
ERRO[0000] sql: unknown driver "sqlite3" (forgotten import?)
FATA[0000] database connection failed
docker-compose.yml
version: '2'
services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 80:8000
      - 9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
      - ./drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_DEBUG=true
      - DRONE_OPEN=true
      - DRONE_HOST=http://172.18.0.2
      - DRONE_GITHUB=true
      - DRONE_GITHUB_CLIENT=secretid
      - DRONE_GITHUB_SECRET=secretpass
      - DRONE_SECRET=password
  drone-agent:
    image: drone/agent:0.8
    command: agent
    restart: always
    depends_on: [ drone-server ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=password
4) I cannot start tests in my project. Maybe I missed something during the setup.
$ drone server
$ drone agent
I see the above commands in your examples. These commands are only available in drone 0.7 and below; drone 0.8 uses the drone-server and drone-agent binaries. There seems to be a version disconnect here.
connection failed, retry in 15s. websocket.Dial
drone 0.7 and below used websockets. I see in the docker-compose example that you are using drone 0.8, which uses http2 and grpc. There seems to be a disconnect between your configuration and the version of drone you are using.
sql: unknown driver "sqlite3"
This happens when you compile drone with CGO disabled, or use a version of drone that was compiled with CGO disabled. If CGO is disabled, the sqlite3 driver is not compiled into the binary. Are you trying to build drone from source?
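If you are building from source, a sketch of a CGO-enabled build (the package path is an assumption based on the drone 0.8 repository layout):

# CGO must be on so the go-sqlite3 C bindings are compiled into the binary
export CGO_ENABLED=1
go install github.com/drone/drone/cmd/drone-server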
grpc: addrConn.resetTransport failed to create client transport
This error comes from the agent and is therefore unrelated to the postgres configuration. You should not be providing your agent with a postgres configuration; only the server needs it.
version: '2'
services:
  drone-server:
    image: drone/drone:latest
    ports:
      - 80:8000
      - 9000:9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
      - ./drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_DEBUG=true
      - DRONE_HOST=http://<container_ip_server>
      - DRONE_OPEN=true
      - DRONE_GITHUB=true
      - DRONE_GITHUB_CLIENT=<client_git>
      - DRONE_GITHUB_SECRET=<secret_git>
      - DRONE_SECRET=<secret_drone>
      - DRONE_GITHUB_MERGE_REF=true
  drone-agent:
    image: drone/agent:latest
    command: agent
    restart: always
    depends_on: [ drone-server ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=<drone_secret>
This works fine.