Running docker compose causes "Connection to localhost:5432 refused." exception - postgresql

I've looked at SO posts related to this question here, here, here, and here, but I haven't had any luck with the proposed fixes. Whenever I run the command docker-compose -f stack.yml up I receive the following stack trace:
Attaching to weg-api_db_1, weg-api_weg-api_1
db_1 | 2018-07-04 14:57:15.384 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-07-04 14:57:15.384 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-07-04 14:57:15.388 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-07-04 14:57:15.402 UTC [23] LOG: database system was interrupted; last known up at 2018-07-04 14:45:24 UTC
db_1 | 2018-07-04 14:57:15.513 UTC [23] LOG: database system was not properly shut down; automatic recovery in progress
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: redo starts at 0/16341E0
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: invalid record length at 0/1634218: wanted 24, got 0
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: redo done at 0/16341E0
db_1 | 2018-07-04 14:57:15.525 UTC [1] LOG: database system is ready to accept connections
weg-api_1 |
weg-api_1 | . ____ _ __ _ _
weg-api_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
weg-api_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
weg-api_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
weg-api_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
weg-api_1 | =========|_|==============|___/=/_/_/_/
weg-api_1 | :: Spring Boot :: (v1.5.3.RELEASE)
weg-api_1 |
weg-api_1 | 2018-07-04 14:57:16.908 INFO 7 --- [ main] api.ApiKt : Starting ApiKt v0.0.1-SNAPSHOT on f9c58f4f2f27 with PID 7 (/app/spring-jpa-postgresql-spring-boot-0.0.1-SNAPSHOT.jar started by root in /app)
weg-api_1 | 2018-07-04 14:57:16.913 INFO 7 --- [ main] api.ApiKt : No active profile set, falling back to default profiles: default
weg-api_1 | 2018-07-04 14:57:17.008 INFO 7 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#6e5e91e4: startup date [Wed Jul 04 14:57:17 GMT 2018]; root of context hierarchy
weg-api_1 | 2018-07-04 14:57:19.082 INFO 7 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
weg-api_1 | 2018-07-04 14:57:19.102 INFO 7 --- [ main] o.apache.catalina.core.StandardService : Starting service Tomcat
weg-api_1 | 2018-07-04 14:57:19.104 INFO 7 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.14
weg-api_1 | 2018-07-04 14:57:19.215 INFO 7 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
weg-api_1 | 2018-07-04 14:57:19.215 INFO 7 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 2211 ms
weg-api_1 | 2018-07-04 14:57:19.370 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
weg-api_1 | 2018-07-04 14:57:19.375 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.867 ERROR 7 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
weg-api_1 |
weg-api_1 | org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
I thought that my .yml file was brain-dead-simple, but I must be missing something vital, since the internal routing between the two containers fails.
EDIT
My stack.yml is below:
version: '3'
services:
  db:
    image: postgres
    restart: always
    container_name: db
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: weg
    ports:
      - "5432:5432"
  weg-api:
    image: weg-api
    restart: always
    container_name: weg-api
    ports:
      - "8080:8080"
    depends_on:
      - "db"
EDIT
My Springboot application properties are below:
spring.datasource.url=jdbc:postgresql://db:5432/weg
spring.datasource.username=root
spring.datasource.password=password
spring.jpa.generate-ddl=true
I'm at a loss as to how to proceed.

Your database is running in the db container, not on localhost inside your weg-api container. Therefore, you have to change
spring.datasource.url=jdbc:postgresql://localhost:5432/weg
to
spring.datasource.url=jdbc:postgresql://db:5432/weg
I would also suggest you give a container_name to each of your containers so that the container names are always the same. Otherwise you might sometimes get different names depending on your configuration.
version: '3'
services:
  db:
    image: postgres
    restart: always
    container_name: db
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: weg
    ports:
      - "5432:5432"
  weg-api:
    image: weg-api
    restart: always
    container_name: weg-api
    ports:
      - "8080:8080"
    depends_on:
      - "db"

Related

minikube service is failing to expose URL

F:\Udemy\GitRepo\Kubernetes-Tutorial>kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
my-app-deploy-68698d9757-wrs9z   1/1     Running   0          14m   172.17.0.3   minikube   <none>           <none>
F:\Udemy\GitRepo\Kubernetes-Tutorial>minikube service my-app-svc
|-----------|------------|-------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------------|-------------|-----------------------------|
| default | my-app-svc | 80 | http://172.30.105.146:30365 |
|-----------|------------|-------------|-----------------------------|
* Opening service default/my-app-svc in default browser...
F:\Udemy\GitRepo\Kubernetes-Tutorial>kubectl describe service my-app-svc
Name: my-app-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=my-app
Type: NodePort
IP: 10.98.9.115
Port: <unset> 80/TCP
TargetPort: 9001/TCP
NodePort: <unset> 30365/TCP
Endpoints: 172.17.0.3:9001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
F:\Udemy\GitRepo\Kubernetes-Tutorial>kubectl logs my-app-deploy-68698d9757-wrs9z
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.1.RELEASE)
2021-08-21 13:37:21.046 INFO 1 --- [ main] c.d.d.DockerpublishApplication : Starting DockerpublishApplication v0.0.3 on my-app-deploy-68698d9757-wrs9z with PID 1 (/app.jar started by root in /)
2021-08-21 13:37:21.050 INFO 1 --- [ main] c.d.d.DockerpublishApplication : No active profile set, falling back to default profiles: default
2021-08-21 13:37:22.645 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 9091 (http)
2021-08-21 13:37:22.659 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2021-08-21 13:37:22.660 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.36]
2021-08-21 13:37:22.785 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2021-08-21 13:37:22.785 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1646 ms
2021-08-21 13:37:23.302 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2021-08-21 13:37:23.496 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 9091 (http) with context path ''
2021-08-21 13:37:23.510 INFO 1 --- [ main] c.d.d.DockerpublishApplication : Started DockerpublishApplication in 3.279 seconds (JVM running for 4.077)
F:\Udemy\GitRepo\Kubernetes-Tutorial>
Everything seems to be in order, but it's not working.
A "refused to connect" error comes up in the browser for
minikube service my-app-svc
Your application is running on a different port than the one your Service targets, which is why you get connection refused.
Spring Boot is running on 9091: Tomcat started on port(s): 9091 (http) with context path ''
But your Service is redirecting the traffic to TargetPort: 9001/TCP.
Your target port should be 9091 instead of 9001.
You will access the application via the NodePort; the request reaches the Kubernetes Service and is forwarded to TargetPort: 9091/TCP, on which the application is running.
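For reference, a corrected Service manifest might look like this sketch (the name, selector, port, and nodePort are taken from the kubectl describe output above; only targetPort changes):

apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # service port
      targetPort: 9091  # must match the port Tomcat listens on (9091, not 9001)
      nodePort: 30365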

How to pass multiple parameter to --detect.maven.build.command for blackduck hub scanning of maven project using jenkins

I am passing below command as PowerShell in Jenkins:
powershell "[Net.ServicePointManager]::SecurityProtocol = 'tls12'; irm https://detect.synopsys.com/detect.ps1?$(Get-Random) | iex; detect" --blackduck.url=$env:HUB_URL --blackduck.trust.cert=true --blackduck.api.token=$env:BLACKDUCK_HUB_TOKEN --detect.project.name=$env:HUB_PROJECT_NAME --detect.project.version.name=$env:VERSION --detect.maven.include.plugins=true --detect.included.detector.types=maven --detect.maven.build.command="E:\apache-maven-3.0.3\bin\mvn.cmd -f pom.xml -s settings.xml -gs settings.xml clean install -DIafConfigSuffix=Cert"`
but when Detect executes, --detect.maven.build.command only receives the first token, as highlighted below:
> "C:\Users\a900565\AppData\Local\Temp/synopsys-detect-6.5.0.jar"
> "--blackduck.url=https://blackduckhub.deere.com"
> "--blackduck.trust.cert=true" "--blackduck.api.token=********"
> "--detect.project.name=**********"
> "--detect.project.version.name=master"
> "--detect.maven.include.plugins=true"
> "--detect.included.detector.types=maven"
> "--detect.maven.build.command=E:\apache-maven-3.0.3\bin\mvn.cmd" "-f"
> "pom.xml" "-s" "settings.xml" "-gs" "settings.xml" "clean" "install"
> "-DIafConfigSuffix=Cert" 07:58:14 Java Source:
> JAVA_HOME/bin/java=C:\Program Files\Amazon
> Corretto\jdk1.8.0_202/bin/java 07:58:15 ______ _ _
> 07:58:15 | _ \ | | | | 07:58:15 | | | |___| |_ ___ ___|
> |_ 07:58:15 | | | / _ \ __/ _ \/ __| __| 07:58:15 | |/ / __/ || __/
> (__| |_ 07:58:15 |___/ \___|\__\___|\___|\__| 07:58:15 07:58:17
> 07:58:17 Detect Version: 6.5.0 07:58:17 07:58:17 2020-09-11 07:58:17
> INFO [main] --- 07:58:17 2020-09-11 07:58:17 INFO [main] ---
> Current property values: 07:58:17 2020-09-11 07:58:17 INFO [main] ---
> --property = value [notes] 07:58:18 2020-09-11 07:58:17 INFO [main] --- ------------------------------------------------------------ 07:58:18 2020-09-11 07:58:17 INFO [main] --- blackduck.api.token =
> **************************************************************************************************** [cmd] 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> blackduck.trust.cert = true [cmd] 07:58:18 2020-09-11 07:58:17 INFO
> [main] --- blackduck.url = ************* [cmd] 07:58:18 2020-09-11
> 07:58:17 INFO [main] --- detect.included.detector.types = maven [cmd]
> 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> **detect.maven.build.command = E:\apache-maven-3.0.3\bin\mvn.cmd [cmd]** 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> detect.maven.include.plugins = true [cmd] 07:58:18 2020-09-11
> 07:58:17 INFO [main] --- detect.project.name = ********* [cmd]
> 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> detect.project.version.name = master [cmd]
How can I pass multiple parameters to --detect.maven.build.command?
Solution
The issue is caused by your nested quotation characters and missing escape characters. I've taken your PowerShell command and formatted the string correctly with the appropriate escape characters.
powershell label: '', script: '''
[Net.ServicePointManager]::SecurityProtocol = \'tls12\';
irm https://detect.synopsys.com/detect.ps1?$(Get-Random) | iex;
detect
--blackduck.url=$env:HUB_URL
--blackduck.trust.cert=true
--blackduck.api.token=$env:BLACKDUCK_HUB_TOKEN
--detect.project.name=$env:HUB_PROJECT_NAME
--detect.project.version.name=$env:VERSION
--detect.maven.include.plugins=true
--detect.included.detector.types=maven
--detect.maven.build.command="E:\\apache-maven-3.0.3\\bin\\mvn.cmd -f pom.xml -s settings.xml -gs settings.xml clean install -DIafConfigSuffix=Cert"
'''
Pipeline Syntax Generator
You can auto-generate this script by using the pipeline syntax generator in Jenkins. Configure a pipeline and click the Pipeline Syntax hyperlink at the bottom of the configuration page.
From there you can enter the PowerShell script and click Generate Pipeline Script.
Post Script
I noticed that your PowerShell script has some seemingly misplaced quotation marks. If my script doesn't run, please post the PowerShell script that you run directly from the PowerShell console and I will update my answer.
See quoting and escaping info here
So, seeing that Detect is only grabbing the first "word" of your maven command, consider putting a backtick ` before each of the spaces (and maybe some other special characters) in your maven command, as in the example below.
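For instance, a hypothetical, untested rendering of that suggestion (each backtick tells PowerShell the following space belongs to the same argument):

--detect.maven.build.command=E:\apache-maven-3.0.3\bin\mvn.cmd` -f` pom.xml` -s` settings.xml` -gs` settings.xml` clean` install` -DIafConfigSuffix=Cert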
Other useful options, though maybe not useful here:
-hv
--logging.level.com.synopsys.integration=TRACE

Tuning ReactiveElasticsearchClient due to ReadTimeoutException

We've been experimenting with the ReactiveElasticsearchRepository; however, when the service has been idle for several hours, the first attempts to retrieve data from Elasticsearch time out.
What we're seeing when making those first few requests is:
2019-11-06 17:31:35.858 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0x8cf5e94d, L:/192.168.1.100:60745 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
When I enable DEBUG for reactor.netty, I can see that it goes through the motions of trying each connection in the pool:
2019-11-06 17:31:30.841 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x8cf5e94d, L:/192.168.1.100:60745 - R:elastic.internal.com/192.168.1.101:9200] Channel acquired, now 1 active connections and 2 inactive connections
2019-11-06 17:31:35.858 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0x8cf5e94d, L:/192.168.1.100:60745 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
2019-11-06 17:31:35.881 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x8cf5e94d, L:/192.168.1.100:60745 ! R:elastic.internal.com/192.168.1.101:9200] Releasing channel
2019-11-06 17:31:35.891 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x8cf5e94d, L:/192.168.1.100:60745 ! R:elastic.internal.com/192.168.1.101:9200] Channel cleaned, now 0 active connections and 2 inactive connections
2019-11-06 17:32:21.249 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x38e99d68, L:/192.168.1.100:60744 - R:elastic.internal.com/192.168.1.101:9200] Channel acquired, now 1 active connections and 1 inactive connections
2019-11-06 17:32:26.251 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0x38e99d68, L:/192.168.1.100:60744 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
2019-11-06 17:32:26.255 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x38e99d68, L:/192.168.1.100:60744 ! R:elastic.internal.com/192.168.1.101:9200] Releasing channel
2019-11-06 17:32:26.256 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x38e99d68, L:/192.168.1.100:60744 ! R:elastic.internal.com/192.168.1.101:9200] Channel cleaned, now 0 active connections and 1 inactive connections
2019-11-06 17:32:32.592 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0xdee3a211, L:/192.168.1.100:60746 - R:elastic.internal.com/192.168.1.101:9200] Channel acquired, now 1 active connections and 0 inactive connections
2019-11-06 17:32:37.597 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0xdee3a211, L:/192.168.1.100:60746 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
2019-11-06 17:32:37.600 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0xdee3a211, L:/192.168.1.100:60746 ! R:elastic.internal.com/192.168.1.101:9200] Releasing channel
2019-11-06 17:32:37.600 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0xdee3a211, L:/192.168.1.100:60746 ! R:elastic.internal.com/192.168.1.101:9200] Channel cleaned, now 0 active connections and 0 inactive connections
This continues until eventually all of the active/inactive connections have been cleaned; new connections are then created, and those work.
Is there a way to tune things behind the scenes to limit how long a connection can remain in the pool before being re-created, or an alternative way to handle these timeouts?
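One knob worth exploring is Spring Data Elasticsearch's ClientConfiguration, which exposes connect and socket timeouts for the reactive client. A minimal sketch, assuming a spring-data-elasticsearch version whose builder has withConnectTimeout/withSocketTimeout (the host name is illustrative):

import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.reactive.ReactiveElasticsearchClient;
import org.springframework.data.elasticsearch.client.reactive.ReactiveRestClients;

@Configuration
public class ReactiveEsClientConfig {

    @Bean
    public ReactiveElasticsearchClient reactiveElasticsearchClient() {
        // Raise the socket (read) timeout so a slow first read after hours
        // of idling is less likely to surface as a ReadTimeoutException.
        ClientConfiguration clientConfiguration = ClientConfiguration.builder()
                .connectedTo("elastic.internal.com:9200") // illustrative host
                .withConnectTimeout(Duration.ofSeconds(5))
                .withSocketTimeout(Duration.ofSeconds(30))
                .build();
        return ReactiveRestClients.create(clientConfiguration);
    }
}

Capping how long an idle connection may sit in the pool is a reactor-netty ConnectionProvider concern rather than something this builder controls, so that part would have to be wired into the client's underlying HTTP connector.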

Unable to initialize MongoDB When the Container Starts

Here is my docker-compose.yaml:
version: '3.3'
mongo:
  build:
    context: '.'
    dockerfile: 'Dockerfile'
  environment:
    MONGO_INITDB_DATABASE: 'mydb'
  ports:
    - '27017:27017'
  volumes:
    - 'data-storage:/data/db'
  networks:
    mynet:
volumes:
  data-storage:
networks:
  mynet:
Here is my Dockerfile:
FROM mongo:latest
COPY ./initdb.js /docker-entrypoint-initdb.d/
And finally here is my initdb.js:
db.createCollection("strategyitems");
db.strategyitems.createIndex( {strategy: 1 }, { unique: false } );
db.strategyitems.createIndex( {strategy: 1, symbol: 1 }, { unique: true } );
db.strategyitems.insertMany([
{ strategy: "crypto", symbol: "btcusd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 },
{ strategy: "crypto", symbol: "ethusd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 },
{ strategy: "crypto", symbol: "neousd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 }
]);
The container builds and starts successfully... but the db statements above never get executed.
If I log into the container, the folder /docker-entrypoint-initdb.d/ contains initdb.js... so I'd expect the db to get initialized.
Am I missing something?
So heads-up: the supplied compose file doesn't work for me as-is; I had to edit it to get it up and running (v18.06 CE). Among other things, the mongo service needs to sit under a services: key.
version: '3.3'
services:
  mongo:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      MONGO_INITDB_DATABASE: 'mydb'
    ports:
      - '27017:27017'
    volumes:
      - 'data-storage:/data/db'
    networks:
      mynet:
volumes:
  data-storage:
networks:
  mynet:
Next, if you had run docker-compose up before adding the initdb.js file and then stopped with docker-compose down, note that docker-compose down stops the containers but doesn't remove the volume:
docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                      NAMES
c412bbd9a22b   lumberjack_mongo   "docker-entrypoint.s…"   7 minutes ago   Up 6 minutes   0.0.0.0:27017->27017/tcp   lumberjack_mongo_1
docker volume ls
DRIVER              VOLUME NAME
local               lumberjack_data-storage
docker-compose down
Removing lumberjack_mongo_1 ... done
Removing network lumberjack_mynet
docker volume ls
DRIVER              VOLUME NAME
local               lumberjack_data-storage
The problem arises when docker-compose up is run while the volume already exists: Docker mounts the volume before the container starts up, Mongo runs some pre-checks, and if it finds that the data directories are already present it skips the initdb sequence.
If you remove the volume after docker-compose down and then do a docker-compose up, the volume is created from scratch, the pre-check finds nothing, and MongoDB is initialized:
docker volume rm lumberjack_data-storage
lumberjack_data-storage
docker-compose up
Creating network "lumberjack_mynet" with the default driver
Creating volume "lumberjack_data-storage" with default driver
Creating lumberjack_mongo_1 ... done
Attaching to lumberjack_mongo_1
[....]
mongo_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/initdb.js
mongo_1 | 2018-08-04T18:08:47.699+0000 I INDEX [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
mongo_1 | 2018-08-04T18:08:47.745+0000 I NETWORK [conn2] received client metadata from 127.0.0.1:45324 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo_1 | 2018-08-04T18:08:47.747+0000 I STORAGE [conn2] createCollection: initdb.strategyitems with generated UUID: 585edb14-bc63-4879-bc5d-504867fb5e12
mongo_1 | 2018-08-04T18:08:47.851+0000 I INDEX [conn2] build index on: initdb.strategyitems properties: { v: 2, key: { strategy: 1.0 }, name: "strategy_1", ns: "initdb.strategyitems" }
mongo_1 | 2018-08-04T18:08:47.851+0000 I INDEX [conn2] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongo_1 | 2018-08-04T18:08:47.852+0000 I INDEX [conn2] build index done. scanned 0 total records. 0 secs
mongo_1 | 2018-08-04T18:08:47.881+0000 I INDEX [conn2] build index on: initdb.strategyitems properties: { v: 2, unique: true, key: { strategy: 1.0, symbol: 1.0 }, name: "strategy_1_symbol_1", ns: "initdb.strategyitems" }
mongo_1 | 2018-08-04T18:08:47.881+0000 I INDEX [conn2] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongo_1 | 2018-08-04T18:08:47.882+0000 I INDEX [conn2] build index done. scanned 0 total records. 0 secs
mongo_1 | 2018-08-04T18:08:47.886+0000 I NETWORK [conn2] end connection 127.0.0.1:45324 (0 connections now open)
[....]
mongo_1 | MongoDB init process complete; ready for start up.
mongo_1 |
mongo_1 | 2018-08-04T18:08:48.933+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongo_1 | 2018-08-04T18:08:48.939+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=e90c80083360
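As a shortcut, docker-compose down also accepts a -v/--volumes flag that removes the named volumes declared in the compose file along with the containers, so the next up starts from an empty volume and re-runs the init scripts:

docker-compose down -v   # stops containers AND removes named volumes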

Problems in running marble example offline

I'm trying to run this example on my computer without the help of the Blockchain implementation from Bluemix.
https://github.com/IBM-Blockchain/marbles/blob/master/tutorial_part1.md#confignetwork
I have downloaded the hyperledger/fabric-peer docker images and set up the corresponding docker-compose.yml file with the correct CORE_PEER_ID and CORE_VM_ENDPOINT.
vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
Now, I try to run the marbles node app with the correct api_host and api_port.
var peers = [{
    "api_host": "172.17.0.2", // hostname or ip of peer
    "api_port": 7051,         // http port
    "id": "vp0"               // unique id of peer
}];
The fabric peer seems to reject the connection request from the node app and responds with:
vp0_1 | 2016/10/13 20:15:03 transport: http2Server.HandleStreams received bogus greeting from client: "POST /registrar HTTP/1.1"
I also tried sending a GET request with Postman:
http://172.17.0.2:7051/chain
The response gives
vp0_1 | 2016/10/13 20:13:23 transport: http2Server.HandleStreams received bogus greeting from client: "GET /chain HTTP/1.1\r\nHos"
Here is the whole output of my GET request
Starting capstone_vp0_1
Attaching to capstone_vp0_1
vp0_1 | 20:11:36.771 [logging] LoggingInit -> DEBU 001 Setting default logging level to DEBUG for command 'node'
vp0_1 | 20:11:36.771 [peer] func1 -> INFO 002 Auto detected peer address: 172.17.0.2:7051
vp0_1 | 20:11:36.771 [peer] func1 -> INFO 003 Auto detected peer address: 172.17.0.2:7051
vp0_1 | 20:11:36.772 [eventhub_producer] AddEventType -> DEBU 004 registering BLOCK
vp0_1 | 20:11:36.772 [eventhub_producer] AddEventType -> DEBU 005 registering CHAINCODE
vp0_1 | 20:11:36.772 [eventhub_producer] AddEventType -> DEBU 006 registering REJECTION
vp0_1 | 20:11:36.772 [eventhub_producer] AddEventType -> DEBU 007 registering REGISTER
vp0_1 | 20:11:36.772 [nodeCmd] serve -> INFO 008 Security enabled status: false
vp0_1 | 20:11:36.772 [nodeCmd] serve -> INFO 009 Privacy enabled status: false
vp0_1 | 20:11:36.772 [eventhub_producer] start -> INFO 00a event processor started
vp0_1 | 20:11:36.772 [db] open -> DEBU 00b Is db path [/var/hyperledger/production/db] empty [false]
vp0_1 | 20:11:36.985 [chaincode] NewChaincodeSupport -> INFO 00c Chaincode support using peerAddress: 172.17.0.2:7051
vp0_1 | 20:11:36.986 [chaincode] NewChaincodeSupport -> DEBU 00d Turn off keepalive(value 0)
vp0_1 | 20:11:36.986 [sysccapi] RegisterSysCC -> INFO 00e system chaincode (noop,github.com/hyperledger/fabric/bddtests/syschaincode/noop) disabled
vp0_1 | 20:11:36.986 [nodeCmd] serve -> DEBU 00f Running as validating peer - making genesis block if needed
vp0_1 | 20:11:36.986 [state] loadConfig -> INFO 010 Loading configurations...
vp0_1 | 20:11:36.986 [state] loadConfig -> INFO 011 Configurations loaded. stateImplName=[buckettree], stateImplConfigs=map[numBuckets:%!s(int=1000003) maxGroupingAtEachLevel:%!s(int=5) bucketCacheSize:%!s(int=100)], deltaHistorySize=[500]
vp0_1 | 20:11:36.986 [state] NewState -> INFO 012 Initializing state implementation [buckettree]
vp0_1 | 20:11:36.986 [buckettree] initConfig -> INFO 013 configs passed during initialization = map[string]interface {}{"numBuckets":1000003, "maxGroupingAtEachLevel":5, "bucketCacheSize":100}
vp0_1 | 20:11:36.986 [buckettree] initConfig -> INFO 014 Initializing bucket tree state implemetation with configurations &{maxGroupingAtEachLevel:5 lowestLevel:9 levelToNumBucketsMap:map[2:13 3:65 9:1000003 1:3 8:200001 6:8001 4:321 7:40001 5:1601 0:1] hashFunc:0xab4560}
vp0_1 | 20:11:36.986 [buckettree] newBucketCache -> INFO 015 Constructing bucket-cache with max bucket cache size = [100] MBs
vp0_1 | 20:11:36.986 [buckettree] loadAllBucketNodesFromDB -> INFO 016 Loaded buckets data in cache. Total buckets in DB = [0]. Total cache size:=0
vp0_1 | 20:11:36.987 [nodeCmd] serve -> DEBU 017 Running as validating peer - installing consensus
vp0_1 | 20:11:36.987 [peer] initDiscovery -> DEBU 018 Retrieved discovery list from disk: []
vp0_1 | 20:11:36.987 [consensus/controller] NewConsenter -> INFO 019 Creating default consensus plugin (noops)
vp0_1 | 20:11:36.988 [consensus/noops] newNoops -> DEBU 01a Creating a NOOPS object
vp0_1 | 20:11:36.988 [consensus/noops] newNoops -> INFO 01b NOOPS consensus type = *noops.Noops
vp0_1 | 20:11:36.988 [consensus/noops] newNoops -> INFO 01c NOOPS block size = 500
vp0_1 | 20:11:36.988 [consensus/noops] newNoops -> INFO 01d NOOPS block wait = 1s
vp0_1 | 20:11:36.988 [peer] chatWithSomePeers -> DEBU 01e Starting up the first peer of a new network
vp0_1 | 20:11:36.988 [consensus/statetransfer] verifyAndRecoverBlockchain -> DEBU 01f Validating existing blockchain, highest validated block is 0, valid through 0
vp0_1 | 20:11:36.988 [consensus/statetransfer] blockThread -> INFO 021 Validated blockchain to the genesis block
vp0_1 | 20:11:36.988 [consensus/handler] 1 -> DEBU 020 Starting up message thread for consenter
vp0_1 | 20:11:36.988 [rest] StartOpenchainRESTServer -> INFO 022 Initializing the REST service on 0.0.0.0:7050, TLS is disabled.
vp0_1 | 20:11:36.988 [nodeCmd] serve -> INFO 023 Starting peer with ID=name:"vp0" , network ID=dev, address=172.17.0.2:7051, rootnodes=, validator=true
vp0_1 | 20:11:36.988 [peer] ensureConnected -> DEBU 024 Starting Peer reconnect service (touch service), with period = 6s
vp0_1 | 20:11:42.989 [peer] ensureConnected -> DEBU 025 Touch service indicates no dropped connections
vp0_1 | 20:11:42.989 [peer] ensureConnected -> DEBU 026 Connected to: []
vp0_1 | 20:11:42.989 [peer] ensureConnected -> DEBU 027 Discovery knows about: []
You are getting this error message because you are using the wrong port (7051 is actually the gRPC port) when sending the REST request.
The default rest API port for the hyperledger fabric peer is 7050. So your GET request should be:
http://172.17.0.2:7050/chain
The port the peer uses for the REST interface can be verified by noting the following log entry (seen in your log):
vp0_1 | 20:11:36.988 [rest] StartOpenchainRESTServer -> INFO 022 Initializing the REST service on 0.0.0.0:7050, TLS is disabled.
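If you also want to reach both interfaces from the host, the vp0 service could publish the two ports explicitly. A sketch of the compose entry from the question with a ports section added (the mapping itself is my addition, based on the default ports named above):

vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  ports:
    - "7050:7050"  # REST API
    - "7051:7051"  # gRPC, used by peers and SDK clients, not REST calls
  command: peer node start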