minikube service is failing to expose URL

F:\Udemy\GitRepo\Kubernetes-Tutorial>kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-app-deploy-68698d9757-wrs9z 1/1 Running 0 14m 172.17.0.3 minikube <none> <none>
F:\Udemy\GitRepo\Kubernetes-Tutorial>minikube service my-app-svc
|-----------|------------|-------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------------|-------------|-----------------------------|
| default | my-app-svc | 80 | http://172.30.105.146:30365 |
|-----------|------------|-------------|-----------------------------|
* Opening service default/my-app-svc in default browser...
F:\Udemy\GitRepo\Kubernetes-Tutorial>kubectl describe service my-app-svc
Name: my-app-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=my-app
Type: NodePort
IP: 10.98.9.115
Port: <unset> 80/TCP
TargetPort: 9001/TCP
NodePort: <unset> 30365/TCP
Endpoints: 172.17.0.3:9001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
F:\Udemy\GitRepo\Kubernetes-Tutorial>kubectl logs my-app-deploy-68698d9757-wrs9z
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.1.RELEASE)
2021-08-21 13:37:21.046 INFO 1 --- [ main] c.d.d.DockerpublishApplication : Starting DockerpublishApplication v0.0.3 on my-app-deploy-68698d9757-wrs9z with PID 1 (/app.jar started by root in /)
2021-08-21 13:37:21.050 INFO 1 --- [ main] c.d.d.DockerpublishApplication : No active profile set, falling back to default profiles: default
2021-08-21 13:37:22.645 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 9091 (http)
2021-08-21 13:37:22.659 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2021-08-21 13:37:22.660 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.36]
2021-08-21 13:37:22.785 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2021-08-21 13:37:22.785 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1646 ms
2021-08-21 13:37:23.302 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2021-08-21 13:37:23.496 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 9091 (http) with context path ''
2021-08-21 13:37:23.510 INFO 1 --- [ main] c.d.d.DockerpublishApplication : Started DockerpublishApplication in 3.279 seconds (JVM running for 4.077)
F:\Udemy\GitRepo\Kubernetes-Tutorial>
Everything seems to be configured correctly, but it isn't working. The browser shows a "refused to connect" error when opening the URL from:
minikube service my-app-svc

You are getting "connection refused" because your application is listening on a different port than the one your Service targets.
Spring Boot is running on port 9091: Tomcat started on port(s): 9091 (http) with context path ''
But your Service is forwarding traffic to TargetPort: 9001/TCP.
Your targetPort should be 9091 instead of 9001.
You will access the application via the node IP and NodePort; the request reaches the Kubernetes Service and is forwarded to the targetPort (9091/TCP), on which the application is actually listening.
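For reference, a corrected Service manifest might look like the sketch below. It is reconstructed from the kubectl describe output above (name, selector, port, and NodePort), so treat the surrounding fields as assumptions about your actual manifest; the only substantive change is targetPort:

apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  type: NodePort
  selector:
    app: my-app          # matches the pod label targeted by the Deployment
  ports:
  - port: 80             # port the Service exposes inside the cluster
    targetPort: 9091     # must match the port Tomcat listens on (was 9001)
    nodePort: 30365      # port exposed on the minikube node

After applying the change (kubectl apply -f my-app-svc.yaml), the Endpoints line of kubectl describe service my-app-svc should read 172.17.0.3:9091.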


How to pass multiple parameters to --detect.maven.build.command for Black Duck Hub scanning of a Maven project using Jenkins

I am running the command below as a PowerShell step in Jenkins:
powershell "[Net.ServicePointManager]::SecurityProtocol = 'tls12'; irm https://detect.synopsys.com/detect.ps1?$(Get-Random) | iex; detect" --blackduck.url=$env:HUB_URL --blackduck.trust.cert=true --blackduck.api.token=$env:BLACKDUCK_HUB_TOKEN --detect.project.name=$env:HUB_PROJECT_NAME --detect.project.version.name=$env:VERSION --detect.maven.include.plugins=true --detect.included.detector.types=maven --detect.maven.build.command="E:\apache-maven-3.0.3\bin\mvn.cmd -f pom.xml -s settings.xml -gs settings.xml clean install -DIafConfigSuffix=Cert"`
but when Detect executes, --detect.maven.build.command only receives the first token, as highlighted below:
> "C:\Users\a900565\AppData\Local\Temp/synopsys-detect-6.5.0.jar"
> "--blackduck.url=https://blackduckhub.deere.com"
> "--blackduck.trust.cert=true" "--blackduck.api.token=********"
> "--detect.project.name=**********"
> "--detect.project.version.name=master"
> "--detect.maven.include.plugins=true"
> "--detect.included.detector.types=maven"
> "--detect.maven.build.command=E:\apache-maven-3.0.3\bin\mvn.cmd" "-f"
> "pom.xml" "-s" "settings.xml" "-gs" "settings.xml" "clean" "install"
> "-DIafConfigSuffix=Cert" 07:58:14 Java Source:
> JAVA_HOME/bin/java=C:\Program Files\Amazon
> Corretto\jdk1.8.0_202/bin/java 07:58:15 ______ _ _
> 07:58:15 | _ \ | | | | 07:58:15 | | | |___| |_ ___ ___|
> |_ 07:58:15 | | | / _ \ __/ _ \/ __| __| 07:58:15 | |/ / __/ || __/
> (__| |_ 07:58:15 |___/ \___|\__\___|\___|\__| 07:58:15 07:58:17
> 07:58:17 Detect Version: 6.5.0 07:58:17 07:58:17 2020-09-11 07:58:17
> INFO [main] --- 07:58:17 2020-09-11 07:58:17 INFO [main] ---
> Current property values: 07:58:17 2020-09-11 07:58:17 INFO [main] ---
> --property = value [notes] 07:58:18 2020-09-11 07:58:17 INFO [main] --- ------------------------------------------------------------ 07:58:18 2020-09-11 07:58:17 INFO [main] --- blackduck.api.token =
> **************************************************************************************************** [cmd] 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> blackduck.trust.cert = true [cmd] 07:58:18 2020-09-11 07:58:17 INFO
> [main] --- blackduck.url = ************* [cmd] 07:58:18 2020-09-11
> 07:58:17 INFO [main] --- detect.included.detector.types = maven [cmd]
> 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> **detect.maven.build.command = E:\apache-maven-3.0.3\bin\mvn.cmd [cmd]** 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> detect.maven.include.plugins = true [cmd] 07:58:18 2020-09-11
> 07:58:17 INFO [main] --- detect.project.name = ********* [cmd]
> 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> detect.project.version.name = master [cmd]
How can I pass multiple parameters to detect.maven.build.command?
Solution
The issue is related to your nested quotation characters and lack of escape characters. I've taken your PowerShell command and formatted the string correctly with the appropriate escape characters.
powershell label: '', script: '''
[Net.ServicePointManager]::SecurityProtocol = \'tls12\';
irm https://detect.synopsys.com/detect.ps1?$(Get-Random) | iex;
detect
--blackduck.url=$env:HUB_URL
--blackduck.trust.cert=true
--blackduck.api.token=$env:BLACKDUCK_HUB_TOKEN
--detect.project.name=$env:HUB_PROJECT_NAME
--detect.project.version.name=$env:VERSION
--detect.maven.include.plugins=true
--detect.included.detector.types=maven
--detect.maven.build.command="E:\\apache-maven-3.0.3\\bin\\mvn.cmd -f pom.xml -s settings.xml -gs settings.xml clean install -DIafConfigSuffix=Cert"
'''
Pipeline Syntax Generator
You can auto-generate this script using the Pipeline Syntax generator in Jenkins: configure a pipeline and click the Pipeline Syntax link at the bottom.
From there you can enter the PowerShell script and click Generate Pipeline Script.
Post Script
I noticed that your PowerShell script has some seemingly misplaced quotation marks. If my script doesn't run, please post the PowerShell script that you run directly from the PowerShell console and I will update my answer.
See quoting and escaping info here.
Since Detect is only grabbing the first "word" of your Maven command, consider putting a backtick (`) before each of the spaces (and possibly some other special characters) in your Maven command.
Other useful options, though maybe not useful here:
-hv
--logging.level.com.synopsys.integration=TRACE

Tuning ReactiveElasticsearchClient due to ReadTimeoutException

We've been experimenting with the ReactiveElasticsearchRepository, but we're running into an issue: after the service has been idle for several hours, the first attempts to retrieve data from Elasticsearch time out.
What we're seeing when making those first few requests is:
2019-11-06 17:31:35.858 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0x8cf5e94d, L:/192.168.1.100:60745 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
When I enable DEBUG for reactor.netty, I can see that it goes through the motions of trying each connection in the pool:
2019-11-06 17:31:30.841 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x8cf5e94d, L:/192.168.1.100:60745 - R:elastic.internal.com/192.168.1.101:9200] Channel acquired, now 1 active connections and 2 inactive connections
2019-11-06 17:31:35.858 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0x8cf5e94d, L:/192.168.1.100:60745 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
2019-11-06 17:31:35.881 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x8cf5e94d, L:/192.168.1.100:60745 ! R:elastic.internal.com/192.168.1.101:9200] Releasing channel
2019-11-06 17:31:35.891 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x8cf5e94d, L:/192.168.1.100:60745 ! R:elastic.internal.com/192.168.1.101:9200] Channel cleaned, now 0 active connections and 2 inactive connections
2019-11-06 17:32:21.249 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x38e99d68, L:/192.168.1.100:60744 - R:elastic.internal.com/192.168.1.101:9200] Channel acquired, now 1 active connections and 1 inactive connections
2019-11-06 17:32:26.251 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0x38e99d68, L:/192.168.1.100:60744 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
2019-11-06 17:32:26.255 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x38e99d68, L:/192.168.1.100:60744 ! R:elastic.internal.com/192.168.1.101:9200] Releasing channel
2019-11-06 17:32:26.256 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0x38e99d68, L:/192.168.1.100:60744 ! R:elastic.internal.com/192.168.1.101:9200] Channel cleaned, now 0 active connections and 1 inactive connections
2019-11-06 17:32:32.592 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0xdee3a211, L:/192.168.1.100:60746 - R:elastic.internal.com/192.168.1.101:9200] Channel acquired, now 1 active connections and 0 inactive connections
2019-11-06 17:32:37.597 WARN [my-service,,,] 56942 --- [ctor-http-nio-1] r.netty.http.client.HttpClientConnect : [id: 0xdee3a211, L:/192.168.1.100:60746 - R:elastic.internal.com/192.168.1.101:9200] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
2019-11-06 17:32:37.600 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0xdee3a211, L:/192.168.1.100:60746 ! R:elastic.internal.com/192.168.1.101:9200] Releasing channel
2019-11-06 17:32:37.600 DEBUG [my-service,,,] 56942 --- [ctor-http-nio-1] r.n.resources.PooledConnectionProvider : [id: 0xdee3a211, L:/192.168.1.100:60746 ! R:elastic.internal.com/192.168.1.101:9200] Channel cleaned, now 0 active connections and 0 inactive connections
This continues until eventually all the active and inactive connections have been cleaned; new connections are then created, and those work.
Is there a way to tune things behind the scenes to limit how long a connection can remain idle in the pool before being re-created? Or is there an alternative way to handle these timeouts?

Running docker compose causes "Connection to localhost:5432 refused." exception

I've looked at SO posts related to this question here, here, here, and here, but I haven't had any luck with the fixes proposed. Whenever I run the command docker-compose -f stack.yml up I receive the following stack trace:
Attaching to weg-api_db_1, weg-api_weg-api_1
db_1 | 2018-07-04 14:57:15.384 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-07-04 14:57:15.384 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-07-04 14:57:15.388 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-07-04 14:57:15.402 UTC [23] LOG: database system was interrupted; last known up at 2018-07-04 14:45:24 UTC
db_1 | 2018-07-04 14:57:15.513 UTC [23] LOG: database system was not properly shut down; automatic recovery in progress
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: redo starts at 0/16341E0
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: invalid record length at 0/1634218: wanted 24, got 0
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: redo done at 0/16341E0
db_1 | 2018-07-04 14:57:15.525 UTC [1] LOG: database system is ready to accept connections
weg-api_1 |
weg-api_1 | . ____ _ __ _ _
weg-api_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
weg-api_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
weg-api_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
weg-api_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
weg-api_1 | =========|_|==============|___/=/_/_/_/
weg-api_1 | :: Spring Boot :: (v1.5.3.RELEASE)
weg-api_1 |
weg-api_1 | 2018-07-04 14:57:16.908 INFO 7 --- [ main] api.ApiKt : Starting ApiKt v0.0.1-SNAPSHOT on f9c58f4f2f27 with PID 7 (/app/spring-jpa-postgresql-spring-boot-0.0.1-SNAPSHOT.jar started by root in /app)
weg-api_1 | 2018-07-04 14:57:16.913 INFO 7 --- [ main] api.ApiKt : No active profile set, falling back to default profiles: default
weg-api_1 | 2018-07-04 14:57:17.008 INFO 7 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#6e5e91e4: startup date [Wed Jul 04 14:57:17 GMT 2018]; root of context hierarchy
weg-api_1 | 2018-07-04 14:57:19.082 INFO 7 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
weg-api_1 | 2018-07-04 14:57:19.102 INFO 7 --- [ main] o.apache.catalina.core.StandardService : Starting service Tomcat
weg-api_1 | 2018-07-04 14:57:19.104 INFO 7 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.14
weg-api_1 | 2018-07-04 14:57:19.215 INFO 7 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
weg-api_1 | 2018-07-04 14:57:19.215 INFO 7 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 2211 ms
weg-api_1 | 2018-07-04 14:57:19.370 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
weg-api_1 | 2018-07-04 14:57:19.375 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.867 ERROR 7 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
weg-api_1 |
weg-api_1 | org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
I thought that my .yml file was brain-dead-simple, but I must be missing something vital for the internal routing between the two containers to fail.
EDIT
My stack.yml is below:
version: '3'
services:
  db:
    image: postgres
    restart: always
    container_name: db
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: weg
    ports:
      - "5432:5432"
  weg-api:
    image: weg-api
    restart: always
    container_name: weg-api
    ports:
      - "8080:8080"
    depends_on:
      - "db"
EDIT
My Spring Boot application properties are below:
spring.datasource.url=jdbc:postgresql://db:5432/weg
spring.datasource.username=root
spring.datasource.password=password
spring.jpa.generate-ddl=true
I'm at a loss as to how to proceed.
Your database is running in the db container, not on localhost inside your weg-api container. Therefore, you have to change
spring.datasource.url=jdbc:postgresql://localhost:5432/weg
to
spring.datasource.url=jdbc:postgresql://db:5432/weg
I would also suggest giving a container_name to each of your containers so that the container names are always the same; otherwise you might get different generated names depending on your configuration.
version: '3'
services:
  db:
    image: postgres
    restart: always
    container_name: db
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: weg
    ports:
      - "5432:5432"
  weg-api:
    image: weg-api
    restart: always
    container_name: weg-api
    ports:
      - "8080:8080"
    depends_on:
      - "db"

Problems running the marbles example offline

I'm trying to run this example on my computer without the help of the Blockchain implementation from Bluemix.
https://github.com/IBM-Blockchain/marbles/blob/master/tutorial_part1.md#confignetwork
I have downloaded the hyperledger/fabric-peer docker images and set up the corresponding docker-compose.yml file with the correct CORE_PEER_ID and CORE_VM_ENDPOINT.
vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
Now, I try to run the marbles node app with the correct api_host and api_port.
var peers = [{
    "api_host": "172.17.0.2", // hostname or ip of peer
    "api_port": 7051,         // http port
    "id": "vp0"               // unique id of peer
}];
The Fabric peer seems to reject the connection request from the Node app and responds with:
vp0_1 | 2016/10/13 20:15:03 transport: http2Server.HandleStreams received bogus greeting from client: "POST /registrar HTTP/1.1"
I also tried sending a GET request with Postman:
http://172.17.0.2:7051/chain
The response gives:
vp0_1 | 2016/10/13 20:13:23 transport: http2Server.HandleStreams received bogus greeting from client: "GET /chain HTTP/1.1\r\nHos"
Here is the whole output of my GET request
Starting capstone_vp0_1
Attaching to capstone_vp0_1
vp0_1 | 20:11:36.771 [logging] LoggingInit -> DEBU 001 Setting default logging level to DEBUG for command 'node'
vp0_1 | 20:11:36.771 [peer] func1 -> INFO 002 Auto detected peer address: 172.17.0.2:7051
vp0_1 | 20:11:36.771 [peer] func1 -> INFO 003 Auto detected peer address: 172.17.0.2:7051
vp0_1 | 20:11:36.772 [eventhub_producer] AddEventType -> DEBU 004 registering BLOCK
vp0_1 | 20:11:36.772 [eventhub_producer] AddEventType -> DEBU 005 registering CHAINCODE
vp0_1 | 20:11:36.772 [eventhub_producer] AddEventType -> DEBU 006 registering REJECTION
vp0_1 | 20:11:36.772 [eventhub_producer] AddEventType -> DEBU 007 registering REGISTER
vp0_1 | 20:11:36.772 [nodeCmd] serve -> INFO 008 Security enabled status: false
vp0_1 | 20:11:36.772 [nodeCmd] serve -> INFO 009 Privacy enabled status: false
vp0_1 | 20:11:36.772 [eventhub_producer] start -> INFO 00a event processor started
vp0_1 | 20:11:36.772 [db] open -> DEBU 00b Is db path [/var/hyperledger/production/db] empty [false]
vp0_1 | 20:11:36.985 [chaincode] NewChaincodeSupport -> INFO 00c Chaincode support using peerAddress: 172.17.0.2:7051
vp0_1 | 20:11:36.986 [chaincode] NewChaincodeSupport -> DEBU 00d Turn off keepalive(value 0)
vp0_1 | 20:11:36.986 [sysccapi] RegisterSysCC -> INFO 00e system chaincode (noop,github.com/hyperledger/fabric/bddtests/syschaincode/noop) disabled
vp0_1 | 20:11:36.986 [nodeCmd] serve -> DEBU 00f Running as validating peer - making genesis block if needed
vp0_1 | 20:11:36.986 [state] loadConfig -> INFO 010 Loading configurations...
vp0_1 | 20:11:36.986 [state] loadConfig -> INFO 011 Configurations loaded. stateImplName=[buckettree], stateImplConfigs=map[numBuckets:%!s(int=1000003) maxGroupingAtEachLevel:%!s(int=5) bucketCacheSize:%!s(int=100)], deltaHistorySize=[500]
vp0_1 | 20:11:36.986 [state] NewState -> INFO 012 Initializing state implementation [buckettree]
vp0_1 | 20:11:36.986 [buckettree] initConfig -> INFO 013 configs passed during initialization = map[string]interface {}{"numBuckets":1000003, "maxGroupingAtEachLevel":5, "bucketCacheSize":100}
vp0_1 | 20:11:36.986 [buckettree] initConfig -> INFO 014 Initializing bucket tree state implemetation with configurations &{maxGroupingAtEachLevel:5 lowestLevel:9 levelToNumBucketsMap:map[2:13 3:65 9:1000003 1:3 8:200001 6:8001 4:321 7:40001 5:1601 0:1] hashFunc:0xab4560}
vp0_1 | 20:11:36.986 [buckettree] newBucketCache -> INFO 015 Constructing bucket-cache with max bucket cache size = [100] MBs
vp0_1 | 20:11:36.986 [buckettree] loadAllBucketNodesFromDB -> INFO 016 Loaded buckets data in cache. Total buckets in DB = [0]. Total cache size:=0
vp0_1 | 20:11:36.987 [nodeCmd] serve -> DEBU 017 Running as validating peer - installing consensus
vp0_1 | 20:11:36.987 [peer] initDiscovery -> DEBU 018 Retrieved discovery list from disk: []
vp0_1 | 20:11:36.987 [consensus/controller] NewConsenter -> INFO 019 Creating default consensus plugin (noops)
vp0_1 | 20:11:36.988 [consensus/noops] newNoops -> DEBU 01a Creating a NOOPS object
vp0_1 | 20:11:36.988 [consensus/noops] newNoops -> INFO 01b NOOPS consensus type = *noops.Noops
vp0_1 | 20:11:36.988 [consensus/noops] newNoops -> INFO 01c NOOPS block size = 500
vp0_1 | 20:11:36.988 [consensus/noops] newNoops -> INFO 01d NOOPS block wait = 1s
vp0_1 | 20:11:36.988 [peer] chatWithSomePeers -> DEBU 01e Starting up the first peer of a new network
vp0_1 | 20:11:36.988 [consensus/statetransfer] verifyAndRecoverBlockchain -> DEBU 01f Validating existing blockchain, highest validated block is 0, valid through 0
vp0_1 | 20:11:36.988 [consensus/statetransfer] blockThread -> INFO 021 Validated blockchain to the genesis block
vp0_1 | 20:11:36.988 [consensus/handler] 1 -> DEBU 020 Starting up message thread for consenter
vp0_1 | 20:11:36.988 [rest] StartOpenchainRESTServer -> INFO 022 Initializing the REST service on 0.0.0.0:7050, TLS is disabled.
vp0_1 | 20:11:36.988 [nodeCmd] serve -> INFO 023 Starting peer with ID=name:"vp0" , network ID=dev, address=172.17.0.2:7051, rootnodes=, validator=true
vp0_1 | 20:11:36.988 [peer] ensureConnected -> DEBU 024 Starting Peer reconnect service (touch service), with period = 6s
vp0_1 | 20:11:42.989 [peer] ensureConnected -> DEBU 025 Touch service indicates no dropped connections
vp0_1 | 20:11:42.989 [peer] ensureConnected -> DEBU 026 Connected to: []
vp0_1 | 20:11:42.989 [peer] ensureConnected -> DEBU 027 Discovery knows about: []
You are getting this error message because you are using the wrong port when sending the REST request: 7051 is actually the gRPC port.
The default REST API port for the Hyperledger Fabric peer is 7050, so your GET request should be:
http://172.17.0.2:7050/chain
The port the peer uses for the REST interface can be verified by noting the following log entry (seen in your log):
vp0_1 | 20:11:36.988 [rest] StartOpenchainRESTServer -> INFO 022 Initializing the REST service on 0.0.0.0:7050, TLS is disabled.
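If the marbles app or Postman needs to reach the peer from outside Docker, it also helps to publish both ports on the host. This is a sketch based on the compose snippet from the question; the port mappings are an assumption about your setup:

vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
  ports:
    - "7050:7050"   # REST API, which is what the marbles peers config should use
    - "7051:7051"   # gRPC, used for peer-to-peer and chaincode traffic

Accordingly, the peers entry in the Node app should set "api_port": 7050 rather than 7051.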

Cloud Foundry bosh Error 140003: unknown resource pool

I'm attempting to set up a service broker to add Postgres to our Cloud Foundry installation. We're running our system on VMware. I'm using this release in order to do that:
cf-contrib-release
I added the release in bosh:
#bosh releases
Acting as user 'director' on 'microbosh-ba846726bed7032f1fd4'
+-----------------------+----------------------+-------------+
| Name | Versions | Commit Hash |
+-----------------------+----------------------+-------------+
| cf | 208.12* | a0de569a+ |
| cf-autoscaling | 13* | 927bc7ed+ |
| cf-metrics | 34* | 22f7e1e1 |
| cf-mysql | 20* | caa23b3d+ |
| | 22* | af278086+ |
| cf-rabbitmq | 161* | 4d298aec |
| cf-riak-cs | 10* | 5e7e46c9+ |
| cf-services-contrib | 6* | 57fd2098+ |
| docker | 23* | 82346881+ |
| newrelic_broker | 1.3* | 1ce3471d+ |
| notifications-with-ui | 18* | 490b6446+ |
| postgresql-docker | 4* | a53c9333+ |
| push-console-release | console-du-jour-203* | d2d31585+ |
| spring-cloud-broker | 1.0.0* | efd69612 |
+-----------------------+----------------------+-------------+
(*) Currently deployed
(+) Uncommitted changes
Releases total: 13
I set up my resource pools and jobs in my YAML file according to this documentation:
http://bosh.io/docs/vsphere-cpi.html#resource-pools
This is how our cluster looks:
vmware cluster
And here is what I put in the yaml file:
resource_pools:
- name: default
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: '2865.1'
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 10240
    datacenters:
    - name: 'Universal City'
      clusters:
      - USH_UCS_CLOUD_FOUNDRY_NONPROD_01: {resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'}

jobs:
- name: gateways
  release: cf-services-contrib
  templates:
  - name: postgresql_gateway_ng
  instances: 1
  resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    # Service credentials
    uaa_client_id: "cf"
    uaa_endpoint: http://uaa.devcloudwest.example.com
    uaa_client_auth_credentials:
      username: admin
      password: secret
And I'm getting an error when I run 'bosh deploy' that says:
Error 140003: Job `gateways' references an unknown resource pool `USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
Here's my YAML file in its entirety:
name: cf-22b9f4d62bb6f0563b71
director_uuid: fd713790-b1bc-401a-8ea1-b8209f1cc90c

releases:
- name: cf-services-contrib
  version: 6

compilation:
  workers: 3
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    ram: 5120
    disk: 10240
    cpu: 2

update:
  canaries: 1
  canary_watch_time: 30000-60000
  update_watch_time: 30000-60000
  max_in_flight: 4

networks:
- name: default
  type: manual
  subnets:
  - range: exam 10.114..130.0/24
    gateway: exam 10.114..130.1
    cloud_properties:
      name: 'USH_UCS_CLOUD_FOUNDRY'

#resource_pools:
#- name: common
#  network: default
#  size: 8
#  stemcell:
#    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
#    version: '2865.1'

resource_pools:
- name: default
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: '2865.1'
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 10240
    datacenters:
    - name: 'Universal City'
      clusters:
      - USH_UCS_CLOUD_FOUNDRY_NONPROD_01: {resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'}

jobs:
- name: gateways
  release: cf-services-contrib
  templates:
  - name: postgresql_gateway_ng
  instances: 1
  resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    # Service credentials
    uaa_client_id: "cf"
    uaa_endpoint: http://uaa.devcloudwest.example.com
    uaa_client_auth_credentials:
      username: admin
      password: secret

- name: postgresql_service_node
  release: cf-services-contrib
  template: postgresql_node_ng
  instances: 1
  resource_pool: common
  persistent_disk: 10000
  properties:
    postgresql_node:
      plan: default
  networks:
  - name: default
    default: [dns, gateway]

properties:
  networks:
    apps: default
    management: default
  cc:
    srv_api_uri: http://api.devcloudwest.example.com
  nats:
    address: exam 10.114..130.11
    port: 25555
    user: nats #CHANGE
    password: secret
    authorization_timeout: 5
  service_plans:
    postgresql:
      default:
        description: "Developer, 250MB storage, 10 connections"
        free: true
        job_management:
          high_water: 230
          low_water: 20
        configuration:
          capacity: 125
          max_clients: 10
          quota_files: 4
          quota_data_size: 240
          enable_journaling: true
          backup:
            enable: false
          lifecycle:
            enable: false
            serialization: enable
            snapshot:
              quota: 1
  postgresql_gateway:
    token: f75df200-4daf-45b5-b92a-cb7fa1a25660
    default_plan: default
    supported_versions: ["9.3"]
    version_aliases:
      current: "9.3"
    cc_api_version: v2
  postgresql_node:
    supported_versions: ["9.3"]
    default_version: "9.3"
    max_tmp: 900
    password: secret
And here's a gist with the debug output from that error:
postgres_2423_debug.txt
The docs for the jobs blocks say:
resource_pool [String, required]: A valid resource pool name from the Resource Pools block. BOSH runs instances of this job in a VM from the named resource pool.
This needs to match the name of one of your resource_pools, namely default, not the name of the resource pool in vSphere.
The only sections that have direct references to the IaaS are the ones that say cloud_properties. Specific names of resources (like networks, clusters, or datacenters in your vSphere, or subnets, AZs, and instance types in AWS) only show up in places that say cloud_properties.
You use that data to define "networks" and "resource pools" at a higher level of abstraction that is IaaS-agnostic; except for cloud properties, the specifications you give for resource pools are the same whether you're deploying to vSphere, AWS, OpenStack, etc.
Then your jobs reference these networks, resource pools, etc. by the logical names you've given to the abstractions. In particular, jobs don't require any IaaS-specific configuration whatsoever, just references to a logical network(s) and a resource pool that you've defined elsewhere in your manifest.
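Applied to the manifest above, the gateways job should reference the logical pool name default; a sketch of the corrected stanza (only resource_pool changes, everything else stays as in your manifest):

jobs:
- name: gateways
  release: cf-services-contrib
  templates:
  - name: postgresql_gateway_ng
  instances: 1
  resource_pool: default   # the logical name from resource_pools, not the vSphere pool name
  networks:
  - name: default
    default: [dns, gateway]

Note that the postgresql_service_node job references resource_pool: common, which only exists in the commented-out block of the manifest, so it will fail with the same error until it also points at default (or the common pool is restored).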