Here is my docker-compose.yaml:
version: '3.3'
mongo:
  build:
    context: '.'
    dockerfile: 'Dockerfile'
  environment:
    MONGO_INITDB_DATABASE: 'mydb'
  ports:
    - '27017:27017'
  volumes:
    - 'data-storage:/data/db'
  networks:
    mynet:
volumes:
  data-storage:
networks:
  mynet:
Here is my Dockerfile:
FROM mongo:latest
COPY ./initdb.js /docker-entrypoint-initdb.d/
And finally here is my initdb.js:
db.createCollection("strategyitems");
db.strategyitems.createIndex( {strategy: 1 }, { unique: false } );
db.strategyitems.createIndex( {strategy: 1, symbol: 1 }, { unique: true } );
db.strategyitems.insertMany([
{ strategy: "crypto", symbol: "btcusd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 },
{ strategy: "crypto", symbol: "ethusd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 },
{ strategy: "crypto", symbol: "neousd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 }
]);
The container builds and starts successfully... but I can't get the db statements above to execute.
If I log into the container, the folder /docker-entrypoint-initdb.d/ contains initdb.js... so I'd expect the db to get initialized.
Am I missing something?
The supplied compose file doesn't work for me as-is; I had to edit it (it was missing the services: key) to get it up & running on Docker v18.06 CE, so heads-up on that.
version: '3.3'
services:
  mongo:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      MONGO_INITDB_DATABASE: 'mydb'
    ports:
      - '27017:27017'
    volumes:
      - 'data-storage:/data/db'
    networks:
      mynet:
volumes:
  data-storage:
networks:
  mynet:
Next: if you ran docker-compose up before adding the initdb.js file and then stopped the stack, docker-compose down removes the containers but does not remove the volume:
docker ps
| CONTAINER ID | IMAGE            | COMMAND              | CREATED       | STATUS       | PORTS                    | NAMES              |
|--------------|------------------|----------------------|---------------|--------------|--------------------------|--------------------|
| c412bbd9a22b | lumberjack_mongo | docker-entrypoint.s… | 7 minutes ago | Up 6 minutes | 0.0.0.0:27017->27017/tcp | lumberjack_mongo_1 |
docker volume ls
| DRIVER | VOLUME NAME             |
|--------|-------------------------|
| local  | lumberjack_data-storage |
docker-compose down
Removing lumberjack_mongo_1 ... done
Removing network lumberjack_mynet
docker volume ls
| DRIVER | VOLUME NAME             |
|--------|-------------------------|
| local  | lumberjack_data-storage |
The problem arises when docker-compose up is run while the volume already exists: Docker mounts the volume before the container starts, Mongo runs its pre-checks, and since it finds the data directories already present it skips the initdb sequence.
If you remove the volume after docker-compose down and then run docker-compose up, the volume is created from scratch, the pre-check finds nothing, and MongoDB gets initialized:
docker volume rm lumberjack_data-storage
lumberjack_data-storage
docker-compose up
Creating network "lumberjack_mynet" with the default driver
Creating volume "lumberjack_data-storage" with default driver
Creating lumberjack_mongo_1 ... done
Attaching to lumberjack_mongo_1
[....]
mongo_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/initdb.js
mongo_1 | 2018-08-04T18:08:47.699+0000 I INDEX [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
mongo_1 | 2018-08-04T18:08:47.745+0000 I NETWORK [conn2] received client metadata from 127.0.0.1:45324 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo_1 | 2018-08-04T18:08:47.747+0000 I STORAGE [conn2] createCollection: initdb.strategyitems with generated UUID: 585edb14-bc63-4879-bc5d-504867fb5e12
mongo_1 | 2018-08-04T18:08:47.851+0000 I INDEX [conn2] build index on: initdb.strategyitems properties: { v: 2, key: { strategy: 1.0 }, name: "strategy_1", ns: "initdb.strategyitems" }
mongo_1 | 2018-08-04T18:08:47.851+0000 I INDEX [conn2] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongo_1 | 2018-08-04T18:08:47.852+0000 I INDEX [conn2] build index done. scanned 0 total records. 0 secs
mongo_1 | 2018-08-04T18:08:47.881+0000 I INDEX [conn2] build index on: initdb.strategyitems properties: { v: 2, unique: true, key: { strategy: 1.0, symbol: 1.0 }, name: "strategy_1_symbol_1", ns: "initdb.strategyitems" }
mongo_1 | 2018-08-04T18:08:47.881+0000 I INDEX [conn2] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongo_1 | 2018-08-04T18:08:47.882+0000 I INDEX [conn2] build index done. scanned 0 total records. 0 secs
mongo_1 | 2018-08-04T18:08:47.886+0000 I NETWORK [conn2] end connection 127.0.0.1:45324 (0 connections now open)
[....]
mongo_1 | MongoDB init process complete; ready for start up.
mongo_1 |
mongo_1 | 2018-08-04T18:08:48.933+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongo_1 | 2018-08-04T18:08:48.939+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=e90c80083360
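As a shortcut, assuming you are fine with wiping the data volume whenever you want the init scripts to run again, docker-compose down -v removes the containers, the network and the named volumes declared in the compose file in one step:
docker-compose down -v
docker-compose up --build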
What I'm looking for is a yq query that returns the service names that are using
a specified volume for a given docker-compose.yml file.
For example, in the stripped down docker-compose.yml file below, say I am looking for the names of all services that
use the volume v-app-olorin.
version: "3"
services:
arwen:
this: that
volumes:
- v-app-mithrandir:/data/mithrandir
- v-app-olorin:/data/olorin
boromir:
volumes:
- v-app-mithrandir:/data/mithrandir
- v-app-stormcrow:/data/stormcrow
cirdan:
volumes:
- v-app-mithrandir:/data/mithrandir
- v-app-olorin:/data/olorin
volumes:
v-app-mithrandir:
name: v-app-mithrandir
v-app-olorin:
name: v-app-olorin
v-app-stormcrow:
name: v-app-stormcrow
The expected response would be:
arwen
cirdan
I can match simple key values with something like this:
yq e '.services | with_entries(select(.value.this == "that")) | to_entries | .[] | .key' docker-compose.yml
arwen
But I'm having trouble matching an element of the volumes array. Thank you for any help.
Here's an expression that does that:
yq '.services[] | select(.volumes[] | contains("v-app-olorin")) | key' docker-compose.yml
Explanation:
splat out the services entries into their individual nodes: .services[]
select the ones that have "v-app-olorin" in their volumes array: select(.volumes[] | contains("v-app-olorin"))
get the key of that service entry
Disclaimer: I wrote yq
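Run against the sample docker-compose.yml from the question, the expression should print the two matching service names:
yq '.services[] | select(.volumes[] | contains("v-app-olorin")) | key' docker-compose.yml
arwen
cirdan
Keep in mind contains does a substring match, so a volume like v-app-olorin-cache would also be picked up; for an exact match you'd want to compare only the part of the mount string before the ":".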
I'm trying to visualize MongoDB data in Kibana using the Logstash configuration below. I get some output in the terminal and it loops forever, but I can't see any index created with the name mentioned in the config file, and even if the index were created it has no data in it: the Discover tab says there are no results to match. How do I make this configuration get the data visualized in Kibana?
input {
  mongodb {
    uri => "mongodb+srv:###############?retryWrites=true&w=majority"
    placeholder_db_dir => "C:/logstash-mongodb"
    placeholder_db_name => "logstash1_sqlite.db"
    collection => "logs"
    batch_size => 1
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    action => "index"
    index => "ayesha_logs"
    hosts => ["localhost:9200"]
  }
}
http://localhost:9200/ayesha_logs/_search?pretty
Terminal logs:
D, [2020-10-01T08:11:45.717000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:259 conn:1:1 sconn:231839 | coexistence-poc.listCollections | STARTED | {"listCollections"=>1, "cursor"=>{}, "nameOnly"=>true, "$db"=>"coexistence-poc", "$clusterTime"=>{"clusterTime"=>#<BSON::Timestamp:0x32598cb2 #increment=1, #seconds=1601532700>, "signature"=>{"hash"=><BSON::Binary:0x2622 type=generic data=0xfaf25a8d85...
D, [2020-10-01T08:11:45.755000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:259 | coexistence-poc.listCollections | SUCCEEDED | 0.038s
D, [2020-10-01T08:11:50.801000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:260 conn:1:1 sconn:231839 | coexistence-poc.find | STARTED | {"find"=>"coexistence-pinfobackfill-logs", "filter"=>{"_id"=>{"$gt"=>BSON::ObjectId('5f71f009b6b9115861d379d8')}}, "limit"=>50, "$db"=>"coexistence-poc", "$clusterTime"=>{"clusterTime"=>#<BSON::Timestamp:0x32598cb2 #increment=1, #seconds=1601532700>, ...
D, [2020-10-01T08:11:50.843000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:260 | coexistence-poc.find | SUCCEEDED | 0.042s
D, [2020-10-01T08:11:50.859000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:261 conn:1:1 sconn:231839 | coexistence-poc.listCollections | STARTED | {"listCollections"=>1, "cursor"=>{}, "nameOnly"=>true, "$db"=>"coexistence-poc", "$clusterTime"=>{"clusterTime"=>#<BSON::Timestamp:0x32598cb2 #increment=1, #seconds=1601532700>, "signature"=>{"hash"=><BSON::Binary:0x2622 type=generic data=0xfaf25a8d85...
D, [2020-10-01T08:11:50.906000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:261 | coexistence-poc.listCollections | SUCCEEDED | 0.047s
Did you create your Kibana index pattern?
If not, go to Menu > Stack Management > Kibana > Index Patterns,
click on "Create index pattern"
and follow the steps.
You will then be able to use your index in the Discover or Visualize tabs.
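To double-check on the Elasticsearch side that the index exists and actually received documents (assuming Elasticsearch is on localhost:9200 as in the config), the _cat and _count APIs are handy:
curl -s 'http://localhost:9200/_cat/indices/ayesha_logs?v'
curl -s 'http://localhost:9200/ayesha_logs/_count?pretty'
If the count stays at zero, the problem is on the Logstash output side rather than in Kibana.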
I've looked at SO posts related to this question here, here, here, and here, but I haven't had any luck with the fixes proposed. Whenever I run the command docker-compose -f stack.yml up I receive the following stack trace:
Attaching to weg-api_db_1, weg-api_weg-api_1
db_1 | 2018-07-04 14:57:15.384 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-07-04 14:57:15.384 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-07-04 14:57:15.388 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-07-04 14:57:15.402 UTC [23] LOG: database system was interrupted; last known up at 2018-07-04 14:45:24 UTC
db_1 | 2018-07-04 14:57:15.513 UTC [23] LOG: database system was not properly shut down; automatic recovery in progress
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: redo starts at 0/16341E0
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: invalid record length at 0/1634218: wanted 24, got 0
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: redo done at 0/16341E0
db_1 | 2018-07-04 14:57:15.525 UTC [1] LOG: database system is ready to accept connections
weg-api_1 |
weg-api_1 | . ____ _ __ _ _
weg-api_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
weg-api_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
weg-api_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
weg-api_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
weg-api_1 | =========|_|==============|___/=/_/_/_/
weg-api_1 | :: Spring Boot :: (v1.5.3.RELEASE)
weg-api_1 |
weg-api_1 | 2018-07-04 14:57:16.908 INFO 7 --- [ main] api.ApiKt : Starting ApiKt v0.0.1-SNAPSHOT on f9c58f4f2f27 with PID 7 (/app/spring-jpa-postgresql-spring-boot-0.0.1-SNAPSHOT.jar started by root in /app)
weg-api_1 | 2018-07-04 14:57:16.913 INFO 7 --- [ main] api.ApiKt : No active profile set, falling back to default profiles: default
weg-api_1 | 2018-07-04 14:57:17.008 INFO 7 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#6e5e91e4: startup date [Wed Jul 04 14:57:17 GMT 2018]; root of context hierarchy
weg-api_1 | 2018-07-04 14:57:19.082 INFO 7 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
weg-api_1 | 2018-07-04 14:57:19.102 INFO 7 --- [ main] o.apache.catalina.core.StandardService : Starting service Tomcat
weg-api_1 | 2018-07-04 14:57:19.104 INFO 7 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.14
weg-api_1 | 2018-07-04 14:57:19.215 INFO 7 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
weg-api_1 | 2018-07-04 14:57:19.215 INFO 7 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 2211 ms
weg-api_1 | 2018-07-04 14:57:19.370 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
weg-api_1 | 2018-07-04 14:57:19.375 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.867 ERROR 7 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
weg-api_1 |
weg-api_1 | org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
I thought my .yml file was brain-dead simple, but I must be missing something vital, since the internal routing between the two containers is failing.
EDIT
My stack.yml is below:
version: '3'
services:
  db:
    image: postgres
    restart: always
    container_name: db
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: weg
    ports:
      - "5432:5432"
  weg-api:
    image: weg-api
    restart: always
    container_name: weg-api
    ports:
      - "8080:8080"
    depends_on:
      - "db"
EDIT
My Springboot application properties are below:
spring.datasource.url=jdbc:postgresql://db:5432/weg
spring.datasource.username=root
spring.datasource.password=password
spring.jpa.generate-ddl=true
I'm at a loss as to how to proceed.
Your database is running in the db container, not on localhost inside your weg-api container. Therefore, you have to change
spring.datasource.url=jdbc:postgresql://localhost:5432/weg
to
spring.datasource.url=jdbc:postgresql://db:5432/weg
I would also suggest you give a container_name to each of your containers so the container names are always the same. Otherwise you might sometimes get different names depending on your configuration.
version: '3'
services:
  db:
    image: postgres
    restart: always
    container_name: db
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: weg
    ports:
      - "5432:5432"
  weg-api:
    image: weg-api
    restart: always
    container_name: weg-api
    ports:
      - "8080:8080"
    depends_on:
      - "db"
I have a Postgres 9.4 RDS instance with Multi-AZ, and there's a slave, read-only replica.
Up to this point the load balancing was done in the business layer of my app, but it's inefficient, and I was hoping to use PGPool so the app interacts with a single Postgres connection.
It turns out that using PGPool has been a pain in the ass. If I set it to act as a load balancer, simple SELECT queries throw errors like:
SQLSTATE[HY000]: General error: 7
message contents do not agree with length in message type "N"
server sent data ("D" message) without prior row description ("T" message)
If I set it to act in master/slave mode with streaming replication (as suggested on the Postgres mailing list) I get:
psql: ERROR: MD5 authentication is unsupported in replication and master-slave modes.
HINT: check pg_hba.conf
Yeah, well, pg_hba.conf is off-limits in RDS, so I can't alter it.
Has anyone got PGPool to work in RDS? Are there other tools that can act as middleware to take advantage of reading replicas in RDS?
I was able to make it work. Here are my working config files.
You have to use md5 authentication and sync the username/password from your database into the pool_passwd file. You also need enable_pool_hba, load_balance_mode, and master_slave_mode turned on.
pgpool.conf
listen_addresses = '*'
port = 9999
pcp_listen_addresses = '*'
pcp_port = 9898
pcp_socket_dir = '/tmp'
listen_backlog_multiplier = 1
backend_hostname0 = 'master-rds-database-with-multi-AZ.us-west-2.rds.amazonaws.com'
backend_port0 = 5432
backend_weight0 = 0
backend_flag0 = 'ALWAYS_MASTER'
backend_hostname1 = 'readonly-replica.us-west-2.rds.amazonaws.com'
backend_port1 = 5432
backend_weight1 = 999
backend_flag1 = 'ALWAYS_MASTER'
enable_pool_hba = on
pool_passwd = 'pool_passwd'
ssl = on
num_init_children = 1
max_pool = 2
connection_cache = off
replication_mode = off
load_balance_mode = on
master_slave_mode = on
pool_hba.conf
local all all md5
host all all 127.0.0.1/32 md5
pool_passwd
username:md5d51c9a7e9353746a6020f9602d452929
To update pool_passwd you can use pg_md5, or compute the hash yourself (the md5 format is the same as PostgreSQL's: md5 of the password concatenated with the username, password first):
echo username:md5`echo -n passwordusername | md5sum`
username:md5d51c9a7e9353746a6020f9602d452929 -
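For the record, pg_md5 ships with pgpool and can write the entry for you; assuming I'm remembering the flags correctly, something like this updates pool_passwd directly:
pg_md5 --md5auth --username=username password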
Output of running example:
psql --dbname=database --host=localhost --username=username --port=9999
database=> SHOW POOL_NODES;
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+-------------------------------------------------+------+--------+-----------+---------+------------+-------------------+-------------------
0 | master-rds-database.us-west-2.rds.amazonaws.com | 8193 | up | 0.000000 | primary | 0 | false | 0
1 | readonly-replica.us-west-2.rds.amazonaws.com | 8193 | up | 1.000000 | standby | 0 | true | 0
database=> select now();
database=> SHOW POOL_NODES;
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+-------------------------------------------------+------+--------+-----------+---------+------------+-------------------+-------------------
0 | master-rds-database.us-west-2.rds.amazonaws.com | 8193 | up | 0.000000 | primary | 0 | false | 0
1 | readonly-replica.us-west-2.rds.amazonaws.com | 8193 | up | 1.000000 | standby | 1 | true | 1
database=> CREATE TABLE IF NOT EXISTS tmp_test_read_write ( data varchar(40) );
CREATE TABLE
database=> INSERT INTO tmp_test_read_write (data) VALUES (concat('',inet_server_addr()));
INSERT 0 1
database=> select data as master_ip,inet_server_addr() as replica_ip from tmp_test_read_write;
master_ip | replica_ip
--------------+---------------
172.31.37.69 | 172.31.20.121
(1 row)
You can also see from the logs that it hits both databases:
2018-10-16 07:56:37: pid 124528: LOG: DB node id: 0 backend pid: 21731 statement: CREATE TABLE IF NOT EXISTS tmp_test_read_write ( data varchar(40) );
2018-10-16 07:56:47: pid 124528: LOG: DB node id: 0 backend pid: 21731 statement: INSERT INTO tmp_test_read_write (data) VALUES (concat('',inet_server_addr()));
2018-10-16 07:56:52: pid 124528: LOG: DB node id: 1 backend pid: 24890 statement: select data as master_ip,inet_server_addr() as replica_ip from tmp_test_read_write;
Notice the INSERT used the IP address of the master, and the next SELECT used the IP address of the read-only replica.
I can update after more testing, but psql client testing looks promising.
There is Citus (pg_shard), which is supposed to work with standard Amazon RDS instances. It has catches, though: with the open source version you have a single point of failure, since its coordinator node is not duplicated.
You can get a fully HA, seamless-failover version of it, but you have to buy the enterprise licence, and it is CRAZY expensive. It will easily cost you $50,000 to $100,000 or more per year.
They are also REALLY pushing their cloud version now, which is even more insanely expensive.
https://www.citusdata.com/
I have also heard of people using HAProxy to balance between Postgres or MySQL nodes.
I'm attempting to set up a service broker to add Postgres to our Cloud Foundry installation. We're running our system on VMware. I'm using this release to do that:
cf-contrib-release
I added the release in bosh:
#bosh releases
Acting as user 'director' on 'microbosh-ba846726bed7032f1fd4'
+-----------------------+----------------------+-------------+
| Name | Versions | Commit Hash |
+-----------------------+----------------------+-------------+
| cf | 208.12* | a0de569a+ |
| cf-autoscaling | 13* | 927bc7ed+ |
| cf-metrics | 34* | 22f7e1e1 |
| cf-mysql | 20* | caa23b3d+ |
| | 22* | af278086+ |
| cf-rabbitmq | 161* | 4d298aec |
| cf-riak-cs | 10* | 5e7e46c9+ |
| cf-services-contrib | 6* | 57fd2098+ |
| docker | 23* | 82346881+ |
| newrelic_broker | 1.3* | 1ce3471d+ |
| notifications-with-ui | 18* | 490b6446+ |
| postgresql-docker | 4* | a53c9333+ |
| push-console-release | console-du-jour-203* | d2d31585+ |
| spring-cloud-broker | 1.0.0* | efd69612 |
+-----------------------+----------------------+-------------+
(*) Currently deployed
(+) Uncommitted changes
Releases total: 13
I set up my resource pools and jobs in my YAML file according to this documentation:
http://bosh.io/docs/vsphere-cpi.html#resource-pools
This is how our cluster looks:
[screenshot: vmware cluster]
And here is what I put in the yaml file:
resource_pools:
- name: default
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: '2865.1'
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 10240
    datacenters:
    - name: 'Universal City'
      clusters:
      - USH_UCS_CLOUD_FOUNDRY_NONPROD_01: {resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'}

jobs:
- name: gateways
  release: cf-services-contrib
  templates:
  - name: postgresql_gateway_ng
  instances: 1
  resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    # Service credentials
    uaa_client_id: "cf"
    uaa_endpoint: http://uaa.devcloudwest.example.com
    uaa_client_auth_credentials:
      username: admin
      password: secret
And I'm getting an error when I run 'bosh deploy' that says:
Error 140003: Job `gateways' references an unknown resource pool `USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
Here's my yaml file in its entirety:
name: cf-22b9f4d62bb6f0563b71
director_uuid: fd713790-b1bc-401a-8ea1-b8209f1cc90c

releases:
- name: cf-services-contrib
  version: 6

compilation:
  workers: 3
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    ram: 5120
    disk: 10240
    cpu: 2

update:
  canaries: 1
  canary_watch_time: 30000-60000
  update_watch_time: 30000-60000
  max_in_flight: 4

networks:
- name: default
  type: manual
  subnets:
  - range: exam 10.114..130.0/24
    gateway: exam 10.114..130.1
    cloud_properties:
      name: 'USH_UCS_CLOUD_FOUNDRY'

#resource_pools:
#  - name: common
#    network: default
#    size: 8
#    stemcell:
#      name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
#      version: '2865.1'

resource_pools:
- name: default
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: '2865.1'
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 10240
    datacenters:
    - name: 'Universal City'
      clusters:
      - USH_UCS_CLOUD_FOUNDRY_NONPROD_01: {resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'}

jobs:
- name: gateways
  release: cf-services-contrib
  templates:
  - name: postgresql_gateway_ng
  instances: 1
  resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    # Service credentials
    uaa_client_id: "cf"
    uaa_endpoint: http://uaa.devcloudwest.example.com
    uaa_client_auth_credentials:
      username: admin
      password: secret

- name: postgresql_service_node
  release: cf-services-contrib
  template: postgresql_node_ng
  instances: 1
  resource_pool: common
  persistent_disk: 10000
  properties:
    postgresql_node:
      plan: default
  networks:
  - name: default
    default: [dns, gateway]

properties:
  networks:
    apps: default
    management: default

  cc:
    srv_api_uri: http://api.devcloudwest.example.com

  nats:
    address: exam 10.114..130.11
    port: 25555
    user: nats #CHANGE
    password: secret
    authorization_timeout: 5

  service_plans:
    postgresql:
      default:
        description: "Developer, 250MB storage, 10 connections"
        free: true
        job_management:
          high_water: 230
          low_water: 20
        configuration:
          capacity: 125
          max_clients: 10
          quota_files: 4
          quota_data_size: 240
          enable_journaling: true
          backup:
            enable: false
          lifecycle:
            enable: false
            serialization: enable
            snapshot:
              quota: 1

  postgresql_gateway:
    token: f75df200-4daf-45b5-b92a-cb7fa1a25660
    default_plan: default
    supported_versions: ["9.3"]
    version_aliases:
      current: "9.3"
    cc_api_version: v2

  postgresql_node:
    supported_versions: ["9.3"]
    default_version: "9.3"
    max_tmp: 900
    password: secret
And here's a gist with the debug output from that error:
postgres_2423_debug.txt
The docs for the jobs blocks say:
resource_pool [String, required]: A valid resource pool name from the Resource Pools block. BOSH runs instances of this job in a VM from the named resource pool.
This needs to match the name of one of your resource_pools, namely default, not the name of the resource pool in vSphere.
The only sections that have direct references to the IaaS are things that say cloud_properties. Specific names of resources (like networks, clusters, or datacenters in your vSphere, or subnets, AZs, and instance types in AWS) only show up in places that say cloud_properties.
You use that data to define "networks" and "resource pools" at a higher level of abstraction that is IaaS-agnostic; e.g., except for cloud properties, the specifications you give for resource pools are the same whether you're deploying to vSphere, AWS, OpenStack, etc.
Then your jobs reference these networks, resource pools, etc. by the logical name you've given to the abstractions. In particular, jobs don't require any IaaS-specific configuration whatsoever, just references to a logical network(s) and a resource pool that you've defined elsewhere in your manifest.
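Concretely, for the manifest above that means pointing the jobs at the logical pool you defined, not at the vSphere pool name; note the second job references common, which only exists in the commented-out block, so it has the same problem. A sketch of just the lines that change:
jobs:
- name: gateways
  resource_pool: default   # must match a name under resource_pools
  # ...
- name: postgresql_service_node
  resource_pool: default   # 'common' is only defined in the commented-out block
  # ...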