Is it possible to install a plugin into PostgreSQL 13 running in Kubernetes?

I have PostgreSQL 13 running in Kubernetes and need to add a new plugin, zhparser. To my surprise, I did not find a way to install this plugin into a PostgreSQL instance running in Kubernetes; the image ships with some popular plugins, but zhparser is not among them. Is it possible to install zhparser into PostgreSQL running in Kubernetes? This is the Kubernetes YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: reddwarf-postgresql-postgresql
  namespace: reddwarf-storage
  uid: cc36a4d3-d8a3-474a-a5fd-311e4c70e6a1
  resourceVersion: '23279377'
  generation: 15
  creationTimestamp: '2021-11-27T06:10:09Z'
  labels:
    app.kubernetes.io/component: primary
    app.kubernetes.io/instance: reddwarf-postgresql
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-10.9.1
status:
  observedGeneration: 15
  replicas: 1
  readyReplicas: 1
  currentReplicas: 1
  updatedReplicas: 1
  currentRevision: reddwarf-postgresql-postgresql-6477565bf5
  updateRevision: reddwarf-postgresql-postgresql-6477565bf5
  collisionCount: 0
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: reddwarf-postgresql
      app.kubernetes.io/name: postgresql
      role: primary
  template:
    metadata:
      name: reddwarf-postgresql
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: primary
        app.kubernetes.io/instance: reddwarf-postgresql
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: postgresql
        helm.sh/chart: postgresql-10.9.1
        role: primary
    spec:
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      - name: data-reddwarf-postgresql-postgresql-general
        persistentVolumeClaim:
          claimName: data-reddwarf-postgresql-postgresql-general
      containers:
      - name: reddwarf-postgresql
        image: docker.io/bitnami/postgresql:13.3.0-debian-10-r75
        ports:
        - name: tcp-postgresql
          containerPort: 5432
          protocol: TCP
        env:
        - name: BITNAMI_DEBUG
          value: 'false'
        - name: POSTGRESQL_PORT_NUMBER
          value: '5432'
        - name: POSTGRESQL_VOLUME_DIR
          value: /bitnami/postgresql
        - name: PGDATA
          value: /bitnami/postgresql/data
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: reddwarf-postgresql
              key: postgresql-password
        - name: POSTGRESQL_ENABLE_LDAP
          value: 'no'
        - name: POSTGRESQL_ENABLE_TLS
          value: 'no'
        - name: POSTGRESQL_LOG_HOSTNAME
          value: 'false'
        - name: POSTGRESQL_LOG_CONNECTIONS
          value: 'false'
        - name: POSTGRESQL_LOG_DISCONNECTIONS
          value: 'false'
        - name: POSTGRESQL_PGAUDIT_LOG_CATALOG
          value: 'off'
        - name: POSTGRESQL_CLIENT_MIN_MESSAGES
          value: error
        - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
          value: pgaudit
        resources:
          limits:
            cpu: 600m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 256Mi
        volumeMounts:
        - name: dshm
          mountPath: /dev/shm
        - name: data-reddwarf-postgresql-postgresql-general
          mountPath: /bitnami/postgresql
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - '-c'
            - exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
          initialDelaySeconds: 30
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 6
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - '-c'
            - '-e'
            - >
              exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432

              [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f
              /bitnami/postgresql/.initialized ]
          initialDelaySeconds: 5
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 6
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 1001
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      automountServiceAccountToken: false
      securityContext:
        fsGroup: 1001
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/component: primary
                  app.kubernetes.io/instance: reddwarf-postgresql
                  app.kubernetes.io/name: postgresql
              namespaces:
              - reddwarf-storage
              topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
  serviceName: reddwarf-postgresql-headless
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
  revisionHistoryLimit: 10
Or should I modify the official Dockerfile to add the plugin and build my own image?
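If building a custom image turns out to be the way to go, a rough, untested sketch could look like the following. It assumes the Bitnami image ships the PostgreSQL server headers and pg_config under /opt/bitnami/postgresql, that SCWS 1.2.3 (the word-segmentation library zhparser depends on) is fetched from xunsearch.com, and that zhparser is built from its GitHub repository; adjust versions and paths as needed.
# Sketch only: extend the Bitnami image used by the StatefulSet above.
FROM docker.io/bitnami/postgresql:13.3.0-debian-10-r75

USER root
# Build tools plus SCWS, which zhparser requires at build and run time.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git wget ca-certificates bzip2 \
    && wget -qO- http://www.xunsearch.com/scws/down/scws-1.2.3.tar.bz2 | tar xj \
    && cd scws-1.2.3 && ./configure && make install && ldconfig && cd ..
# Compile zhparser against the PostgreSQL bundled in this image (pg_config path is an assumption).
RUN git clone --depth 1 https://github.com/amutu/zhparser.git \
    && cd zhparser && make PG_CONFIG=/opt/bitnami/postgresql/bin/pg_config install \
    && cd .. && rm -rf scws-1.2.3 zhparser && rm -rf /var/lib/apt/lists/*
USER 1001
After pointing the StatefulSet's image field at the custom build and letting the pods roll, the extension still has to be enabled per database with CREATE EXTENSION zhparser;.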

Related

Helm configmap template not working when using files

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.configMap.name | quote }}
  labels:
    name: {{ .Values.configMap.name | quote }}
data:
  application.yaml: |-
{{ .Files.Get "application.yaml" | indent 4}}
  otherFile.csv: |-
{{ .Files.Get "otherFile.csv" | indent 4}}
But I want to make the ConfigMap more generic, so I was thinking of something like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.configMap.name | quote }}
  labels:
    name: {{ .Values.configMap.name | quote }}
data:
{{- range $data := .Values.configMap.data }}
  {{ $data }}: |-
{{ .Files.Get $data | indent 4}}
{{- end}}
And in the values.yaml to have something like this:
configMap:
  name: app-configmap
  data:
    - application.yaml
    - otherFile.csv
If I don't make it generic it works, but if I try to make it generic I get errors:
executing "app/templates/configmap.yaml" at <.Files.AsConfig>: wrong number of args for AsConfig: want 0 got 1
helm.go:84: [debug] template: app/templates/configmap.yaml:10:9: executing "app/templates/configmap.yaml" at <.Files.AsConfig>: wrong number of args for AsConfig: want 0 got 1
Any hint?
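One thing worth checking in the generic version (a guess, since the pasted error points at .Files.AsConfig rather than .Files.Get): inside range, the dot is rebound to the current list element, so .Files is no longer in scope and has to be reached through the root context $. A sketch of the data block with that change:
data:
{{- range $data := .Values.configMap.data }}
  {{ $data }}: |-
{{ $.Files.Get $data | indent 4 }}
{{- end }}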

Replace a string inside a multiline string

I have the following yaml file:
values: |
  nameOverride: my-service
  fullnameOverride: ""
  namespace: my-ns
  containerApps:
    - name: app-frontend
      image_tag: xxxxxxx
    - name: app-backend
      image_tag: xxxxxxx
I'm looking for a way to replace, e.g., xxxxxxx with yyyyyy in containerApps.[app-frontend].image_tag within the multiline value (values: |).
The output being:
values: |
  nameOverride: my-service
  fullnameOverride: ""
  namespace: my-ns
  containerApps:
    - name: app-frontend
      image_tag: yyyyyy
    - name: app-backend
      image_tag: xxxxxxx
How can this be accomplished using yq?
Any help is welcome.
Here's a solution using mikefarah/yq. It decodes the multiline string using @yamld, makes the substitution using sub, and encodes the result back using @yaml.
yq '
  .values |= (
    @yamld | (
      .containerApps[] | select(.name == "app-frontend") | .image_tag
    ) |= sub("xxxxxxx", "yyyyyy")
    | @yaml
  )
'
values: |
  nameOverride: my-service
  fullnameOverride: ""
  namespace: my-ns
  containerApps:
    - name: app-frontend
      image_tag: yyyyyy
    - name: app-backend
      image_tag: xxxxxxx
To update the file in-place (instead of just outputting it), use the -i flag.
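For example, assuming the document above is stored in a file named values.yaml (hypothetical name), the in-place invocation would be:
yq -i '
  .values |= (
    @yamld | (
      .containerApps[] | select(.name == "app-frontend") | .image_tag
    ) |= sub("xxxxxxx", "yyyyyy")
    | @yaml
  )
' values.yaml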

Helm add secrets from another YAML file

I have a credential.yaml file like this:
key1: value1
key2: value2
...and so on
How do I add these key/value pairs from credential.yaml as secrets? I am able to add secrets defined in the Values object by looping over them as follows:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "ocp-auth.fullname" . }}
  labels:
    {{- include "ocp-auth.labels" . | nindent 4 }}
type: Opaque
data:
  {{- range $key,$value := .Values.secrets }}
  {{ $key }}: {{ $value | b64enc | quote }}
  {{- end }}
but this does not work for credential.yaml.
1. If you use the file name as the key and the file content as the value, you can write it as follows:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "ocp-auth.fullname" . }}
  labels:
    {{- include "ocp-auth.labels" . | nindent 4 }}
type: Opaque
data:
{{ (.Files.Glob "data/credential.yaml").AsSecrets | indent 2 }}
Here, data/credential.yaml is the path where the YAML file is located.
Result:
apiVersion: v1
data:
  credential.yaml: a2V5MTogdmFsdWUxCmtleTI6IHZhbHVlMg==
kind: Secret
metadata:
  name: ocp-auth
type: Opaque
Decode:
# echo "a2V5MTogdmFsdWUxCmtleTI6IHZhbHVlMg==" | base64 -D
key1: value1
key2: value2
2. If you need to use each key in the file as a separate item in data, you can write it as follows:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "ocp-auth.fullname" . }}
type: Opaque
data:
{{- range .Files.Lines "data/credential.yaml" }}
{{- range $i, $v := . | split ":" }}
{{- if eq $i "_0" }}
  {{ $v }}:
{{- else }}
  {{ $v | trim | b64enc }}
{{- end }}
{{- end }}
{{- end }}
Result:
apiVersion: v1
data:
  key1: dmFsdWUx
  key2: dmFsdWUy
kind: Secret
metadata:
  name: ops-auth
type: Opaque
Decode:
# echo "dmFsdWUx" | base64 -D
value1
# echo "dmFsdWUy" | base64 -D
value2
This approach has more restrictions and is not recommended: it can only process single-line values, and it assumes there is only one ':' in each line.
helm doc
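A variant that sidesteps both restrictions (a sketch, assuming the chart runs on a Helm version that provides the fromYaml function) parses the file once and ranges over the resulting map:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "ocp-auth.fullname" . }}
type: Opaque
data:
{{- range $key, $value := .Files.Get "data/credential.yaml" | fromYaml }}
  {{ $key }}: {{ $value | toString | b64enc }}
{{- end }}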

Is there a way to set up a sink and source connector for this Debezium connector?

I'm using the debezium-connector found here: https://repo1.maven.org/maven2/io/debezium/debezium-connector-oracle/1.4.0.Final/debezium-connector-oracle-1.4.0.Final-plugin.tar.gz
And I'm following these instructions for docker-compose: https://github.com/confluentinc/demo-scene/blob/master/oracle-and-kafka/docker-compose.yml
I did this for the JDBC connector using confluent-hub, but I don't know how to do it for Debezium; it is not solved by simply adding the plugin to /usr/share/java and running the stack.
So my docker-compose is:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.1
    hostname: zookeeper
    container_name: zookeeper
    volumes:
      - /dados/persistence/zookeeper/data:/var/lib/zookeeper/data
      - /dados/persistence/zookeeper/log:/var/lib/zookeeper/log
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-server:6.0.1
    hostname: broker
    container_name: broker
    volumes:
      - /dados/persistence/broker/data:/var/lib/kafka/data
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:6.0.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
  kafka-connect:
    image: cnfldemos/cp-server-connect-datagen:0.4.0-6.0.1
    hostname: connect
    container_name: kafka-connect
    volumes:
      - /dados/packages/confluent-hub/share/confluent-hub-components:/usr/share/confluent-hub-components/custom
      - /dados/persistence/kafka-connect/jars:/etc/kafka-connect/jars
    depends_on:
      - zookeeper
      - kafka
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'kafka:29092'
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components,/usr/share/confluent-hub-components/custom"
      LD_LIBRARY_PATH: '/usr/share/java/debezium-connector-oracle/instantclient_19_6/'
  control-center:
    image: confluentinc/cp-enterprise-control-center:6.0.1
    hostname: control-center
    container_name: control-center
    depends_on:
      - kafka
      - schema-registry
      - kafka-connect
      - ksqldb
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'kafka:29092'
      CONTROL_CENTER_CONNECT_CLUSTER: 'kafka-connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://10.58.0.207:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://10.58.0.207:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://10.58.0.207:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
  ksqldb:
    image: confluentinc/cp-ksqldb-server:6.0.1
    hostname: ksqldb
    container_name: ksqldb-server
    depends_on:
      - kafka
      - kafka-connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_BOOTSTRAP_SERVERS: kafka:29092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
      KSQL_KSQL_CONNECT_URL: http://kafka-connect:8083
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:6.0.1
    container_name: ksqldb-cli
    depends_on:
      - kafka
      - kafka-connect
      - ksqldb
    entrypoint: /bin/sh
    tty: true
  ksql-datagen:
    image: confluentinc/ksqldb-examples:6.0.1
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb
      - kafka
      - schema-registry
      - kafka-connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
              cub kafka-ready -b broker:29092 1 40 && \
              echo Waiting for Confluent Schema Registry to be ready... && \
              cub sr-ready schema-registry 8081 40 && \
              echo Waiting a few seconds for topic creation to finish... && \
              sleep 11 && \
              tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: kafka:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081
  rest-proxy:
    image: confluentinc/cp-kafka-rest:6.0.1
    depends_on:
      - kafka
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'kafka:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
You need to add /etc/kafka-connect/jars to CONNECT_PLUGIN_PATH
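Based on the compose file above, the variable would then look like this (the kafka-connect container has to be recreated for the change to take effect):
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components,/usr/share/confluent-hub-components/custom,/etc/kafka-connect/jars"
Kafka Connect scans each directory on the plugin path for connector subdirectories, so the extracted debezium-connector-oracle folder should sit directly under /etc/kafka-connect/jars (or under one of the other listed paths).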

Cloud Foundry bosh Error 140003: unknown resource pool

I'm attempting to set up a service broker to add PostgreSQL to our Cloud Foundry installation. We're running our system on VMware. I'm using this release in order to do that:
cf-contrib-release
I added the release to BOSH:
#bosh releases
Acting as user 'director' on 'microbosh-ba846726bed7032f1fd4'
+-----------------------+----------------------+-------------+
| Name                  | Versions             | Commit Hash |
+-----------------------+----------------------+-------------+
| cf                    | 208.12*              | a0de569a+   |
| cf-autoscaling        | 13*                  | 927bc7ed+   |
| cf-metrics            | 34*                  | 22f7e1e1    |
| cf-mysql              | 20*                  | caa23b3d+   |
|                       | 22*                  | af278086+   |
| cf-rabbitmq           | 161*                 | 4d298aec    |
| cf-riak-cs            | 10*                  | 5e7e46c9+   |
| cf-services-contrib   | 6*                   | 57fd2098+   |
| docker                | 23*                  | 82346881+   |
| newrelic_broker       | 1.3*                 | 1ce3471d+   |
| notifications-with-ui | 18*                  | 490b6446+   |
| postgresql-docker     | 4*                   | a53c9333+   |
| push-console-release  | console-du-jour-203* | d2d31585+   |
| spring-cloud-broker   | 1.0.0*               | efd69612    |
+-----------------------+----------------------+-------------+
(*) Currently deployed
(+) Uncommitted changes
Releases total: 13
I set up my resource pools and jobs in my YAML file according to this documentation:
http://bosh.io/docs/vsphere-cpi.html#resource-pools
This is how our cluster looks:
vmware cluster
And here is what I put in the YAML file:
resource_pools:
- name: default
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: '2865.1'
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 10240
    datacenters:
    - name: 'Universal City'
      clusters:
      - USH_UCS_CLOUD_FOUNDRY_NONPROD_01: {resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'}
jobs:
- name: gateways
  release: cf-services-contrib
  templates:
  - name: postgresql_gateway_ng
  instances: 1
  resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    # Service credentials
    uaa_client_id: "cf"
    uaa_endpoint: http://uaa.devcloudwest.example.com
    uaa_client_auth_credentials:
      username: admin
      password: secret
And I'm getting an error when I run 'bosh deploy' that says:
Error 140003: Job `gateways' references an unknown resource pool `USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
Here's my YAML file in its entirety:
name: cf-22b9f4d62bb6f0563b71
director_uuid: fd713790-b1bc-401a-8ea1-b8209f1cc90c
releases:
- name: cf-services-contrib
  version: 6
compilation:
  workers: 3
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    ram: 5120
    disk: 10240
    cpu: 2
update:
  canaries: 1
  canary_watch_time: 30000-60000
  update_watch_time: 30000-60000
  max_in_flight: 4
networks:
- name: default
  type: manual
  subnets:
  - range: exam 10.114..130.0/24
    gateway: exam 10.114..130.1
    cloud_properties:
      name: 'USH_UCS_CLOUD_FOUNDRY'
#resource_pools:
#  - name: common
#    network: default
#    size: 8
#    stemcell:
#      name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
#      version: '2865.1'
resource_pools:
- name: default
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: '2865.1'
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 10240
    datacenters:
    - name: 'Universal City'
      clusters:
      - USH_UCS_CLOUD_FOUNDRY_NONPROD_01: {resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'}
jobs:
- name: gateways
  release: cf-services-contrib
  templates:
  - name: postgresql_gateway_ng
  instances: 1
  resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    # Service credentials
    uaa_client_id: "cf"
    uaa_endpoint: http://uaa.devcloudwest.example.com
    uaa_client_auth_credentials:
      username: admin
      password: secret
- name: postgresql_service_node
  release: cf-services-contrib
  template: postgresql_node_ng
  instances: 1
  resource_pool: common
  persistent_disk: 10000
  properties:
    postgresql_node:
      plan: default
  networks:
  - name: default
    default: [dns, gateway]
properties:
  networks:
    apps: default
    management: default
  cc:
    srv_api_uri: http://api.devcloudwest.example.com
  nats:
    address: exam 10.114..130.11
    port: 25555
    user: nats #CHANGE
    password: secret
    authorization_timeout: 5
  service_plans:
    postgresql:
      default:
        description: "Developer, 250MB storage, 10 connections"
        free: true
        job_management:
          high_water: 230
          low_water: 20
        configuration:
          capacity: 125
          max_clients: 10
          quota_files: 4
          quota_data_size: 240
          enable_journaling: true
          backup:
            enable: false
          lifecycle:
            enable: false
            serialization: enable
            snapshot:
              quota: 1
  postgresql_gateway:
    token: f75df200-4daf-45b5-b92a-cb7fa1a25660
    default_plan: default
    supported_versions: ["9.3"]
    version_aliases:
      current: "9.3"
    cc_api_version: v2
  postgresql_node:
    supported_versions: ["9.3"]
    default_version: "9.3"
    max_tmp: 900
    password: secret
And here's a gist with the debug output from that error:
postgres_2423_debug.txt
The docs for the jobs blocks say:
resource_pool [String, required]: A valid resource pool name from the Resource Pools block. BOSH runs instances of this job in a VM from the named resource pool.
This needs to match the name of one of your resource_pools, namely default, not the name of the resource pool in vSphere.
The only sections that have direct references to the IaaS are things that say cloud_properties. Specific names of resources (like networks, clusters, or datacenters in your vSphere, or subnets, AZs, and instance types in AWS) only show up in places that say cloud_properties.
You use that data to define "networks" and "resource pools" at a higher level of abstraction that is IaaS-agnostic: except for cloud properties, the specifications you give for resource pools are the same whether you're deploying to vSphere, AWS, OpenStack, etc.
Then your jobs reference these networks, resource pools, etc. by the logical name you've given to the abstractions. In particular, jobs don't require any IaaS-specific configuration whatsoever, just references to a logical network(s) and a resource pool that you've defined elsewhere in your manifest.
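Applied to the manifest above, the gateways job would reference the logical pool like this (note the second job's resource_pool: common would also need a pool of that name to be defined, since the common block is commented out):
jobs:
- name: gateways
  release: cf-services-contrib
  templates:
  - name: postgresql_gateway_ng
  instances: 1
  resource_pool: default   # the name from the resource_pools block, not the vSphere pool name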