Couldn't find network parameters file and compatibility zone wasn't configured/isn't reachable, Corda - PostgreSQL

I'm stuck trying to integrate Azure PostgreSQL with Corda. I'm using Gradle's deployNodes task to create the network locally.
Below is the exception that I get.
[ERROR] 2021-09-06T06:53:02,290Z [main] internal.NodeStartupLogging. - Exception during node startup: Couldn't find network parameters file and compatibility zone wasn't configured/isn't reachable. - Couldn't find network parameters file and compatibility zone wasn't configured/isn't reachable. [errorCode=1917kd6, moreInformationAt=https://errors.corda.net/OS/4.5/1917kd6]
I had no issues when connected with a local instance of postgres installed on my machine.
Below is the Azure PostgreSQL configuration that produces the error.
extraConfig = [
    'dataSourceProperties' : [
        'dataSourceClassName': 'org.postgresql.ds.PGSimpleDataSource',
        'dataSource.url': 'jdbc:postgresql://azure_database_name.postgres.database.azure.com:5432/demo?searchpath=demo_schema&ssl=true',
        'dataSource.user': 'userN#me#azure_database_name',
        'dataSource.password': 'pa$$word'
    ]
]

Try the following:
Create a /drivers folder in your project root. This folder will contain the PostgreSQL JDBC driver jar (postgresql-XX.X.jar).
Update your build.gradle to add the location of the driver jar with jarDirs, like so:
extraConfig = [
    jarDirs : ['/your-project-absolute-path/drivers'],
    dataSourceProperties : [
        'dataSourceClassName': 'org.postgresql.ds.PGSimpleDataSource',
        'dataSource.url': 'jdbc:postgresql://azure_database_name.postgres.database.azure.com:5432/demo?searchpath=demo_schema&ssl=true',
        'dataSource.user': 'userN#me#azure_database_name',
        'dataSource.password': 'pa$$word'
    ]
]
Run ./gradlew deployNodes
Run ./build/nodes/runnodes
The node should start (it will take some time, since it has to connect to the Azure database).
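For orientation, here is a minimal sketch of where that extraConfig block sits inside the deployNodes task in build.gradle. The node name, ports and rpcSettings are placeholders taken from the standard Corda sample layout, not from the question; only the extraConfig contents mirror the answer above:
// Sketch of a deployNodes task carrying the extraConfig from above.
// Node name, ports and rpcSettings are placeholder values; notary and
// additional nodes are omitted for brevity.
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    node {
        name "O=PartyA,L=London,C=GB"
        p2pPort 10002
        rpcSettings {
            address("localhost:10003")
            adminAddress("localhost:10043")
        }
        extraConfig = [
            jarDirs : ['/your-project-absolute-path/drivers'],
            dataSourceProperties : [
                'dataSourceClassName': 'org.postgresql.ds.PGSimpleDataSource',
                'dataSource.url': 'jdbc:postgresql://azure_database_name.postgres.database.azure.com:5432/demo?searchpath=demo_schema&ssl=true',
                'dataSource.user': 'userN#me#azure_database_name',
                'dataSource.password': 'pa$$word'
            ]
        ]
    }
}
As in the answer above, the jarDirs entry is given as an absolute path to the drivers folder so the generated node.conf can find the PostgreSQL driver jar.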

Related

Beam SDK harness still trying to launch docker when I set environment_type to be `PROCESS`

According to the beam harness documentation:
PROCESS: User code is executed by processes that are automatically started by the runner on each worker node.
args = [
    "--runner=portableRunner",
    "--streaming",
    "--sdk_worker_parallelism=2",
    "--environment_type=PROCESS",
    "--environment_config={\"command\": \"/opt/apache/beam/boot\"}",
]
consumer_config = {
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "AWS_MSK_IAM",
    "sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
    "bootstrap.servers": bootstrap_servers,
}
with beam.Pipeline(options=PipelineOptions(args)) as p:
    data = p | "Reading messages from Kafka" >> ReadFromKafka(
        consumer_config=consumer_config,
        topics=topics,
        with_metadata=True
    )
    data | 'Writing to stdout' >> beam.Map(logging.info)
But when I run the code (deployed to k8s using flinkk8soperator), it is complaining:
Caused by: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
Wondering if I misunderstand anything? Thanks!
After some digging, I finally got the cross-language pipeline working without using DinD or DooD. Here are the steps:
Ensure both the job manager and the task manager mount a shared volume for artifact staging (this is required; otherwise the task manager will complain that it cannot find the submitted jar). A rough sketch of such a mount is shown after these steps.
Ensure your Docker image can run both Java and Python Beam code; here's what I did:
# python SDK
COPY --from=apache/beam_python3.7_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam/
# java SDK
COPY --from=apache/beam_java8_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam_java/
In the job, you'll need to start the expansion service with extra args, for example for KafkaIO:
from apache_beam.io.kafka import ReadFromKafka, default_io_expansion_service

ReadFromKafka(
    consumer_config=consumer_config,
    topics=[topic],
    with_metadata=False,
    expansion_service=default_io_expansion_service(
        append_args=[
            '--defaultEnvironmentType=PROCESS',
            "--defaultEnvironmentConfig={\"command\":\"/opt/apache/beam_java/boot\"}",
        ]
    )
)
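For the shared volume mentioned in step 1, the idea is simply that the job-manager and task-manager pods mount the same directory. A rough, hypothetical Kubernetes fragment follows; the claim name, container name and mount path are placeholders, so adapt it to however your Flink operator exposes pod templates:
# Apply the same volume and volumeMount to BOTH the job manager and the task
# manager pod specs so they share the artifact staging directory.
# "beam-staging-pvc" and "/tmp/beam-artifact-staging" are placeholder names.
volumes:
  - name: beam-staging
    persistentVolumeClaim:
      claimName: beam-staging-pvc
containers:
  - name: flink-main-container
    volumeMounts:
      - name: beam-staging
        mountPath: /tmp/beam-artifact-staging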
Your portable execution relies on xLang support, which by default requires starting the Java SDK harness with Docker. Your cluster doesn't have Docker installed.

How do I set node definition from docker plugin?

I am trying to set up Docker containers as nodes with the following custom mapping:
hostname.selector=docker:IPAddress
node.name.selector=docker:Name
username.selector=root
osFamily.selector=Docker
ssh-authentication=password
ssh-password-storage-path=keys/${node.hostname}/${node.username}
node.ssh-authentication.selector=password
docker-shell.default=bash
I always get this error message:
Failed: AuthenticationFailure: Authentication failure connecting to node: "xxxxxx". Make sure your resource definitions and credentials are up to date.
Set the Docker node executor: go to Project Settings > Edit Configuration > Default Node Executor tab, select "docker-container-node-executor", and save.
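If you would rather make the same change in the project configuration file instead of the GUI, it should boil down to one property; the key name follows standard Rundeck project configuration and the provider name is taken from the answer above, so double-check both against your Rundeck version:
# project.properties: set the default node executor for the project
# (verify key and provider names for your Rundeck/plugin versions)
service.NodeExecutor.default.provider=docker-container-node-executor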

How to register app from private repo in Spring Cloud dataflow 2.6.1

I'm using SCDF 2.6.1 on OpenShift 3, and I'm facing an error while registering an app; the error log is below:
java.lang.NullPointerException: null
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getRegistryRequest(DefaultContainerImageMetadataResolver.java:162)
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getImageLabels(DefaultContainerImageMetadataResolver.java:110)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.resolvePortNamesFromContainerImage(BootApplicationConfigurationMetadataResolver.java:215)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.listPortNames(BootApplicationConfigurationMetadataResolver.java:163)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.getInfo(AppRegistryController.java:193)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.info(AppRegistryController.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I checked the line of code at DefaultContainerImageMetadataResolver.java:162:
// Convert the image name into a well-formed ContainerImage
ContainerImage containerImage = this.containerImageParser.parse(imageName);
// Find a registry configuration that matches the image's registry host
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
// Retrieve a registry authorizer that supports the configured authorization type.
RegistryAuthorizer registryAuthorizer = this.registryAuthorizerMap.get(registryConf.getAuthorizationType());
I'm pretty sure the error occurs because registryConf is null, as returned by
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
How do I put my private repo URI into registryConfigurationMap?
I have tried putting an imagePullSecret registered with the private repo into the deployment.yml, but it doesn't seem to work, because in the startup log I still see:
2020-09-03 04:55:24.111 INFO 1 --- [ main] urationMetadataResolverAutoConfiguration :
Final Registry Configurations: {registry-1.docker.io=RegistryConfiguration{registryHost='registry-1.docker.io', user='null', secret='****'', authorizationType=dockeroauth2, manifestMediaType='application/vnd.docker.distribution.manifest.v2+json', disableSslVerification='false',
extra={registryAuthUri=https://auth.docker.io/token?service=registry.docker.io&scope=repository:{repository}:pull&offline_token=1&client_id=shell }}}
The only place where the SCDF server downloads a container image layer is when it looks up app metadata.
Currently, this is configured to use the Docker registry host (as this is where all the out-of-the-box applications are hosted).
If you want to override this, you can set the corresponding registry-configuration property values at server startup and proceed.
Keep in mind that this configuration is only needed to download the app metadata layer of the image, not the entire container image, on the SCDF server side.
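As a rough sketch of such an override, the server could be started with properties along these lines. The property prefix and key names are my assumption, inferred from the RegistryConfiguration entries printed in the startup log, so verify them against the SCDF 2.6.x container-registry documentation before relying on them:
# Hypothetical startup properties declaring a private registry named "myregistry".
# Host, authorization type, user and secret are placeholder values.
spring.cloud.dataflow.container.registry-configurations.myregistry.registry-host=myregistry.example.com
spring.cloud.dataflow.container.registry-configurations.myregistry.authorization-type=basicauth
spring.cloud.dataflow.container.registry-configurations.myregistry.user=myuser
spring.cloud.dataflow.container.registry-configurations.myregistry.secret=mysecret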

Bazel Kubernetes Object Error: no objects passed to apply (Google Container Registry)

I have a k8s_object rule to apply a deployment to my Google Kubernetes Cluster. Here is my setup:
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "image",
data = [":lib", "//:package.json"],
entry_point = ":index.ts",
)
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "k8s_deployment",
template = ":gateway.deployment.yaml",
kind = "deployment",
cluster = "gke_cents-ideas_europe-west3-b_cents-ideas",
images = {
"gcr.io/cents-ideas/gateway:latest": ":image"
},
)
But when I run bazel run //services/gateway:k8s_deployment.apply, I get the following error:
INFO: Analyzed target //services/gateway:k8s_deployment.apply (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //services/gateway:k8s_deployment.apply up-to-date:
bazel-bin/services/gateway/k8s_deployment.apply
INFO: Elapsed time: 0.113s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /snap/bin/kubectl --kubeconfig= --cluster=gke_cents-ideas_europe-west3-b_cents-ideas --context= --user= apply -f -
2020/02/12 14:52:44 Unable to publish images: unable to publish image gcr.io/cents-ideas/gateway:latest
error: no objects passed to apply
error: no objects passed to apply
It doesn't push the new image to the Google Container Registry.
Strangely, this worked a few days ago. But I didn't change anything.
Here is the full code if you need to take a closer look: https://github.com/flolude/cents-ideas/blob/069c773ade88dfa8aff492f024a1ade1f8ed282e/services/gateway/BUILD
Update
I don't know if this has something to do with the issue, but when I run
gcloud auth configure-docker
I get some warnings:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
WARNING: Your config file at [/home/flolu/.docker/config.json] contains these credential helper entries:
{
  "credHelpers": {
    "asia.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud"
  }
}
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
gcloud credential helpers already registered correctly.
I had google-cloud-sdk installed via snap. What I did to make it work was to remove google-cloud-sdk via
snap remove google-cloud-sdk
and then followed the installation instructions to reinstall it via
sudo apt install google-cloud-sdk
Now it works fine
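To confirm the fix, a quick check from the same shell (a hypothetical session, using only the commands already mentioned above plus a PATH lookup) should now find the credential helper and re-run the failing target without the warning:
# the credential helper should now resolve on PATH
which docker-credential-gcloud
# re-register the helpers; the PATH warning should be gone
gcloud auth configure-docker
# retry the deployment target
bazel run //services/gateway:k8s_deployment.apply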

HADOOPFS - Could not verify the base directory in streamsets

I am having issues running the pipeline within StreamSets; I can see the following error:
HADOOPFS_44 - Could not verify the base directory: 'java.net.ConnectException: Call From SDC/...... to ......failed on connection exception: java.net.ConnectException: Connection refused;
For more details see:
https://cwiki.apache.org/confluence/display/HADOOP2/ConnectionRefused
You should follow the steps given in the Hadoop wiki page linked in the error message. For some reason, the machine running StreamSets Data Collector cannot connect to the Hadoop cluster.
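As a quick sanity check from the Data Collector host, you can verify that the NameNode address and port used by the Hadoop FS destination are actually reachable. The hostname and port below are placeholders; use the values from your fs.defaultFS or from the error message:
# run these on the machine where SDC is running
nslookup namenode-host        # does the hostname resolve?
nc -vz namenode-host 8020     # is the NameNode RPC port reachable?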