Streamsets version from CLI - streamsets

I'm currently writing code that locally installs StreamSets extensions via a CLI. One of the checks I want to write is to ensure that the extension works with the StreamSets version that's installed locally.
When I try to query the version from the CLI, this is what I run into:
:) streamsets --version
Invalid sub-command
streamsets <SUB_COMMAND> [<SUB_COMMAND_ARGUMENTS>]
Sub-commands:
dc: Starts the Data Collector
create-dc: Creates new instance of Data Collector
cli: Data Collector CLI
jks-cs: Java Keystore Credential Store
stagelibs: Data Collector Stage library installer
show-vault-id: Shows the user-id to authorize in Vault
setup-mapr: Enables the MapR stage library for the detected
MapR installation.
:( streamsets cli --version
Found unexpected parameters: [--version]
:( streamsets dc --version
Invalid option(s)
streamsets dc <OPTIONS>
Options:
-verbose : prints out Data Collector detailed environment settings
-exec : starts Data Collector JVM within the same process of the script
-skipenvsourcing : skips the sourcing of the libexec/sdc-env.sh file
How do I figure out what version of streamsets is installed other than traversing the file system and finding the VERSION file?

You can get a Data Collector's info, including the version, with the CLI's ping command:
$ streamsets cli -U http://localhost:18630 ping
{
  "info" : {
    "built.date" : "2018-06-13T23:02Z",
    "version" : "3.4.0-SNAPSHOT",
    "built.repo.sha" : "9f803ed0f5167bbb91af2493b20c9a20b566106f",
    "source.md5.checksum" : "1d59dbb2281a974f4b192a28efe7624c",
    "built.by" : "pat"
  },
  "version" : "3.4.0-SNAPSHOT",
  "builtDate" : "2018-06-13T23:02Z",
  "builtBy" : "pat",
  "builtRepoSha" : "9f803ed0f5167bbb91af2493b20c9a20b566106f",
  "sourceMd5Checksum" : "1d59dbb2281a974f4b192a28efe7624c"
}
Note that system info is a synonym for ping.
This isn't (yet) documented. I'll open a doc Jira to do so.
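For the original use case (an install-time version check), a minimal Python sketch along these lines could shell out to that same ping command and read the version field; the URL, the helper name, and the lack of error handling are assumptions to adapt to your setup:

import json
import subprocess

def get_sdc_version(url="http://localhost:18630"):
    # Runs the documented CLI ping command against a locally running Data Collector
    # and returns the "version" value from the JSON shown above, e.g. "3.4.0-SNAPSHOT".
    output = subprocess.check_output(["streamsets", "cli", "-U", url, "ping"])
    return json.loads(output)["version"]

if __name__ == "__main__":
    print("Installed Data Collector version:", get_sdc_version())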

Related

Print editions using metaboss on Solana

I'm trying to create prints from a master edition (aka original edition) using metaboss from the console. The number of prints should be limited to a fixed number.
I followed this procedure:
1. Upload the image to Arweave: arloader upload image.jpg --with-sol --sol-keypair-path ~/.config/solana/id.json --ar-default-keypair --no-bundle
2. Create the JSON file with the NFT metadata:
{
    "name": "name_of__the_collection",
    "symbol": "token_of_the_collection",
    "uri": "https://arweave.net/[arweave_img_tx_id]",
    "seller_fee_basis_points": 0,
    "creators": [
        {
            "address": "address_of_the_creator_of_the_collection",
            "verified": false,
            "share": 100
        }
    ]
}
3. Mint the NFT:
metaboss mint one --keypair ~/.config/solana/id.json --nft-data-file ./metadata.json --max-editions='10'
4. Create all the prints:
metaboss mint missing-editions --account address_of_the_creator_of_the_collection
I have two issues:
On the Solana explorer, I get an error: error loading image
The 4th command returns an error: Error: failed to get account data
What's wrong?
[edit] Error 1: I used the uri key instead of the image key in the metadata. That's why the Solana explorer couldn't find the image.
Generally the process is good. There are some details that have to be aligned, though:
Regarding the missing image:
You have to upload the metadata JSON file, too. This is what you reference in the mint command.
Your metadata is not 100% valid. For example, you are missing the properties field. Have a look at the Token Metadata docs for more details.
Regarding metaboss mint missing-editions:
The account you specify with --account should not be the address of the creator of the collection, but the Master Edition address (the Master Edition is the NFT you minted in step 3).
Since the command runs a getProgramAccounts (GPA) call, you should add --timeout 120 and not use the default RPC; otherwise you will not get results. A combined sketch follows after these points.
If it still does not work, you can also run
metaboss mint editions --next-editions 9
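Put together, a rough sketch of the corrected flow might look like the following; the master edition address and the RPC URL are placeholders, and pointing at a non-default endpoint via a global --rpc flag placed before the subcommand is my assumption about how you'd wire that up:

arloader upload metadata.json --with-sol --sol-keypair-path ~/.config/solana/id.json --ar-default-keypair --no-bundle
metaboss --timeout 120 --rpc https://<your-rpc-endpoint> mint missing-editions --account <MASTER_EDITION_MINT_ADDRESS>

The first command uploads the (fixed) metadata JSON the same way the image was uploaded, so its Arweave URL is what goes into the uri field when minting.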
Please let me know in case of any uncertainties.

Beam SDK harness still trying to launch docker when I set environment_type to be `PROCESS`

According to the beam harness documentation:
PROCESS: User code is executed by processes that are automatically started by the runner on each worker node.
args = [
    "--runner=portableRunner",
    "--streaming",
    "--sdk_worker_parallelism=2",
    "--environment_type=PROCESS",
    "--environment_config={\"command\": \"/opt/apache/beam/boot\"}",
]
consumer_config = {
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "AWS_MSK_IAM",
    "sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
    "bootstrap.servers": bootstrap_servers,
}
with beam.Pipeline(options=PipelineOptions(args)) as p:
    data = p | "Reading messages from Kafka" >> ReadFromKafka(
        consumer_config=consumer_config,
        topics=topics,
        with_metadata=True
    )
    data | 'Writing to stdout' >> beam.Map(logging.info)
But when I run the code (deployed to k8s using flinkk8soperator), it is complaining:
Caused by: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
Am I misunderstanding anything? Thanks!
After some digging, I finally got cross-language support working without using DinD or DooD. Here are the steps:
Ensure both the job and task manager mount a shared volume for artifact staging. (This is required; otherwise the task manager will complain that it is unable to find the submitted jar.)
Ensure your Docker image can run both Java and Python Beam code; here's what I did:
# python SDK
COPY --from=apache/beam_python3.7_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam/
# java SDK
COPY --from=apache/beam_java8_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam_java/
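As a quick sanity check (the image tag below is a placeholder), both boot entrypoints should now exist at the paths referenced in the environment configs:

docker run --rm --entrypoint ls your-image:tag /opt/apache/beam/boot /opt/apache/beam_java/boot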
In the job, you'll need to start the expansion service with extra args, for example for KafkaIO:
from apache_beam.io.kafka import ReadFromKafka, default_io_expansion_service

ReadFromKafka(
    consumer_config=consumer_config,
    topics=[topic],
    with_metadata=False,
    expansion_service=default_io_expansion_service(
        append_args=[
            '--defaultEnvironmentType=PROCESS',
            "--defaultEnvironmentConfig={\"command\":\"/opt/apache/beam_java/boot\"}",
        ]
    )
)
Your portable execution relies on cross-language (xLang) support, which requires starting a Java SDK harness, by default via Docker. Your cluster doesn't have Docker installed.

Couldn't find network parameters file and compatibility zone wasn't configured/isn't reachable, corda

I'm stuck trying to integrate Azure Postgres with Corda. I'm using Gradle's deployNodes task to create the network locally.
Below is the exception that I get.
[ERROR] 2021-09-06T06:53:02,290Z [main] internal.NodeStartupLogging. - Exception during node startup: Couldn't find network parameters file and compatibility zone wasn't configured/isn't reachable. - Couldn't find network parameters file and compatibility zone wasn't configured/isn't reachable. [errorCode=1917kd6, moreInformationAt=https://errors.corda.net/OS/4.5/1917kd6]
I had no issues when connected to a local instance of Postgres installed on my machine.
Below is the Azure Postgres configuration that errors:
extraConfig = [
    'dataSourceProperties' : [
        'dataSourceClassName': 'org.postgresql.ds.PGSimpleDataSource',
        'dataSource.url': 'jdbc:postgresql://azure_database_name.postgres.database.azure.com:5432/demo?searchpath=demo_schema?ssl=true',
        'dataSource.user': 'userN#me#azure_database_name',
        'dataSource.password': 'pa$$word'
    ]
]
Try this:
Create a /drivers folder in your project root folder. This folder will contain the postgresql-XX.X.jar PostgreSQL JDBC driver.
Update your build.gradle to add the location of the driver jar with jarDirs, like so:
extraConfig = [
    jarDirs : ['/your-project-absolute-path/drivers'],
    dataSourceProperties : [
        'dataSourceClassName': 'org.postgresql.ds.PGSimpleDataSource',
        'dataSource.url': 'jdbc:postgresql://azure_database_name.postgres.database.azure.com:5432/demo?searchpath=demo_schema?ssl=true',
        'dataSource.user': 'userN#me#azure_database_name',
        'dataSource.password': 'pa$$word'
    ]
]
Run ./gradlew deployNodes
Run ./build/nodes/runnodes
The node should start (it will take some time since it has to connect to the Azure DB).
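A condensed shell sketch of those steps; the driver jar version and its source path are assumptions, so use whichever PostgreSQL JDBC jar matches your setup:

mkdir -p drivers
# copy the PostgreSQL JDBC driver into the folder referenced by jarDirs (path and version are examples)
cp /path/to/postgresql-42.2.x.jar drivers/
./gradlew deployNodes
./build/nodes/runnodes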

Bazel Kubernetes Object Error: no objects passed to apply (Google Container Registry)

I have a k8s_object rule to apply a deployment to my Google Kubernetes Cluster. Here is my setup:
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "image",
data = [":lib", "//:package.json"],
entry_point = ":index.ts",
)
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "k8s_deployment",
template = ":gateway.deployment.yaml",
kind = "deployment",
cluster = "gke_cents-ideas_europe-west3-b_cents-ideas",
images = {
"gcr.io/cents-ideas/gateway:latest": ":image"
},
)
But when I run bazel run //services/gateway:k8s_deployment.apply, I get the following error:
INFO: Analyzed target //services/gateway:k8s_deployment.apply (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //services/gateway:k8s_deployment.apply up-to-date:
bazel-bin/services/gateway/k8s_deployment.apply
INFO: Elapsed time: 0.113s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /snap/bin/kubectl --kubeconfig= --cluster=gke_cents-ideas_europe-west3-b_cents-ideas --context= --user= apply -f -
2020/02/12 14:52:44 Unable to publish images: unable to publish image gcr.io/cents-ideas/gateway:latest
error: no objects passed to apply
error: no objects passed to apply
It doesn't push the new image to the Google Container Registry.
Strangely, this worked a few days ago. But I didn't change anything.
Here is the full code if you need to take a closer look: https://github.com/flolude/cents-ideas/blob/069c773ade88dfa8aff492f024a1ade1f8ed282e/services/gateway/BUILD
Update
I don't know if this has something to do with the issue, but when I run
gcloud auth configure-docker
I get some warnings:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
WARNING: Your config file at [/home/flolu/.docker/config.json] contains these credential helper entries:
{
  "credHelpers": {
    "asia.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud"
  }
}
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
gcloud credential helpers already registered correctly.
I had google-cloud-sdk installed via snap. What I did to make it work was to remove google-cloud-sdk via
snap remove google-cloud-sdk
and then follow those instructions to install it via
sudo apt install google-cloud-sdk
Now it works fine.
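To confirm that the credential helper from the earlier warning is now resolvable, a quick check along these lines should succeed before re-running the Bazel target:

which docker-credential-gcloud
gcloud auth configure-docker
bazel run //services/gateway:k8s_deployment.apply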

How to compile native_pgsql in gammu?

I have configured gammurc and gammu --identify is working, but I receive an error when I run gammu-smsd -c smsdrc.
Here is the error:
gammu-smsd: the native_pgsql driver was not compiled in!
When I run gammu-smsd -v, it tells me this:
Compiled in Features :
OS Support :
- ALARM
- WINDOWS_SERVICE
- EVENT_LOG
Backend services :
- NULL
- FILES
- ODBC
I'm using Windows and pgsql.
How do I fix this problem, and how do I compile native_pgsql?
Thanks.
You can use the ODBC driver instead. It's the only driver that works without additional dependencies on Windows.
To have Gammu built with the native PostgreSQL driver on Windows, see our compilation instructions. Gammu will automatically search for the libraries in common locations, but you might end up needing to set POSTGRES_INCLUDE_DIR and POSTGRES_LIBRARY manually in CMake.
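If you do build from source, the CMake configure step might look roughly like this; the PostgreSQL paths below are assumptions and must point at your actual installation:

cmake .. -DPOSTGRES_INCLUDE_DIR="C:/Program Files/PostgreSQL/14/include" -DPOSTGRES_LIBRARY="C:/Program Files/PostgreSQL/14/lib/libpq.lib"
cmake --build . --config Release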