Google Data Transfer from S3-compatible sources MissingRegion - google-cloud-storage

Mainly wondering if anyone has encountered this before (and how to resolve it).
A transfer job from an S3-compatible source to Cloud Storage fails: the agent is unable to list objects from the source.
Agent logs show
W0111 06:49:13.002952 8 helpers.go:465 got unknown FailureType: UNKNOWN_FAILURE, MissingRegion: could not find region configuration
I have been following the guide https://cloud.google.com/storage-transfer/docs/s3-compatible
The source is a non-Amazon S3-compatible storage service; it has been tested with other S3 client tools (including gsutil).
The AWS SDK for Go appears to be in use:
https://aws.github.io/aws-sdk-go-v2/docs/configuring-sdk/
The SDK requires a region to operate, but does not ship with a default region. S3-compatible storage services (MinIO, Hitachi Content Platform) do not require a region.
I have tried passing the AWS_REGION and AWS_DEFAULT_REGION environment variables to the agent running in Docker, and mounting ~/.aws/config into the container.
The SDK may require both of the following to be done:
Set the AWS_REGION environment variable to the default Region.
Set the region explicitly by passing config.WithRegion as an argument to config.LoadDefaultConfig when loading configuration.
Thanks

Related

Azure Data Factory not Using Data Flow Runtime

I have an Azure Data Factory with a pipeline that I'm using to pick up data from an on-premises database and copy it to Cosmos DB in the cloud. I'm using a data flow step at the end to delete documents from the sink that don't exist in the source.
I have 3 integration runtimes set up:
AutoResolveIntegrationRuntime (default set up by Azure)
Self-hosted integration runtime (I set this up to connect to the on-premises database, so it's used by the source dataset)
Data flow integration runtime (I set this up to be used by the data flow step with a TTL setting)
The issue I'm seeing is that when I trigger the pipeline, the AutoResolveIntegrationRuntime is the one being used, so I'm not getting the optimisation I need from the data flow integration runtime with the TTL.
Any thoughts on what might be going wrong here?
Per my experience, only the AutoResolveIntegrationRuntime (the default set up by Azure) supports the optimization.
When we choose to run the data flow on a non-default integration runtime, the optimization is not available, and once an integration runtime has been created, we can't change its settings.
The Data Factory documentation doesn't say more about this. When I ran the pipeline, I found that the data flow runtime isn't used: no matter which integration runtime you use to connect to the dataset, the data flow will always run on the default Azure integration runtime.
A self-hosted integration runtime (SHIR) doesn't support data flow execution.

Cloud Run + Firebase Hosting region rewrites issue

I'm trying to use Firebase Hosting as a CDN in front of Cloud Run. Yesterday I was testing something in region europe-west1 and it went well. Today I'm trying to do the same for region europe-west4 and I'm getting an error that this region is not supported.
I switched back to europe-west1 and it worked.
Is this a bug, or is europe-west4 not supported?
=== Deploying to 'xxxxxxxx'...
i deploying hosting
Error: HTTP Error: 400, Cloud Run region `europe-west4` is not supported.
"rewrites": [
  {
    "source": "**",
    "run": {
      "serviceId": "web-client",
      "region": "europe-west4"
    }
  }
],
The same happens for the new asia-southeast1 region:
Error: HTTP Error: 400, Cloud Run region `asia-southeast1` is not supported.
From this info, here are the details regarding rewrites:
Firebase Hosting originates in us-central1, so when deploying to Cloud Run it's recommended to select the us-central1 region for a better First Contentful Paint score and quick loading of your website; but that kills the advantage of having a nearby region (really unfortunate for Google fanboys).
Example: if your location is India, your nearest available Cloud Run region is asia-southeast1 (Singapore), but we can't select asia-southeast1.
The request path would go like this:
you → India (CDN) → USA (Firebase) → Singapore (Cloud Run + async call to Firestore India) → USA → CDN → you (which is REALLY BAD in terms of latency).
you → India (CDN) → USA (Firebase) → us-central1 (Cloud Run + async call to Firestore India) → USA → CDN → you
(The static page will load FAST, but dynamic Firestore data on the web app will still load with really bad latency; we should select us-central1 for Firestore as well, which makes no use of the GCP products in your local region. It is really strange that Firebase Hosting is not available in at least the Americas, Europe and Asia-Pacific zones.)
Conclusion (as of this date):
The Cloud Run region rewrite issue for Firebase Hosting exists for many regions, but for optimal page-load results we should select us-central1; that is really unfortunate, and it is THE REAL PROBLEM compared to the rewrite issue. To avoid Firestore latency for non-US users, we should use Cloud Run / Cloud Functions with cache-control headers, so that data is cached at the CDN edge in your local or nearby region for fast loading. (We can't use the Firebase web SDK for this, since CDN caching isn't possible via the SDK; we should use Cloud Functions for Firebase or Cloud Run.)
Firebase Hosting to Cloud Run rewrite availability ( as of Aug 31, 2020)
Available:
us-central1,
us-east1,
asia-northeast1,
europe-west1
Not available:
asia-east1,
europe-north1,
europe-west4,
us-east4,
us-west1,
asia-southeast1
Please file a feature request for Firebase rewrite availability if it's not available for your Cloud Run region; Cloud Run and Firebase Hosting rewrites are not available together in at least some Americas, Europe and Asia-Pacific regions.
FYI: Cloud Firestore multi-region is also not available for Asia, in case using multi-region Firestore is your fix for Firebase Hosting and Cloud Run being locked to us-central1.
See also: Cloud Run region availability.
(Please comment if you get the rewrite access to any of the above mentioned region)
I actually managed to figure out a way to "fix" this: I changed my region to europe-west4 instead of my previous europe-west1, and that "fixed" my deployment problem.

Cloud SQL API [sql-component.googleapis.com] not enabled on project

I am running a Cloud Build trigger on a cloudbuild.yaml file in which I build a Docker container and then deploy it to Cloud Run. The error stack trace is as follows:
API [sql-component.googleapis.com] not enabled on project
The problem is that I have enabled both SQL and SQL Admin APIs in both projects (one for the cloud build and one for the database), which was confirmed in the console and in gcloud.
Here is the yaml code for the step I am referring to:
- name: 'gcr.io/cloud-builders/gcloud'
  args: [
    'beta',
    'run',
    'deploy',
    'MY_NAME',
    '--image', 'gcr.io/MY_PROJECT/MY_IMAGE',
    '--region', 'MY_REGION',
    '--platform', 'managed',
    '--set-cloudsql-instances', 'MY_CONNECTION_NAME',
    '--set-env-vars', 'NODE_ENV=production,INSTANCE_CONNECTION_NAME=MY_CONNECTION_NAME,SQL_USER=MY_USER,SQL_PASSWORD=MY_PASSWORD,SQL_NAME=MY_SCHEMA,TOPIC_NAME=MY_TOPIC'
  ]
Any suggestions?
Thanks.
P.S.: As per Eespinola's suggestion, I checked and confirmed I am running Google Cloud SDK 254.0.0.
P.S. 2: I have also tried to create a project from scratch but ended up with the same results.
OK, so as per the same thread eespinola posted (see above), the Cloud Build gcloud step will be updated to the Cloud SDK 254.0.0 release in the near future (the actual date may or may not be posted in that thread). Until then, the alternative is to use the YAML file without the --add-cloudsql-instances flag and add the connection manually in the UI (I still have not tried this, but it should work as per Google's development team).
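Under that workaround, the deploy step above would be trimmed to something like the following sketch (placeholders are the same ones from the question; the Cloud SQL connection is then attached afterwards in the Cloud Run console):

```yaml
- name: 'gcr.io/cloud-builders/gcloud'
  args: [
    'beta',
    'run',
    'deploy',
    'MY_NAME',
    '--image', 'gcr.io/MY_PROJECT/MY_IMAGE',
    '--region', 'MY_REGION',
    '--platform', 'managed',
    '--set-env-vars', 'NODE_ENV=production,INSTANCE_CONNECTION_NAME=MY_CONNECTION_NAME,SQL_USER=MY_USER,SQL_PASSWORD=MY_PASSWORD,SQL_NAME=MY_SCHEMA,TOPIC_NAME=MY_TOPIC'
  ]
```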

gcloud init cli command :ERROR: gcloud crashed (ValueError): the query contains a null character

I am trying to initialize my gcloud settings for a project, but when I run the gcloud init command, it reports that gcloud crashed.
It was previously working, but all of a sudden today this command crashed. I tried 'gcloud auth login' and pasted the credentials, but it still gives the same error:
gcloud init
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
core:
disable_usage_reporting: 'False'
Pick configuration to use:
[1] Re-initialize this configuration [default] with new settings
[2] Create a new configuration
Please enter your numeric choice: 1
Your current configuration has been set to: [default]
You can skip diagnostics next time by using the following flag:
gcloud init --skip-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).
ERROR: gcloud crashed (ValueError): the query contains a null character
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
The expected output should look like this:
gcloud init
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
core:
account: prajakta#gmail.com
disable_usage_reporting: 'False'
project: default-1234
Pick configuration to use:
[1] Re-initialize this configuration [default] with new settings
[2] Create a new configuration
Please enter your numeric choice: 1
Your current configuration has been set to: [default]
You can skip diagnostics next time by using the following flag:
gcloud init --skip-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).
Choose the account you would like to use to perform operations for
this configuration:
[1] prajakta#gmail.com
[2] Log in with a new account
Please enter your numeric choice: 1
Pick cloud project to use:
[1] default-1234
[2] abc-project
[3] Create a new project
Please enter numeric choice or text value (must exactly match list
item): 1
Your current project has been set to: [default-1234].
Your Google Cloud SDK is configured and ready to use!
From the output that you included, it appears to have completed successfully:
Your Google Cloud SDK is configured and ready to use!
Are you able to use any commands?
gcloud config list
gcloud auth list
gcloud projects list
It's not clear which operating system you're using, but it's probable that either some dependent piece of software was upgraded and caused the breakage, and/or the Cloud SDK (aka gcloud) was upgraded on your machine and is now broken.
You may be best-placed to contact Google Cloud Support, or if you don't have a support contract, to file an issue on Google issue tracker for gcloud here:
https://issuetracker.google.com/issues/new?component=187143
NB: You've included your email address and several of your projects in your question; you may wish to redact these, as they're not necessary to answer the question.

fabric-samples:balance-transfer example - v1.1.0 - Missing instructions?

In the fabric-samples balance-transfer example (v1.1.0), on a customized network with cryptogen-generated crypto, the fabric-client-kv* contents fail to be created. Are instructions missing? Please describe what needs to be done to create these folders and their contents in the sample's root directory, and in the /tmp directory for the wallet setup.
Created a customized network
Generated cryptogen content for the customized network
Brought up the network and verified it to be running correctly
Adapted the runApps.sh and testAPIs.sh scripts to use customized network with its crypto
The user enrollment and registration process failed due to missing fabric-client-kv* contents.
This is not an issue when the sample itself is run; the fabric-client-kv* contents are generated or regenerated.
What is missing and what needs to be done to succeed?
If you regenerate the certificates same should be updated in docker-compose and network-config. If your adding a new organization to the network, Need to create a network connection profile configuration which will have the setting for keyValueStore and cryptoStore. In the balance transfer example crypto materials are stored in tmp folder, In this case, if you restart the system you will lose those materials, You can change these configurations on org*.yaml.