GCloud Recommendations AI import catalog error - gcloud

Whether I used the GCloud console or the command line, when I tried to import Merchant Center data from BigQuery it showed:
Import failed: Failed to fetch service account credentials for project:
xxxx. Importing from BQ will fail.

Related

Did anyone manage to import an Azure storage blob using Pulumi? If so, please let me know the command.

I am trying to import the Azure storage blob state into Pulumi using the Pulumi CLI.
I tried the command below:
pulumi import --yes azure-native:storage:Blob testblob
It threw the following error:
error: Preview failed: "resourceGroupName" not found in resource state
Please let me know if anyone has been able to successfully import an Azure storage blob resource into Pulumi.
thanks,
kumar
Expected result: the blob is imported successfully.
Actual result: the import fails.
If you look at the docs for a Blob resource, there's an import section (this section exists on all resources).
The actual command you'll need is:
pulumi import azure-native:storage:Blob myresource1 /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/blobServices/default/containers/{containerName}/blobs/{blobName}
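For example, with purely hypothetical subscription, resource group, storage account, container, and blob names, the command might look like this:
pulumi import azure-native:storage:Blob testblob /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageacct/blobServices/default/containers/mycontainer/blobs/testblob
Pulumi should then print the generated resource definition, which you can copy into your program.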

How to import a SQL file into Google Cloud SQL with binary mode enabled?

I have a database that is giving this error:
ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected.
I'm importing the database through the console with gcloud sql import sql mydb gs://my-path/mydb.sql --database=mydb, but I don't see any flag for binary mode in the documentation. Is it possible at all?
Optional: is there a way to set this flag when importing through MySQL Workbench? I haven't seen anything about it there either, but maybe I'm missing some setting. If there is a way to set that flag, then I can import my database through MySQL Workbench.
Thank you.
Depending on where the source database is hosted, on Cloud SQL or in an on-premises environment, the proper flags are set during the export so that the dump file is compatible with the target database.
Since you would like to import a file that was exported from an on-premises environment, mysqldump is the suggested way to perform the export.
First, create a dump file as suggested in the documentation (a command sketch follows these points). Make sure to pay attention to the following two points:
Do not export customer-created MySQL users; this will cause the import into the new instance to fail. Instead, manually create the MySQL users you need on the new instance.
Make sure that you have configured the appropriate flags so that the dump file contains all the necessary details, e.g. triggers, stored procedures, etc.
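A minimal sketch of such a dump command, assuming an on-premises MySQL source; the host, user, and database names are placeholders, and the flag choice is an assumption based on common Cloud SQL import guidance:
# --hex-blob dumps binary columns as hex, avoiding the ASCII '\0' / --binary-mode error;
# --routines and --triggers include stored procedures and triggers;
# --set-gtid-purged=OFF avoids GTID statements that the target may reject.
mysqldump --host=[SOURCE_HOST] --user=[USERNAME] --password \
  --databases mydb --hex-blob --single-transaction \
  --routines --triggers --set-gtid-purged=OFF > mydb.sql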
Then, create a Cloud Storage Bucket and upload the dump file to the bucket.
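For example, with a placeholder bucket name and location:
gsutil mb -l [LOCATION] gs://[BUCKET_NAME]
gsutil cp mydb.sql gs://[BUCKET_NAME]/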
Before proceeding with the import, grant the Storage Object Admin role to the service account of the target Cloud SQL instance. You may do that with the following command:
gsutil iam ch serviceAccount:[SERVICE-ACCOUNT]:objectAdmin gs://[BUCKET-NAME]
You may locate the aforementioned Service Account in the Cloud SQL instance Overview, or by running the following command:
gcloud sql instances describe [INSTANCE_NAME]
The service account will be listed in the serviceAccountEmailAddress field.
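To print just that field, a small sketch using gcloud's --format flag:
gcloud sql instances describe [INSTANCE_NAME] --format="value(serviceAccountEmailAddress)"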
Now you are able to do the import either from the Console, with the gcloud command, or via the REST API.
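With gcloud, for instance, the import sketch would be (instance, bucket, and database names are placeholders):
gcloud sql import sql [INSTANCE_NAME] gs://[BUCKET_NAME]/mydb.sql --database=[DATABASE_NAME]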
More details in Google documentation
Best Practices for importing/exporting data

Reading a bucket from another project in Cloud Shell

Because Firestore does not have a way to clone projects, I am attempting to achieve the equivalent by copying data from one project into a GCS bucket and reading it into another project.
Specifically, using Cloud Shell I populate the bucket with data exported from Firestore project A and am attempting to import it into Firestore project B. The bucket belongs to Firestore project A.
I am able to export the data from Firestore project A without any issue. When I attempt to import into Firestore project B with the Cloud Shell command
gcloud beta firestore import gs://bucketname
I get the error message:
project-b@appspot.gserviceaccount.com does not have storage.buckets.get access to bucketname
I have searched high and low for a way to grant project B the storage.buckets.get permission, but am not finding anything that works.
Can anyone point me to how this is done? I have been through the Google docs half a dozen times and am either not finding the right information or not understanding the information that I find.
Many thanks in advance.
To import from project A into project B, the service account in project B must have the right permissions on the Cloud Storage bucket in project A.
In your case, the service account is:
project-ID@appspot.gserviceaccount.com
To grant the right permissions you can use this command on the Cloud Shell of project B:
gsutil acl ch -u project-ID@appspot.gserviceaccount.com:OWNER gs://[BUCKET_NAME]
gsutil -m acl ch -r -u project-ID@appspot.gserviceaccount.com:OWNER gs://[BUCKET_NAME]
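If the bucket has uniform bucket-level access enabled (so ACL commands are rejected), an IAM-based alternative might look like the following; the role choice here is an assumption that simply grants broad access to the bucket:
gsutil iam ch serviceAccount:project-ID@appspot.gserviceaccount.com:roles/storage.admin gs://[BUCKET_NAME]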
Then, you can import using the firestore import:
gcloud beta firestore import gs://[BUCKET_NAME]/[EXPORT_PREFIX]
I was not able to get the commands provided by "sotis" to work; however, his answer certainly got me heading down the right path. The commands that eventually worked for me were:
gcloud config set project [SOURCE_PROJECT_ID]
gcloud beta firestore export gs://[BUCKET_NAME]
gcloud config set project [TARGET_PROJECT_ID]
gsutil acl ch -u [RIGHTS_RECIPIENT]:R gs://[BUCKET_NAME]
gcloud beta firestore import gs://[BUCKET_NAME]/[TIMESTAMPED_DIRECTORY]
Where:
* SOURCE_PROJECT_ID = the name of the project you are cloning
* TARGET_PROJECT_ID = the destination project for the cloning
* RIGHTS_RECIPIENT = the email address of the account to receive read rights
* BUCKET_NAME = the name of the bucket that stores the data.
Please note, you have to manually create this bucket before you export to it.
Also, make sure the bucket is in the same geographic region as the projects you are working with.
* TIMESTAMPED_DIRECTORY = the name of the data directory automatically created by the "export" command
I am sure that this is not the only way to solve the problem, however it worked for me and appears to be the "shortest path" solution I have seen.

Connecting to a Postgres Heroku DB from AWS Glue, SSL issue

I'm trying to connect to my Heroku DB and I'm getting the following series of errors related to SSL:
SSL connection to data store using host matching failed. Retrying without host matching.
SSL connection to data store failed. Retrying without SSL.
Check that your connection definition references your JDBC database with correct URL syntax, username, and password. org.postgresql.util.PSQLException: Connection attempt timed out.
I managed to connect to the DB with DBeaver and had similar SSL problems until I set the SSL Factory to org.postgresql.ssl.NonValidatingFactory, but Glue doesn't offer any SSL options.
The DB is actually hosted on AWS; the connection URL is:
jdbc:postgresql://ec2-52-19-160-2.eu-west-1.compute.amazonaws.com:5432/something
(P.S. the AWS Glue forums are useless! They don't seem to be answering anyone's questions.)
I was having the same issue; it seems that Heroku requires a newer JDBC driver than the one Amazon provides. See this thread:
AWS Data Pipelines with a Heroku Database
Also, it seems that you can use the JDBC driver directly from your Python scripts. See here:
https://dzone.com/articles/extract-data-into-aws-glue-using-jdbc-drivers-and
So it seems like you need to download a new driver, upload it to s3, then manually use it in your scripts as mentioned here:
https://gist.github.com/saiteja09/2af441049f253d90e7677fb1f2db50cc
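For the upload step, a sketch with a placeholder bucket path and driver version might be:
aws s3 cp postgresql-42.2.8.jar s3://[YOUR_BUCKET]/jars/postgresql-42.2.8.jar
The S3 path to the jar then goes into the Glue job's dependent JARs path (or is referenced from the script as in the gist above).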
Good luck!
UPDATE: I was able to use the following code snippet in a Glue Job to connect to the data. I had to upload the Postgres driver to S3 and then add it to the path for my Glue Job. Also, make sure that either the jars are public or you've configured the IAM user's policy so that they have access to the bucket.
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.transforms import *
glueContext = GlueContext(SparkContext.getOrCreate())
spark = glueContext.spark_session
# Read the Heroku Postgres table over JDBC, skipping certificate validation
# via the non-validating SSL factory.
source_df = spark.read.format("jdbc") \
    .option("url", "jdbc:postgresql://<hostname>:<port>/<database>") \
    .option("dbtable", "<table>").option("driver", "org.postgresql.Driver") \
    .option("sslfactory", "org.postgresql.ssl.NonValidatingFactory") \
    .option("ssl", "true").option("user", "<username>") \
    .option("password", "<password>").load()
dynamic_dframe = DynamicFrame.fromDF(source_df, glueContext, "dynamic_df")

Google Cloud SQL import - HTTPError 403: Insufficient Permission

I would like to import a MySQL dump file into a MySQL database instance with the gcloud import tool, but I am getting an error:
ubuntu@machine:~/sql$ gcloud sql instances import sql-56-test-8ef0cb104575 gs://dbf/bt_ca_dev_tmp-2017-01-19.sql.gz
ERROR: (gcloud.sql.instances.import) HTTPError 403: Insufficient Permission
What exact permissions am I missing? I can create a SQL instance with the registered service account, but I am not able to import data.
You have an issue with permissions.
Create a bucket if you don't have one:
`gsutil mb -p [PROJECT_NAME] -l [LOCATION_NAME] gs://[BUCKET_NAME]`
Describe the SQL instance you are importing into and copy its service account:
`gcloud sql instances describe [INSTANCE_NAME]`
Add the service account to the bucket ACL as a writer
`gsutil acl ch -u [SERVICE_ACCOUNT_ADDRESS]:W gs://[BUCKET_NAME]`
Add the service account to the import file as a reader
`gsutil acl ch -u [SERVICE_ACCOUNT_ADDRESS]:R gs://[BUCKET_NAME]/[IMPORT_FILE_NAME]`
Import the file:
gcloud sql import csv [INSTANCE_NAME] gs://[BUCKET_NAME]/[FILE_NAME] \
--database=[DATABASE_NAME] --table=[TABLE_NAME]
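Since the file in the original question is a gzipped SQL dump rather than a CSV, the corresponding sketch would use the sql sub-command instead:
gcloud sql import sql [INSTANCE_NAME] gs://[BUCKET_NAME]/[DUMP_FILE_NAME].sql.gz --database=[DATABASE_NAME]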
According to the documentation, this should do the trick. The weird thing is that it needs write permissions:
gsutil iam ch serviceAccount:"${SERVICE_ACCOUNT}":roles/storage.legacyBucketWriter gs://${BUCKET_NAME}
gsutil iam ch serviceAccount:"${SERVICE_ACCOUNT}":roles/storage.objectViewer gs://${BUCKET_NAME}
It's a permission problem with the Cloud SQL service account in the Google Storage bucket you're trying to use. To solve it you need to grant the Storage Legacy Bucket Reader, Storage Legacy Object Owner, and Storage Object Viewer roles to the service account email that you get from
gcloud sql instances describe <YOUR_DB_NAME> | grep serviceAccountEmailAddress
To do this, go to Cloud Storage / your bucket in the Google Cloud Console and, under Permissions, add the serviceAccountEmailAddress. Finally, add the roles you need.
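If you prefer the command line, a rough equivalent of those Console steps (the bucket name and service account email are placeholders, and the role list mirrors the roles above) would be:
gsutil iam ch serviceAccount:[SERVICE_ACCOUNT_EMAIL]:roles/storage.legacyBucketReader gs://[BUCKET_NAME]
gsutil iam ch serviceAccount:[SERVICE_ACCOUNT_EMAIL]:roles/storage.legacyObjectOwner gs://[BUCKET_NAME]
gsutil iam ch serviceAccount:[SERVICE_ACCOUNT_EMAIL]:roles/storage.objectViewer gs://[BUCKET_NAME]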