Snowflake external stage - getting access denied error

I have created an external stage for my S3 bucket in Snowflake. When I try to access the stage using
LIST @database.schema.stagename;
I get an access denied error. We have since checked the S3 bucket policy and fixed the issue.
But I want to check the logs in Snowflake: will any Snowflake log give details about this issue, and if so, where do I have to look?

Open a support ticket (support@snowflake.com) to get details about a specific incident or error.
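For reference, failed statements (including a failed LIST on a stage) are also recorded in Snowflake's query history, which you can inspect yourself before opening a ticket. Below is a minimal sketch using the snowflake-connector-python package; the account, user, password, and warehouse are hypothetical placeholders, and querying ACCOUNT_USAGE requires a role with access to the SNOWFLAKE database.

# Minimal sketch: pull recent failed queries from ACCOUNT_USAGE.QUERY_HISTORY.
# Account, user, password, and warehouse below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical
    user="my_user",            # hypothetical
    password="my_password",    # hypothetical
    warehouse="my_warehouse",  # hypothetical
)

cur = conn.cursor()
# ERROR_CODE / ERROR_MESSAGE capture why a statement failed, e.g. the
# S3 "Access Denied" returned when listing the external stage.
cur.execute("""
    SELECT query_text, error_code, error_message, start_time
    FROM snowflake.account_usage.query_history
    WHERE error_code IS NOT NULL
      AND query_text ILIKE 'LIST%'
    ORDER BY start_time DESC
    LIMIT 20
""")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()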

Related

Facing issue while using synapsesql (####.dfs.core.windows.net not found)

I was working on connecting a dedicated SQL pool (formerly SQL DW) to Synapse Spark notebooks using spark.read.synapsesql(). I am able to write data as a table but not able to read data from the table.
val df: DataFrame = spark.read
    .option(Constants.SERVER, "XXXXX.database.windows.net")
    .option(Constants.USER, "XXXXX")
    .option(Constants.PASSWORD, "XXXXX")
    .option(Constants.TEMP_FOLDER, "abfss://xxxxx@xxxx.dfs.core.windows.net/Tempfolder/")
    .synapsesql("dedicated-poc.dbo.customer")
com.microsoft.spark.sqlanalytics.SQLAnalyticsConnectorException: com.microsoft.sqlserver.jdbc.SQLServerException: External file access failed due to internal error: 'Error occurred while accessing HDFS: Java exception raised on call to HdfsBridge_Connect.
Java exception message: Configuration property XXXXXXXX.dfs.core.windows.net not found.' at com.microsoft.spark.sqlanalytics.ItemsScanBuilder$PlanInputPartitionsUtilities$.extractDataAndGetLocation(ItemsScanBuilder.scala:183)
Permissions: we have Owner and Storage Blob Data Contributor access for the Synapse workspace and the specific user.
To resolve the above exception, please try the below:
Try updating the code by adding the following (a fuller sketch follows the reference links below):
spark._jsc.hadoopConfiguration().set("fs.azure.account.key.xxxxx.dfs.core.windows.net", "xxxx==")
To read data from the table, try including a date data type in the SQL pool and then read.
Note:
Synapse RBAC roles do not grant permissions to create or manage SQL pools, Apache Spark pools, and integration runtimes in Azure Synapse workspaces. The Azure Owner or Azure Contributor role on the resource group is required for these actions.
Give the Azure Owner role on the resource group rather than on the Synapse workspace and the specific user.
Check whether any firewall rule is blocking connectivity and disable it.
If the issue still persists, raise an Azure support request.
For more detail, please refer to the links below:
Azure Synapse RBAC roles - Azure Synapse Analytics | Microsoft Docs
azure databricks - File read from ADLS Gen2 Error - Configuration property xxx.dfs.core.windows.net not found - Stack Overflow
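Here is a minimal PySpark sketch of the first suggestion above, assuming a Synapse Spark notebook where spark is the active session; the storage account, container, and account key are hypothetical placeholders.

# Minimal sketch: make the ADLS Gen2 account key visible to the Hadoop layer
# the connector uses when it stages data in TEMP_FOLDER. `spark` is the
# SparkSession provided by the Synapse notebook.
storage_account = "xxxx"    # hypothetical ADLS Gen2 account used for TEMP_FOLDER
container = "xxxxx"         # hypothetical container in that account
account_key = "xxxx=="      # hypothetical storage account access key

# The missing property is exactly what the
# "Configuration property ...dfs.core.windows.net not found" error refers to.
spark._jsc.hadoopConfiguration().set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    account_key,
)
# Setting it on the session config as well does not hurt.
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    account_key,
)

# Optional sanity check that the temp folder is reachable from the notebook.
from notebookutils import mssparkutils
mssparkutils.fs.ls(f"abfss://{container}@{storage_account}.dfs.core.windows.net/Tempfolder/")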

Google Cloud Composer Environment Setup Error: Connect to Google Cloud Storage

I am trying to create an environment in Google Cloud Composer. Link here
When creating the environment from scratch and selecting all the default fields, the following error appears:
CREATE operation on this environment failed 22 hours ago with the following error message:
CREATE operation failed. Composer Agent failed with: Cloud Storage Assertions Failed: Unable to write to GCS bucket.
GCS bucket write check failed.
I then created a Google Cloud Storage bucket within the same project to see if that would help, and the same error still appears.
Has anyone been able to successfully create a Google Cloud Composer environment? If so, please provide guidance on why this error message continues to appear.
Update: It seems I need to update permissions to allow access. Here is a screenshot of my permissions page, but it is not editable.
It seems like you haven't given the required IAM policies to the service account. I would advise you to read more about IAM policies on Google Cloud here.
When it comes to the permissions of the bucket, a role like Storage Object Admin might fit your needs.
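As a quick way to reproduce the "GCS bucket write check" outside Composer, here is a minimal sketch using the google-cloud-storage client; the bucket name and service-account key path are hypothetical, and the credentials should belong to the Composer environment's service account.

# Minimal sketch: verify that the Composer environment's service account
# can write to the target GCS bucket. Bucket name and key file path are
# hypothetical placeholders.
from google.cloud import storage
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    "composer-sa-key.json"  # hypothetical key for the Composer service account
)
client = storage.Client(credentials=credentials, project=credentials.project_id)

bucket = client.bucket("my-composer-bucket")   # hypothetical bucket name
blob = bucket.blob("composer-write-check.txt")
blob.upload_from_string("write check")         # raises Forbidden (403) if the role is missing
print("Write succeeded; the service account has sufficient bucket permissions.")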

How to restore permission when I am the admin of the project?

After mistakenly adding myself to a wrong role, I am no longer able to access "IAM & Admin".
While trying to extract BigQuery tables to Google Cloud Storage, I received the following error:
bq extract --compression GZIP Dataset.TableName gs://tableName_*.csv.gz
Waiting on bqjob_r4250d44ecf982a22_00000169c666b451_1 ... (23s) Current status: DONE
BigQuery error in extract operation: Error processing job 'Dataset:bqjob_r4250d44ecf982a22_00000169c666b451_1': Access Denied: BigQuery BigQuery: Permission denied while writing data.
I thought I might have a permission issue, so I changed my role in Google Cloud. I don't remember which role I changed it to; it may have been Owner or Creator.
After that, I am not able to access the project in BigQuery, or the "IAM & Admin" page.
bq extract --compression GZIP Dataset.TableName gs://tableName_*.csv.gz
BigQuery error in extract operation: Access Denied: Project projectName: The user myemail@xxx.com does not have bigquery.jobs.create permission in project projectName.
Since I am the admin of this account, there is no other person who has access. What options do I have to restore access?
Thank you in advance.
For this case, please open a case through the billing support form, and for "How can we help?" select "Other": https://support.google.com/cloud/contact/cloud_platform_billing
This way, I can follow up with you in private and get the details necessary to move forward. Please let me know once you submit the case and what your case number is so I can follow up.
Edit: For anyone else viewing this issue, the above method is just for this case and not the correct avenue of support for this problem. If you have a support package and you have this issue, please reach out through normal channels.
Thanks,
Hunter,
GCP Billing
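As an aside, the same extract can be expressed with the BigQuery Python client (project, dataset, table, and bucket names below are hypothetical); it fails with the same error whenever the caller lacks bigquery.jobs.create on the project.

# Minimal sketch of the bq extract command using the BigQuery Python client.
# Project, dataset, table, and bucket names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

job_config = bigquery.ExtractJobConfig(
    compression="GZIP",
    destination_format="CSV",
)

# The wildcard lets BigQuery shard large tables across multiple files.
extract_job = client.extract_table(
    "my-project.Dataset.TableName",       # hypothetical table
    "gs://my-bucket/tableName_*.csv.gz",  # hypothetical destination bucket
    job_config=job_config,
)
extract_job.result()  # raises Forbidden if bigquery.jobs.create is missing
print("Extract finished:", extract_job.state)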

Databricks fails accessing a Data Lake Gen1 while trying to enumerate a directory

I am using (well... trying to use) Azure Databricks and I have created a notebook.
I would like the notebook to connect to my Azure Data Lake (Gen1) and transform the data. I followed the documentation and put the code in the first cell of my notebook:
spark.conf.set("dfs.adls.oauth2.access.token.provider.type", "ClientCredential")
spark.conf.set("dfs.adls.oauth2.client.id", "**using the application ID of the registered application**")
spark.conf.set("dfs.adls.oauth2.credential", "**using one of the registered application keys**")
spark.conf.set("dfs.adls.oauth2.refresh.url", "https://login.microsoftonline.com/**using my-tenant-id**/oauth2/token")
dbutils.fs.ls("adl://**using my data lake uri**.azuredatalakestore.net/tenantdata/events")
The execution fails with this error:
com.microsoft.azure.datalake.store.ADLException: Error enumerating directory /
Operation null failed with exception java.io.IOException : Server returned HTTP response code: 400 for URL: https://login.microsoftonline.com/using my-tenant-id/oauth2/token
Last encountered exception thrown after 5 tries. [java.io.IOException,java.io.IOException,java.io.IOException,java.io.IOException,java.io.IOException] [ServerRequestId:null]
at com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1169)
at com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectoryInternal(ADLStoreClient.java:558)
at com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:534)
at com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:398)
at com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:384)
I have given the registered application the Reader role on the Data Lake:
Question
How can I allow Spark to access the Data Lake?
Update
I have granted both the tenantdata and events folders Read and Execute access:
The RBAC roles on the Gen1 lake do not grant access to the data (just to the resource itself), with the exception of the Owner role, which grants Super User access and therefore full data access.
You must grant access to the folders/files themselves using Data Explorer in the Portal (or by downloading Azure Storage Explorer), via POSIX permissions (ACLs).
This guide explains the detail of how to do that: https://learn.microsoft.com/en-us/azure/data-lake-store/data-lake-store-access-control
Reference: https://learn.microsoft.com/en-us/azure/data-lake-store/data-lake-store-secure-data
Only the Owner role automatically enables file system access. The Contributor, Reader, and all other roles require ACLs to enable any level of access to folders and files.
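For reference, the same POSIX ACLs can also be granted from Python with the azure-datalake-store package. This is a sketch under assumptions: the store name, tenant id, admin client id/secret, and the registered application's object id are hypothetical placeholders, and the principal running it must already have Super User (Owner) access to change ACLs.

# Minimal sketch: grant the registered application (service principal)
# read/execute ACLs on the folders it must traverse. All identifiers below
# are hypothetical placeholders.
from azure.datalake.store import core, lib

token = lib.auth(
    tenant_id="my-tenant-id",          # hypothetical
    client_id="admin-app-client-id",   # hypothetical principal with Super User access
    client_secret="admin-app-secret",  # hypothetical
)
adls = core.AzureDLFileSystem(token, store_name="mydatalake")  # hypothetical store

spn_object_id = "00000000-0000-0000-0000-000000000000"  # hypothetical object id of the registered app

# Execute is needed on every folder along the path; Read+Execute on the data folders.
adls.modify_acl_entries("/", acl_spec=f"user:{spn_object_id}:--x")
adls.modify_acl_entries("/tenantdata", acl_spec=f"user:{spn_object_id}:r-x")
adls.modify_acl_entries("/tenantdata/events", acl_spec=f"user:{spn_object_id}:r-x")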

Talend: tS3Put gives access denied error

I am trying to upload a list of files from a folder into an Amazon S3 folder.
I am able to manually upload files to the folder, but when I run the Talend job that does the same thing, it gives an "Access Denied" error.
I have the required keys for the S3 bucket.
If you are getting the Access Denied error then it means you do not have access to that bucket, or you should check the access constraints again.
You can also manually copy the files to S3 with a tool such as "CloudBerry Explorer for Amazon S3".
Just download it, provide the access keys, and see whether you have access to the bucket or not.
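Alternatively, the same access check can be scripted with boto3, using the keys configured in the tS3Put component; the bucket name, credentials, and region below are hypothetical placeholders.

# Minimal sketch: verify that the access keys used by the tS3Put component
# can actually write to the bucket. Bucket name and credentials are
# hypothetical placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",  # hypothetical: same key as in the Talend job
    aws_secret_access_key="...",  # hypothetical
    region_name="us-east-1",      # hypothetical: the bucket's region
)

try:
    s3.put_object(Bucket="my-bucket", Key="talend-access-check.txt", Body=b"check")
    print("Write succeeded: the keys have s3:PutObject on this bucket/prefix.")
except ClientError as err:
    # AccessDenied here means the bucket policy or IAM policy blocks these keys,
    # independently of Talend.
    print("Write failed:", err.response["Error"]["Code"])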