I have written an AWS Glue job where I am trying to read Snowflake tables as Spark DataFrames and also to write a Spark DataFrame into Snowflake tables. The job fails with "Insufficient privileges to operate on schema" in both scenarios.
But when I run an INSERT statement directly from the Snowflake CLI, I am able to insert data, so I do have the INSERT privilege.
So why does my job fail when I try to insert data from a DataFrame or read a Snowflake table as a DataFrame?
Below is my code to write data into the Snowflake table.
sfOptions = {
    "sfURL": "xt30972.snowflakecomputing.com",
    "sfAccount": "*****",
    "sfUser": "*****",
    "sfPassword": "****",
    "sfDatabase": "*****",
    "sfSchema": "******"
}
# Read the CSV file from S3 (s3_file_path is a placeholder for the S3 path of the file)
df = spark.read.format("csv") \
    .option("header", "false") \
    .option("delimiter", ",") \
    .load(s3_file_path)

# Write the DataFrame to the Snowflake table
df.write.format("net.snowflake.spark.snowflake") \
    .options(**sfOptions) \
    .option("dbtable", table_name) \
    .mode("append") \
    .save()
When you use the Snowflake CLI, I assume you switch to a proper role before executing SELECT or INSERT. On Spark, you need to manually switch to the role that has the SELECT/INSERT grants before operating on a table. You do this by issuing the following:
Utils.runQuery(sfOptions, "USE ROLE <your_role>")
This switches the role for the duration of your Spark session.
Also, please note that Snowflake's access structure is hierarchical: you need USAGE privileges on the database and the schema that house the table you are trying to use. Please make sure the role you are using to SELECT or INSERT has all the right grants.
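A minimal PySpark sketch of both approaches, assuming the Snowflake Spark connector is on the Glue job's classpath and that a role named ANALYST_ROLE (a placeholder) holds the required grants:

# Option 1: set the role in the connector options; "sfRole" is a standard connector option.
sfOptions["sfRole"] = "ANALYST_ROLE"  # placeholder: the role that holds the SELECT/INSERT grants

# Option 2: switch the session role through the connector's Utils class (called via the JVM from PySpark).
spark.sparkContext._jvm.net.snowflake.spark.snowflake.Utils.runQuery(sfOptions, "USE ROLE ANALYST_ROLE")

Any spark.read or df.write call that uses sfOptions afterwards then runs under that role.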
Related
I have a read-only user which has been granted SELECT on all tables, including default privileges for any future schema additions.
I have a Continuous Aggregate on a TimescaleDB Hypertable. The read-only user can query the hypertable fine, but when they attempt to make a query on the continuous aggregate they get the following error:
ERROR: permission denied for table _materialized_hypertable_4
Is there any special permission or additional configuration I need to add for this user to be able to query Continuous Aggregate materialized views?
I am running Postgres 13.2 with TimescaleDB 2.1.0.
I'm trying to set up logical replication between two Postgres instances on my server (if that's relevant: the master is a version 12.3 Postgres, the slave a 13.1).
I use a replication account on the master server, with the replication grant.
After setting up a publication on the master and a subscription on the slave, I see error messages:
-- on master :
ERROR: permission denied for table table1
STATEMENT: COPY public.table1 TO STDOUT
-- on slave :
ERROR: could not start initial contents copy for table "public.table1": ERROR: permission denied for table table1
And indeed, using psql to connect to my database as replication, I see that COPY table1 TO STDOUT gets rejected.
Question
Is there a grant I should add to allow COPY ... TO STDOUT for the replication user?
Or is there something else I am missing?
I had overlooked a line in the docs :
In order to be able to copy the initial table data, the role used for the replication connection must have the SELECT privilege on a published table (or be a superuser).
That would be:
-- while connected to the target database :
GRANT SELECT ON table1, table2, ... TO replication;
-- or, to grant access on all tables :
GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication;
I have a table MYSCHEMA.TEST_SNOWFLAKE_ROLE_T in Snowflake created using the role CONSOLE_USER.
MYSCHEMA has FUTURE GRANTS associated with it, which grant the following privileges to the role BATCH_USER on any table created under the schema MYSCHEMA: DELETE, INSERT, REFERENCES, SELECT, TRUNCATE, UPDATE.
The role BATCH_USER also has CREATE STAGE and USAGE privileges on the schema MYSCHEMA.
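For context, a rough sketch of how that privilege setup is typically created (the database name MYDB, the use of SECURITYADMIN, and the connection details are assumptions, not details from the question):

import snowflake.connector  # assumption: snowflake-connector-python is available

# Placeholder credentials; run as a role that is allowed to manage grants.
conn = snowflake.connector.connect(account="*****", user="*****", password="*****", role="SECURITYADMIN")
cur = conn.cursor()

# Schema-level privileges held by BATCH_USER
cur.execute("GRANT USAGE, CREATE STAGE ON SCHEMA MYDB.MYSCHEMA TO ROLE BATCH_USER")

# Future grants on any table later created in MYSCHEMA
cur.execute("GRANT DELETE, INSERT, REFERENCES, SELECT, TRUNCATE, UPDATE "
            "ON FUTURE TABLES IN SCHEMA MYDB.MYSCHEMA TO ROLE BATCH_USER")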
A second user belonging to the role BATCH_USER tries to insert data into the same table from a dataframe, using the following Spark SQL (Databricks), but fails with an insufficient privileges error message.
df.write.mode(op_mode) \
    .format("snowflake") \
    .options(**self.sfoptions) \
    .option("dbtable", snowflake_tbl_name) \
    .option("truncate_table", "on") \
    .save()
The following error message appears:
Py4JJavaError: An error occurred while calling o908.save.
: net.snowflake.client.jdbc.SnowflakeSQLException: SQL access control error
: Insufficient privileges to operate on table 'TEST_SNOWFLAKE_ROLE_T')
The role CONSOLE_USER has ownership rights on the table, hence the role BATCH_USER would not be able to drop the table, but adding the option("truncate_table", "on") setting should have prevented an automatic overwrite of the table schema.
I've gone through the available Snowflake and Databricks documentation several times, but can't seem to figure out what is causing the insufficient privilege issue.
Any help is much appreciated!
I figured it out eventually.
The error occurred because the table was created by the role CONSOLE_USER, which retained ownership privileges on the table.
The Spark connector for Snowflake uses a staging table for writing the data. If the data loading operation is successful, the original target table is dropped and the staging table is renamed to the original target table's name.
To rename a table or swap two tables, the role performing the operation must have OWNERSHIP privileges on the table(s). In the situation above, ownership was never transferred to the role BATCH_USER, hence the error. The fix is to turn the staging table off:
df.write.mode(op_mode) \
    .format("snowflake") \
    .options(**self.sfoptions) \
    .option("dbtable", snowflake_tbl_name) \
    .option("truncate_table", "on") \
    .option("usestagingtable", "off") \
    .save()
The solution was to avoid using a staging table altogether, although, going by the documentation, Snowflake recommends using one pretty strongly.
This is a good reference for troubleshooting custom privileges:
https://docs.snowflake.net/manuals/user-guide/security-access-control-overview.html#role-hierarchy-and-privilege-inheritance
1. Is the second BATCH_USER user inheriting any privileges? Check this by asking the user to run SHOW GRANTS in their session to see what privileges they have on the table: https://docs.snowflake.net/manuals/sql-reference/sql/show-grants.html
What are the grants listed for the BATCH_USER having access issues to the following:
SHOW GRANTS ON <object>
SHOW GRANTS OF ROLE <role>
SHOW FUTURE GRANTS IN SCHEMA <schema>
2. Was a role specified for the second BATCH_USER when they tried to write to "dbtable"?
"When a user attempts to execute an action on an object, Snowflake compares the privileges available in the user's session against the privileges required on the object for that action. If the session has the required privileges on the object, the action is allowed." (see the access-control overview linked above)
Try: https://docs.snowflake.net/manuals/sql-reference/sql/use-role.html
3. Since you mentioned future grants were used on the objects created: note that granting FUTURE grants is limited to SECURITYADMIN, per https://community.snowflake.com/s/question/0D50Z00009MDCBv/can-a-role-have-rights-to-grant-future-rights
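As a rough, scripted version of those checks (snowflake-connector-python and the connection details are assumptions; the same statements can be run in a Snowflake worksheet):

import snowflake.connector

# Placeholder credentials; connect as the user hitting the access error, in the BATCH_USER role.
conn = snowflake.connector.connect(account="*****", user="*****", password="*****", role="BATCH_USER")
cur = conn.cursor()

# Privileges on the target table, grants of the role, and future grants in the schema
for stmt in ("SHOW GRANTS ON TABLE MYSCHEMA.TEST_SNOWFLAKE_ROLE_T",
             "SHOW GRANTS OF ROLE BATCH_USER",
             "SHOW FUTURE GRANTS IN SCHEMA MYSCHEMA"):
    print(stmt)
    for row in cur.execute(stmt):
        print(row)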
I want to replicate two databases with SymmetricDS.
First, I tried to follow this tutorial: https://www.symmetricds.org/doc/3.8/html/tutorials.html
I have installed SymmetricDS on my PostgreSQL server
and executed these commands:
../bin/dbimport --engine corp-000 --format XML --alter-case create_sample.xml
../bin/symadmin --engine corp-000 create-sym-tables
../bin/dbimport --engine corp-000 insert_sample.sql
bin/sym
But no tables were written to my database, and no symmetric tables either.
Can you help?
Verify that the user account provided to connect your SymmetricDS nodes has the ability to create tables. Then check the symmetric.log for any obvious errors. By default, the tables are created in the default schema of the user account provided, so be sure to check that schema for the tables.
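A quick way to check this from Python (psycopg2 and the connection details are assumptions; the same queries work from psql):

import psycopg2

# Placeholder connection details for the account configured in the SymmetricDS engine properties
conn = psycopg2.connect(host="localhost", dbname="corp", user="symmetric", password="*****")
cur = conn.cursor()

# Can this account create tables in its default schema?
cur.execute("SELECT current_schema(), has_schema_privilege(current_user, current_schema(), 'CREATE')")
print(cur.fetchone())

# Which schema will CREATE TABLE target by default?
cur.execute("SHOW search_path")
print(cur.fetchone())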
I'm trying to change the last column of the Hive table (which is of type STRING in Hive) to a Postgres date type; below is the command:
sqoop export
--connect jdbc:postgresql://192.168.11.1:5432/test
--username test
--password test_password
--table posgres_table
--hcatalog-database hive_db
--hcatalog-table hive_table
I have tried using this, but the column in Postgres is still empty:
--map-column-hive batch_date=date
--map-column-hive works only for Sqoop import (i.e. while fetching data from an RDBMS into HDFS/Hive).
All you need is to have your Hive STRING data in a proper date format; then it should work.
Internally, sqoop export creates statements like
INSERT INTO posgres_table...
You can verify this by manually running an INSERT INTO posgres_table VALUES (...) statement via the JDBC driver or any client like pgAdmin, squirrel-sql, etc.
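A small sketch of that verification in Python (psycopg2 and the extra column names are assumptions; batch_date and the connection details come from the question):

import psycopg2

# Connection details taken from the sqoop command above
conn = psycopg2.connect(host="192.168.11.1", port=5432, dbname="test", user="test", password="test_password")
cur = conn.cursor()

# An ISO-formatted string such as '2021-03-15' is cast to DATE by Postgres on insert;
# the "id" column is a placeholder for the real table definition.
cur.execute("INSERT INTO posgres_table (id, batch_date) VALUES (%s, %s)", (1, "2021-03-15"))
conn.commit()

cur.execute("SELECT batch_date FROM posgres_table WHERE id = %s", (1,))
print(cur.fetchone())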