Granting permission in PostgreSQL to a user never gives permission

I'm trying to give access to a database to a new user in PostgreSQL 10.12. For context, I'm using Ubuntu 18.04. I created the database with this code:
CREATE DATABASE jiradb WITH ENCODING 'UNICODE' LC_COLLATE 'C' LC_CTYPE 'C' TEMPLATE template0;
The user was given access with this code:
GRANT ALL PRIVILEGES ON DATABASE jiradb TO jiradbuser;
Whenever I type "\z", all I see is this:
Access privileges
Schema | Name | Type | Access privileges | Column privileges | Policies
--------+------+------+-------------------+-------------------+----------
(0 rows)
What I want to see is that this user "jiradbuser" has all access to the database "jiradb". I've checked PostgreSQL's web site, and nothing there has been helpful. How can I give this user the proper access?

The command
GRANT { { CREATE | CONNECT | TEMPORARY | TEMP } [, ...] | ALL [ PRIVILEGES ] }
ON DATABASE database_name [, ...]
TO role_specification [, ...] [ WITH GRANT OPTION ]
grants privileges on the database itself ({ CREATE | CONNECT | TEMPORARY | TEMP }).
You will see those privileges with the meta-command
\l
The meta-command
\z
shows privileges for tables, views, and sequences; in other words, databases and tables/views/sequences are different kinds of objects.
You will see the table, view, and sequence privileges granted with the command
GRANT { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER }
[, ...] | ALL [ PRIVILEGES ] }
ON { [ TABLE ] table_name [, ...]
| ALL TABLES IN SCHEMA schema_name [, ...] }
TO role_specification [, ...] [ WITH GRANT OPTION ]
Here you can use
\z
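Putting the two together, here is a minimal sketch using the names from the question (jiradb, jiradbuser); the schema name public and the table-level grant are assumptions about what the poster ultimately wants:
-- Database-level privileges: connect, create schemas, create temp tables (visible with \l)
GRANT ALL PRIVILEGES ON DATABASE jiradb TO jiradbuser;
-- Table-level privileges: run this while connected to jiradb (visible with \z);
-- the schema name "public" is an assumption
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO jiradbuser;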

Related

Unable to export AWS RDS Postgres table to CSV in S3, using aws_s3.query_export_to_s3 function

I closely followed the documentation on exporting AWS RDS Postgres tables to S3 as CSV and still could not make it work.
Documentation URL
More specifically, after creating the aws_s3 extension
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
I tried to execute this function:
SELECT * from aws_s3.query_export_to_s3('select * from users limit 10', 'sample-s3-bucket', '/users_demo.csv');
which failed with the following error:
[42883] ERROR: function aws_s3.query_export_to_s3(unknown, unknown,
unknown) does not exist Hint: No function matches the given name and
argument types. You might need to add explicit type casts.
This error is not documented (or I could not find relevant documentation). It seems the aws_s3.query_export_to_s3 function is actually missing from the extension!
I tried to see if this function aws_s3.query_export_to_s3 is actually missing. I have already tried:
Listing the available extensions shows the aws_s3 extension as installed:
SELECT * FROM pg_available_extensions where name like '%aw%';
Result:
name | default_version | installed_version | comment
-------------+-----------------+-------------------+---------------------------------------------
aws_s3 | 1.0 | 1.0 | AWS S3 extension for importing data from S3
aws_commons | 1.0 | 1.0 | Common data types across AWS services
(2 rows)
Listing the extension functions does not show the query_export_to_s3 function:
SELECT e.extname, ne.nspname AS extschema, p.proname, np.nspname AS proschema
FROM pg_catalog.pg_extension AS e
INNER JOIN pg_catalog.pg_depend AS d ON (d.refobjid = e.oid)
INNER JOIN pg_catalog.pg_proc AS p ON (p.oid = d.objid)
INNER JOIN pg_catalog.pg_namespace AS ne ON (ne.oid = e.extnamespace)
INNER JOIN pg_catalog.pg_namespace AS np ON (np.oid = p.pronamespace)
WHERE d.deptype = 'e' AND e.extname like '%aws%'
ORDER BY 1, 3;
Result:
extname | extschema | proname | proschema
-------------+-----------+------------------------+-------------
aws_commons | public | create_aws_credentials | aws_commons
aws_commons | public | create_s3_uri | aws_commons
aws_s3 | public | table_import_from_s3 | aws_s3
aws_s3 | public | table_import_from_s3 | aws_s3
(4 rows)
Finally, dropping and recreating the aws_s3 extension did not work.
More info: PostgreSQL 12.3 on x86_64-pc-linux-gnu.
Can anyone please confirm that this function is actually missing? In other words, can anyone use this function?
Try upgrading to Postgres 12.4. I'm having a similar problem and that's what AWS support told me (response pasted below). [edited]
Update
Initially I hadn't fully gotten this working, but I can confirm that upgrading to Postgres 12.4 and dropping and recreating the extension worked.
DROP EXTENSION aws_s3 CASCADE;
DROP EXTENSION aws_commons CASCADE;
CREATE EXTENSION aws_s3 CASCADE;
Original response from AWS Support:
Based on the output of describe-db-engine-versions[1] I can see that only
the specific engine versions below support the s3Export feature. Hence
version 12.2 does not support the export to S3 feature.
[
  {
    "Engine": "postgres",
    "EngineVersion": "10.14",
    "SupportedFeatureNames": [
      "s3Import",
      "s3Export"
    ]
  },
  {
    "Engine": "postgres",
    "EngineVersion": "11.9",
    "SupportedFeatureNames": [
      "s3Import",
      "s3Export"
    ]
  },
  {
    "Engine": "postgres",
    "EngineVersion": "12.4",
    "SupportedFeatureNames": [
      "s3Import",
      "s3Export"
    ]
  }
]
Upgrading the RDS instance to 12.4 and executing the following worked for me.
DROP EXTENSION aws_s3;
DROP EXTENSION aws_commons;
CREATE EXTENSION aws_s3 CASCADE;
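Once the function exists, the call itself looks roughly like this (a sketch based on the question's query; the region 'us-east-1' is an assumption and should match the bucket's actual region):
SELECT * FROM aws_s3.query_export_to_s3(
    'SELECT * FROM users LIMIT 10',
    aws_commons.create_s3_uri('sample-s3-bucket', 'users_demo.csv', 'us-east-1')
);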

PostgreSQL GRANT command syntax

I'm having trouble with the syntax to grant a developer the ability to create or replace a function. The syntax guide doesn't seem to show how to do this. Can anyone provide the correct syntax?
Sorry, still earning the reps for comments.
Can you please share your GRANT statement and the complete error message?
I created a test table with a few rows of data, and a test function that returns a rowcount on the test table.
I granted execute to user2 with no errors:
grant execute on function f_totrecords() to user2 with grant option ;
It's in the docs - search the page for "function" and notice the "with grant option" in the syntax:
GRANTS
GRANT { EXECUTE | ALL [ PRIVILEGES ] }
ON { FUNCTION function_name ( [ [ argmode ] [ arg_name ] arg_type [, ...] ] ) [, ...]
| ALL FUNCTIONS IN SCHEMA schema_name [, ...] }
TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ]
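For example, a minimal sketch applying that syntax; the schema name app and the role developer are placeholders, not from the question:
-- Let developer run (and re-grant) every existing function in schema app
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA app TO developer WITH GRANT OPTION;
-- Note: GRANT EXECUTE only controls who may call a function. Letting a developer
-- CREATE (OR REPLACE) functions in a schema is a schema-level privilege instead;
-- replacing an existing function additionally requires owning it.
GRANT USAGE, CREATE ON SCHEMA app TO developer;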

psql SQL Interpolation in a code block

In some of my scripts I use the SQL interpolation feature of the psql utility:
basic.sql:
update :schema.mytable set ok = true;
> psql -h 10.0.0.1 -U postgres -f basic.sql -v schema=myschema
Now I need a bit more complicated scenario. I need to specify the schema name (and desirably some other things) inside a PL/pgSQL code block:
pg.sql:
do
$$
begin
update :schema.mytable set ok = true;
end;
$$
But unfortunately this does not work, since psql does not replace :variables inside $$.
Is there a way to workaround it in general? Or more specifically, how to substitute schema names into pgSQL code block or function definition?
In your referenced docs:
Variable interpolation will not be performed within quoted SQL
literals and identifiers. Therefore, a construction such as ':foo'
doesn't work to produce a quoted literal from a variable's value (and
it would be unsafe if it did work, since it wouldn't correctly handle
quotes embedded in the value).
It does not matter whether the quotes are double dollar signs or single quotes - it won't work, e.g.:
do
'
begin
update :schema.mytable set ok = true;
end;
'
ERROR: syntax error at or near ":"
To pass a variable into a quoted statement another way, you can try using shell variables, e.g.:
MacBook-Air:~ vao$ cat do.sh; export schema_name='smth' && bash do.sh
psql -X so <<EOF
\dn+
do
\$\$
begin
execute format ('create schema %I','$schema_name');
end;
\$\$
;
\dn+
EOF
List of schemas
Name | Owner | Access privileges | Description
----------+----------+-------------------+------------------------
public | vao | vao=UC/vao +| standard public schema
| | =UC/vao |
schema_a | user_old | |
(2 rows)
DO
List of schemas
Name | Owner | Access privileges | Description
----------+----------+-------------------+------------------------
public | vao | vao=UC/vao +| standard public schema
| | =UC/vao |
schema_a | user_old | |
smth | vao | |
(3 rows)
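As an alternative sketch (not from the original answer), you can also pass the psql variable into the DO block through a session setting; the setting name myvars.schema is arbitrary:
-- run as: psql -f pg.sql -v schema=myschema
-- :'schema' is interpolated by psql outside of any quoted block
SELECT set_config('myvars.schema', :'schema', false);
do
$$
begin
    -- read the value back inside PL/pgSQL and build the statement dynamically
    execute format('update %I.mytable set ok = true',
                   current_setting('myvars.schema'));
end;
$$;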

Convert output of Postgres query to UTF-8

I'm using PostgreSQL.
In my attachment table, the filename_attachmnt column contains characters in the wrong format.
For example, it contains: تكلي٠الزميل. احمدالوردي.pdf
The correct name should be: تكليف الزميل. احمدالوردي.pdf
I tried this query without success:
select encode(convert(cast(filename_attachmnt as bytea),'UTF8'),'escape') from attachment
Update:
This is the config of my database:
CREATE DATABASE "mobiltyDatabase"
WITH OWNER = postgres
ENCODING = 'SQL_ASCII'
TABLESPACE = pg_default
LC_COLLATE = 'C'
LC_CTYPE = 'C'
CONNECTION LIMIT = -1;
GRANT CONNECT, TEMPORARY ON DATABASE "mobiltyDatabase" TO public;
GRANT ALL ON DATABASE "mobiltyDatabase" TO postgres;
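Since the database encoding is SQL_ASCII (the server neither validates nor converts the stored bytes), a useful first diagnostic step (only a sketch, not a fix) is to dump the raw bytes so you can identify the encoding they were really written in:
-- In an SQL_ASCII database convert_to() performs no real conversion,
-- so this simply exposes the stored bytes; the hex dump helps identify the encoding
SELECT filename_attachmnt,
       encode(convert_to(filename_attachmnt, 'SQL_ASCII'), 'hex') AS raw_hex
FROM attachment;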

How to create a session-level table in PostgreSQL?

I am working on an application using Spring, Hibernate, and PostgreSQL 9.1. The requirement is that users can upload bulk data from the browser.
Now, the data uploaded by each user is very crude and requires lots of validation before it can be put into the actual transaction table. I want a temporary table to be created whenever a user uploads; after the data is successfully dumped into this temp table, I will call a procedure to perform the actual work of validating and moving the data from the temp table to the transaction table. If an error is encountered anywhere, I will dump logs to another table so the user can know the status of their upload from the browser.
In PostgreSQL, do we have anything like a temporary, session-level table?
From the 9.1 manual:
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name ( [
{ column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
| table_constraint
| LIKE parent_table [ like_option ... ] }
[, ... ]
] )
[ INHERITS ( parent_table [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace ]
The key word here is TEMPORARY, although it is not necessary for the table to be temporary. It could be a permanent table that you truncate before inserting. The whole operation (inserting and validating) would have to be wrapped in a transaction.
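A minimal sketch of that pattern; the table and column names are placeholders, not from the question:
BEGIN;
-- Staging table, visible only to this session and dropped automatically at COMMIT
CREATE TEMPORARY TABLE upload_staging (
    row_no   integer,
    raw_line text
) ON COMMIT DROP;
-- Bulk load the raw upload (COPY or batched INSERTs from the application)
INSERT INTO upload_staging (row_no, raw_line) VALUES (1, 'first line');
-- Validate here, then move the clean rows into the real transaction table, e.g.:
-- INSERT INTO transaction_table (...) SELECT ... FROM upload_staging WHERE <checks pass>;
COMMIT;  -- upload_staging is dropped here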