Granting privileges on a Postgres table - postgresql

I am absolutely new to the PostgreSQL database. Using phpPgAdmin, I was able to create a database, a user and a table. I am trying to insert a row into the table from my PHP file with the following code:
$db = pg_connect("$host $port $dbname $credentials");
if ($db) {
    $psql = "INSERT INTO LOGINS (mid, name, ip, date) VALUES ($usid, '$naam', '$ipad', '$dte')";
    $ret = pg_query($db, $psql);
    $tot = pg_affected_rows($ret);
}
I am getting the error:
Warning: pg_query(): Query failed: ERROR: permission denied for relation..
I understand that some privileges need to be granted, but where and how?

Use GRANT to give privileges to users in Postgres
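For example, a minimal sketch, to be run as the table's owner or a superuser; webuser is a placeholder for the role your PHP script connects as:
-- Grant the connecting role the rights it needs on the table
-- (unquoted LOGINS in the PHP query folds to lowercase logins).
GRANT SELECT, INSERT ON logins TO webuser;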

Related

Azure Python: create a user in a database

I am following an Azure documentation quickstart tutorial to create a resource group with one SQL server and one database. The code runs just fine and I am able to create all the resources. Now I am curious: how can I run, in the same script, the code to create a read-only user inside the database I am creating?
This is the code I have:
import os
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.sql import SqlManagementClient

REGION = 'westus'
GROUP_NAME = 'resource-group-name'
SERVER_NAME = 'server-name'
DATABASE_NAME = 'sample-db'


def run_example():
    subscription_id = os.environ.get(
        'AZURE_SUBSCRIPTION_ID',
        '11111-11-1111-11111-111111')  # your Azure Subscription Id
    credentials = ServicePrincipalCredentials(
        client_id='my-client-id',
        secret='my-secret',
        tenant='tenant'
    )
    resource_client = ResourceManagementClient(credentials, subscription_id)
    sql_client = SqlManagementClient(credentials, subscription_id)

    # You MIGHT need to add SQL as a valid provider for these credentials
    # If so, this operation has to be done only once for each credentials
    resource_client.providers.register('Microsoft.Sql')

    # Create Resource group
    print('Create Resource Group')
    resource_group_params = {'location': 'westus'}
    print_item(resource_client.resource_groups.create_or_update(
        GROUP_NAME, resource_group_params))

    # Create a SQL server
    print('Create a SQL server')
    server = sql_client.servers.create_or_update(
        GROUP_NAME,
        SERVER_NAME,
        {
            'location': REGION,
            'version': '12.0',  # Required for create
            'administrator_login': 'server-login',  # Required for create
            'administrator_login_password': 'pass-word'  # Required for create
        }
    )
    print_item(server)
    print('\n\n')

    # Get SQL server
    print('Get SQL server')
    server = sql_client.servers.get_by_resource_group(
        GROUP_NAME,
        SERVER_NAME,
    )
    print_item(server)
    print("\n\n")

    # List SQL servers by resource group
    print('List SQL servers in a resource group')
    for item in sql_client.servers.list_by_resource_group(GROUP_NAME):
        print_item(item)
    print("\n\n")

    # List SQL servers by subscription
    print('List SQL servers in a subscription')
    for item in sql_client.servers.list():
        print_item(item)
    print("\n\n")

    # List SQL servers usage
    print('List SQL servers usage')
    for item in sql_client.servers.list_usages(GROUP_NAME, SERVER_NAME):
        print_metric(item)
    print("\n\n")

    # Create a database
    print('Create SQL database')
    async_db_create = sql_client.databases.create_or_update(
        GROUP_NAME,
        SERVER_NAME,
        DATABASE_NAME,
        {
            'location': REGION
        }
    )
    # Wait for completion and return created object
    database = async_db_create.result()
    print_item(database)
    print("\n\n")

    # Get SQL database
    print('Get SQL database')
    database = sql_client.databases.get(
        GROUP_NAME,
        SERVER_NAME,
        DATABASE_NAME
    )
    print_item(database)
    print("\n\n")

    # List SQL databases by server
    print('List SQL databases in a server')
    for item in sql_client.databases.list_by_server(GROUP_NAME, SERVER_NAME):
        print_item(item)
    print("\n\n")

    # List SQL database usage
    print('List SQL database usage')
    for item in sql_client.databases.list_usages(GROUP_NAME, SERVER_NAME, DATABASE_NAME):
        print_metric(item)
    print("\n\n")


def print_item(group):
    """Print an Azure object instance."""
    print("\tName: {}".format(group.name))
    print("\tId: {}".format(group.id))
    print("\tLocation: {}".format(group.location))
    if hasattr(group, 'tags'):
        print("\tTags: {}".format(group.tags))
    if hasattr(group, 'properties'):
        print_properties(group.properties)


def print_metric(group):
    """Print an SQL metric."""
    print("\tResource Name: {}".format(group.resource_name))
    print("\tName: {}".format(group.display_name))
    print("\tValue: {}".format(group.current_value))
    print("\tUnit: {}".format(group.unit))


def print_properties(props):
    """Print a ResourceGroup properties instance."""
    if props and props.provisioning_state:
        print("\tProperties:")
        print("\t\tProvisioning State: {}".format(props.provisioning_state))
    print("\n\n")


if __name__ == "__main__":
    run_example()
I am missing this last bit, where I want to create this read-only user inside the database I am creating. Thank you very much for your time and help, guys.
Creating a user in an Azure SQL database is very different from creating the database instance. It needs the admin account (or sufficient permissions), and the user is bound to a login: the login must be created in the master database, the user must be created in the target user database, and then the database role must be assigned to the user. The code you are using is not suitable for creating the user.
Even with a pyodbc script you still need a connection string that specifies the database, user and password, and the limitation is that you can't access the master database and the user database with a single connection string or SQL database connection.
I'm afraid we can't do that with this code.
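For reference, here is a minimal T-SQL sketch of the steps described above; all names and the password are placeholders, and the statements have to be run over direct SQL connections (for example with pyodbc), the first against the master database and the rest against the target database:
-- Run against the master database:
CREATE LOGIN readonly_login WITH PASSWORD = 'StrongPassword1!';
-- Run against the user database (e.g. sample-db):
CREATE USER readonly_user FOR LOGIN readonly_login;
ALTER ROLE db_datareader ADD MEMBER readonly_user;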

AWS RDS PostgreSQL - copying from/to csv files on EC2 instance

I've run into a problem that I haven't been able to fix for a few days.
The thing is, I have the following architecture:
Two EC2 instances which are nodes running the Trifacta application (an application for data scientists),
An AWS RDS PostgreSQL instance.
Since the newest version, the Trifacta application uses a new schema in the database and performs some database migrations at application start. During startup, some tables are copied into *.csv files and then copied back into the tables from these *.csv files.
It all works when run against a local database, because the superuser role in PostgreSQL allows such actions.
When it is run against the AWS RDS PostgreSQL instance, it fails with the following errors:
Error running query COPY (select "id" from workspaces) TO '/tmp/workspaces.csv' DELIMITER ',' CSV HEADER; error: must be superuser to COPY to or from a file
at Connection.parseE (/opt/trifacta/migration-framework/node_modules/pg/lib/connection.js:614:13)
at Connection.parseMessage (/opt/trifacta/migration-framework/node_modules/pg/lib/connection.js:413:19)
at Socket.<anonymous> (/opt/trifacta/migration-framework/node_modules/pg/lib/connection.js:129:22)
at Socket.emit (events.js:315:20)
at addChunk (_stream_readable.js:295:12)
at readableAddChunk (_stream_readable.js:271:9)
at Socket.Readable.push (_stream_readable.js:212:10)
at TCP.onStreamRead (internal/stream_base_commons.js:186:23) {
length: 178,
severity: 'ERROR',
code: '42501',
detail: undefined,
hint: "Anyone can COPY to stdout or from stdin. psql's \\copy command also works for anyone.",
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'copy.c',
line: '905',
routine: 'DoCopy'
}
That's just the first one; there are a lot of them. I did some research on this and figured out why it's happening: AWS uses the rds_superuser role instead of superuser, and the privileges of that role aren't sufficient for copying from/to the local filesystem.
From the psql console it can be done by using \copy instead of COPY, but in my case that isn't helpful, because Trifacta does this by executing SQL queries from its *.js files and, as far as I know, it isn't possible to run a \copy query from anywhere other than the psql CLI.
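For illustration, this is roughly what the client-side equivalent of the failing statement looks like when run from psql on the EC2 node; \copy streams the file through the client connection, so it does not require superuser:
\copy (SELECT "id" FROM workspaces) TO '/tmp/workspaces.csv' DELIMITER ',' CSV HEADER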
At the suggestion of @IMSoP, I am adding the code of the Trifacta *.js file where the actions are performed:
ConnectUtils.copyQuery = function(query, connection, options = {}) {
  ensure.notNull(connection.base.DriverName, 'connection driver name');
  ensure.notNull(options.tableName, 'table name');
  const table = options.tableName;
  const filePath = ConnectUtils.getOutputFilePath(table, options);
  if (connection.base.DriverName === DATABASE_JS_TYPE[MYSQL]) {
    return `${query} INTO OUTFILE \'${filePath}\' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n'`;
  } else if (connection.base.DriverName === DATABASE_JS_TYPE[POSTGRES]) {
    return `COPY (${query}) TO '${filePath}' DELIMITER ',' CSV HEADER;`;
  } else if (connection.base.DriverName === DATABASE_JS_TYPE[SQLITE]) {
    return query;
  }
  return;
};

ConnectUtils.loadQuery = function(connection, options = {}) {
  ensure.notNull(connection.base.DriverName, 'connection driver name');
  ensure.notNull(connection.base.Database, 'connection database');
  ensure.notNull(options.tableName, 'table name');
  const table = options.tableName;
  const filePath = ConnectUtils.getOutputFilePath(table, options);
  if (connection.base.DriverName === DATABASE_JS_TYPE[MYSQL]) {
    return `LOAD DATA INFILE \'${filePath}\' INTO TABLE ${
      connection.base.Database
    }.${table} FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n' IGNORE 1 ROWS;`;
  } else if (connection.base.DriverName === DATABASE_JS_TYPE[POSTGRES]) {
    return `COPY ${table} FROM '${filePath}' DELIMITER ',' CSV HEADER;`;
  }
  return;
};
${filePath} is a path on the EC2 instance and ${table} are the tables on the AWS RDS instance. From your answers before I edited my question, I assume there is no way to work around this, as this script is trying to reach ${filePath} as a path on the AWS RDS instance. Right?
Thanks for reading.

storage migration check error: error="pq: permission denied for table vault_kv_store"

Here is my vault.config file.
ui = true
backend "postgresql" {
connection_url = "postgres://user:pwd#192.168.1.1:5432/vault?sslmode=disable"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
}
disable_mlock = true
I have also created the tables vault_kv_store and vault_ha_locks under the public schema in the vault database, as per the Vault storage docs.
We need help to fix this problem.
Thank You.
You used a different user when you created the tables. Run the following command after connecting to the vault database:
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO user;
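Alternatively, you can make the Vault connection role the owner of the tables instead; a sketch, where vault_user is a placeholder for the role in your connection_url:
-- Make the Vault role own its own storage tables
ALTER TABLE vault_kv_store OWNER TO vault_user;
ALTER TABLE vault_ha_locks OWNER TO vault_user;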

How to destroy Postgres databases owned by other users in RDS using Terraform?

I managed to get Terraform nicely creating databases and roles in an RDS Postgres instance, but due to the stripped-down permissions of rds_superuser I can't see an easy way to destroy the created databases that are owned by another user.
Using the following config:
resource "postgresql_role" "app" {
name = "app"
login = true
password = "foo"
skip_reassign_owned = true
}
resource "postgresql_database" "database" {
name = "app_database"
owner = "${postgresql_role.app.name}"
}
(For reference, skip_reassign_owned is required because the rds_superuser group doesn't get the necessary permissions to reassign ownership.)
leads to this error:
Error applying plan:
1 error(s) occurred:
* postgresql_database.database (destroy): 1 error(s) occurred:
* postgresql_database.database: Error dropping database: pq: must be owner of database debug_db1
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Using local-exec provisioners I was able to grant the role that owned the database to the admin user and the application user:
resource "aws_db_instance" "database" {
...
}
provider "postgresql" {
host = "${aws_db_instance.database.address}"
port = 5432
username = "myadminuser"
password = "adminpassword"
sslmode = "require"
connect_timeout = 15
}
resource "postgresql_role" "app" {
name = "app"
login = true
password = "apppassword"
skip_reassign_owned = true
}
resource "postgresql_role" "group" {
name = "${postgresql_role.app.name}_group"
skip_reassign_owned = true
provisioner "local-exec" {
command = "PGPASSWORD=adminpassword psql -h ${aws_db_instance.database.address} -U myadminuser postgres -c 'GRANT ${self.name} TO myadminuser, ${postgresql_role.app.name};'"
}
}
resource "postgresql_database" "database" {
name = "mydatabase"
owner = "${postgresql_role.group.name}"
}
which seems to work compared to setting ownership only for the app user. I do wonder if there's a better way I can do this without having to shell out in a local-exec though?
After raising this question I managed to raise a pull request with the fix, which was released in version 0.1.1 of the PostgreSQL provider, so this now works fine in the latest release of the provider.
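For reference, the SQL behind the local-exec workaround above is simply granting the owning role to the admin account; membership in the owning role then satisfies the ownership check when dropping the database. A sketch using the names from the example, run as myadminuser:
-- Membership in the owning role lets the admin pass the ownership check
GRANT app_group TO myadminuser;
DROP DATABASE mydatabase;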

Creating postgres schemas using psycopg cur.execute

My Python application allows users to create schemas with names of their choosing. I need a way to protect the application from SQL injection.
The SQL to be executed reads
CREATE SCHEMA schema_name AUTHORIZATION user_name;
The psycopg documentation (generally) recommends passing parameters to execute like so
conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()
query = 'CREATE SCHEMA IF NOT EXISTS %s AUTHORIZATION %s;'
params = ('schema_name', 'user_name')
cur.execute(query, params)
But this results in a query with single quotes, which fails:
CREATE SCHEMA 'schema_name' AUTHORIZATION 'user_name';
> fail
Is there a way to remove the quotes, or should I just settle for stripping non-alphanumeric characters from the schema name and call it a day? The latter seems kind of ugly, but should still work.
To pass identifiers use AsIs. But that exposes you to SQL injection:
import psycopg2
from psycopg2.extensions import AsIs
conn = psycopg2.connect(database='cpn')
cursor = conn.cursor()
query = """CREATE SCHEMA %s AUTHORIZATION %s;"""
param = (AsIs('u1'), AsIs('u1; select * from user_table'))
print(cursor.mogrify(query, param))
Output:
CREATE SCHEMA u1 AUTHORIZATION u1; select * from user_table;
Here's some boilerplate that might help. I've used environment variables, but you can use a .conf file or whatever you like.
Store your connection variables in a .env file:
db_host = "localhost"
db_port = "5432"
db_database = "postgres"
db_user = "postgres"
db_password = "postgres"
db_schema = "schema2"
Load params in your app.py and assign them to variables, then use the variables where required:
import os
import psycopg2
from dotenv import load_dotenv

# Load your environment variables here:
load_dotenv()
db_host = os.environ["db_host"]
db_port = os.environ["db_port"]
db_database = os.environ["db_database"]
db_user = os.environ["db_user"]
db_password = os.environ["db_password"]
db_schema = os.environ["db_schema"]

# Build Connection:
connection = psycopg2.connect(host=db_host,
                              port=db_port,
                              database=db_database,
                              user=db_user,
                              password=db_password
                              )

# Build Query Strings:
CREATE_SCHEMA = f"CREATE SCHEMA IF NOT EXISTS {db_schema};"
CREATE_TABLE1 = f"CREATE TABLE IF NOT EXISTS {db_schema}.table1 (...);"
CREATE_TABLE2 = f"CREATE TABLE IF NOT EXISTS {db_schema}.table2 (...);"

# Create Schema and Tables:
with connection:
    with connection.cursor() as cursor:
        cursor.execute(CREATE_SCHEMA)
        cursor.execute(CREATE_TABLE1)
        cursor.execute(CREATE_TABLE2)
As of psycopg2 >= 2.7, psycopg2.sql can be used to compose dynamic statements, which also guards against SQL injection.