Insufficient Lake Formation Permissions - amazon-redshift

I am trying to create an external table from within DBeaver on Redshift Spectrum. I get the error: SQL Error [XX000]: ERROR: Insufficient Lake Formation permission(s): Required Create Table on mydatabasename.
Below is the create statement:
CREATE EXTERNAL TABLE spectrum_schema.imauditmetrics_mongodb_invoices(
    purchase_id varchar(40),
    invoice_id varchar(40),
    invoice_number varchar(40),
    po_number varchar(40),
    invoice_amount float8,
    auditor varchar(40),
    flag_comment varchar(40),
    client_number varchar(40),
    resubmitted_date timestamp,
    last_update_time timestamp,
    note_response varchar(255),
    note varchar(255))
PARTITIONED BY (
    date varchar(40))
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS textfile
LOCATION 's3://mybucket/invoices/';
Thanks in advance.
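The error means the principal Redshift Spectrum acts as, the IAM role attached to the external schema, has not been granted the Create Table permission on mydatabasename in Lake Formation; that grant is made in the Lake Formation console or API rather than in SQL. A hedged sketch for finding which role needs the grant, using Redshift's svv_external_schemas catalog view (the schema name is the one from the DDL above):

-- esoptions contains the IAM_ROLE the external schema uses; grant that
-- role Create Table on mydatabasename in Lake Formation, then retry.
SELECT schemaname, databasename, esoptions
FROM svv_external_schemas
WHERE schemaname = 'spectrum_schema';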

Related

External table 'logdata' is not accessible because location does not exist or it is used by another process

I am trying to create an external table on Azure Synapse, but once I run SELECT * FROM logdata I get the error "External table 'logdata' is not accessible because location does not exist or it is used by another process".
Below is my code:
CREATE DATABASE appdb;
CREATE MASTER KEY ENCRYPTION BY PASSWORD =<>
CREATE DATABASE SCOPED CREDENTIAL SasToken
WITH IDENTITY='SHARED ACCESS SIGNATURE',
SECRET=<>
CREATE EXTERNAL DATA SOURCE log_data
WITH (
LOCATION='https://<>.dfs.core.windows.net/data',
CREDENTIAL=SasToken
)
CREATE EXTERNAL FILE FORMAT TextFileFormat WITH(
FORMAT_TYPE=DELIMITEDTEXT,
FORMAT_OPTIONS(
FIELD_TERMINATOR=',',
FIRST_ROW=2
)
)
CREATE EXTERNAL TABLE logdata(
[Id] INT,
[Correlationid] VARCHAR (200),
[Operationname] VARCHAR (200),
[Status] VARCHAR (200),
[Eventcategory] VARCHAR (200),
[Level] VARCHAR (200),
[Time] DATETIME,
[Subscription] VARCHAR (200),
[Eventinitiatedby] VARCHAR (1000),
[Resourcetype] VARCHAR (1000),
[Resourcegroup] VARCHAR (1000)
)
WITH(
LOCATION='/Log.csv',
DATA_SOURCE=log_data,
FILE_FORMAT=TextFileFormat
)
-- drop EXTERNAL table logdata ;
SELECT * from logdata;
I tried changing the access levels, but that didn't work either.
You need the necessary access rights to the file in order to fix this problem. The simplest method is to give yourself the "Storage Blob Data Contributor" role on the storage account you are trying to query.
Also check the details you entered while creating the external table in Synapse: the location URL is case-sensitive, and make sure the file is actually present at the destination.
Correct syntax to create the external table:
CREATE MASTER KEY ENCRYPTION BY PASSWORD ='Welcome#Pratik123';
CREATE DATABASE SCOPED CREDENTIAL MyCred
WITH IDENTITY='SHARED ACCESS SIGNATURE',
SECRET='SAS Token';
CREATE EXTERNAL DATA SOURCE MyDs2
WITH (
LOCATION='abfss://containername@storageaccountname.dfs.core.windows.net/foldername if any',
CREDENTIAL=MyCred
)
CREATE EXTERNAL FILE FORMAT MyFile2 WITH(
FORMAT_TYPE=DELIMITEDTEXT,
FORMAT_OPTIONS(
FIELD_TERMINATOR=',',
FIRST_ROW=2
)
)
CREATE EXTERNAL TABLE MyData3(
[Id] varchar(20),
[NAME] VARCHAR (200),
[ADDRESS] VARCHAR (200)
)
WITH(
LOCATION='/dataaddress.csv',
DATA_SOURCE=MyDs2,
FILE_FORMAT=MyFile2
)
SELECT * from MyData3;
My output: (screenshot of the query returning the CSV rows)
Refer to this similar error example by Mike Stephenson.

Dumping PostgreSQL logs into a table

I am loading a PostgreSQL .csv log file into a table but get the error below:
COPY postgres_log FROM 'C:\Program Files\PostgreSQL\13\data\log\postgresql-2021-06-15_191640.csv' WITH csv;
ERROR: extra data after last expected column
CONTEXT: COPY postgres_log, line 1: "2021-06-15 19:16:40.261 IST,,,5532,,60c8af3f.159c,1,,2021-06-15 19:16:39 IST,,0,LOG,00000,"ending lo..."
SQL state: 22P04
I have followed the PostgreSQL documentation below:
https://www.postgresql.org/docs/10/runtime-config-logging.html
Please suggest how to load the log into a table.
You're following instructions from Postgres 10, in which the CSV log format has one fewer column than Postgres 13, which is the version you're actually running. That's why you get that error message. To fix the problem, update your postgres_log table definition as per the PG 13 docs:
https://www.postgresql.org/docs/13/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG
CREATE TABLE postgres_log
(
log_time timestamp(3) with time zone,
user_name text,
database_name text,
process_id integer,
connection_from text,
session_id text,
session_line_num bigint,
command_tag text,
session_start_time timestamp with time zone,
virtual_transaction_id text,
transaction_id bigint,
error_severity text,
sql_state_code text,
message text,
detail text,
hint text,
internal_query text,
internal_query_pos integer,
context text,
query text,
query_pos integer,
location text,
application_name text,
backend_type text,
PRIMARY KEY (session_id, session_line_num)
);
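With the table definition matching the PG 13 column list, the original COPY command should then load the log as written (the path is the one from the question):

COPY postgres_log
FROM 'C:\Program Files\PostgreSQL\13\data\log\postgresql-2021-06-15_191640.csv'
WITH csv;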

Troubleshooting ERROR: extra data after last expected column

I'm trying to add CSV data to my Postgres table:
CREATE TABLE IF NOT EXISTS Review (
    review_id BIGSERIAL NOT NULL,
    product_id_Metadata INTEGER,
    date DATE,
    summary VARCHAR(255),
    body VARCHAR(500),
    rating INTEGER,
    recommend BOOLEAN,
    reported BOOLEAN,
    reviewer_name VARCHAR(50),
    reviewer_email VARCHAR(50),
    response VARCHAR(255),
    helpfulness INTEGER,
    FOREIGN KEY (product_id_Metadata) REFERENCES Metadata(product_id)
);
Running the command:
COPY review (product_id_Metadata, summary, body, rating, recommend, reported, reviewer_name, reviewer_email, response, helpfulness) FROM '/Users/luke82601/Desktop/csv/reviews.csv' DELIMITER ',' CSV HEADER;
The CSV file has the same number of columns as my Postgres table, so I'm not sure why this error occurs.
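COPY reports "extra data after last expected column" when a row parses into more fields than the target column list. The command names only ten columns (review_id and date are omitted), so if the CSV really has one field per table column, every row carries two extra fields. A hedged staging-table check, assuming a throwaway table named review_raw:

-- Load each raw CSV line as a single value so its fields can be counted.
-- COPY's default text format splits on tabs, so a comma-separated line
-- lands intact in the one column.
CREATE TABLE review_raw (line TEXT);
COPY review_raw FROM '/Users/luke82601/Desktop/csv/reviews.csv';
-- Ten target columns need exactly nine commas per row; this heuristic
-- also counts quoted commas, so inspect the flagged rows by eye.
SELECT line
FROM review_raw
WHERE length(line) - length(replace(line, ',', '')) > 9
LIMIT 10;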

IBM Db2 on Cloud script creating tables in the wrong schema

On IBM Db2 on Cloud I have imported a script. I created a new schema under which I want the new tables created, but when I run the script, it keeps trying to create the tables in a previous schema. I am not sure how to get the script to create the tables in the new schema.
I have tried the script below without the .SQL_GROUPING_SORTING suffix, and it adds the tables to a different schema. I have changed the default schema in the Run SQL window within Db2 to SQL_GROUPING_SORTING and am now getting the error:
"KZF72118" does not have the privilege to perform operation "IMPLICIT CREATE SCHEMA". SQLCODE=-552, SQLSTATE=42502, DRIVER=4.26.14
DDL statements for the 'HR' database:
CREATE TABLE EMPLOYEES.SQL_GROUPING_SORTING (
EMP_ID CHAR(9) NOT NULL,
F_NAME VARCHAR(15) NOT NULL,
L_NAME VARCHAR(15) NOT NULL,
SSN CHAR(9),
B_DATE DATE,
SEX CHAR,
ADDRESS VARCHAR(30),
JOB_ID CHAR(9),
SALARY DECIMAL(10,2),
MANAGER_ID CHAR(9),
DEP_ID CHAR(9) NOT NULL,
PRIMARY KEY (EMP_ID));
CREATE TABLE JOB_HISTORY.SQL_GROUPING_SORTING (
EMPL_ID CHAR(9) NOT NULL,
START_DATE DATE,
JOBS_ID CHAR(9) NOT NULL,
DEPT_ID CHAR(9),
PRIMARY KEY (EMPL_ID,JOBS_ID));
CREATE TABLE JOBS.SQL_GROUPING_SORTING (
JOB_IDENT CHAR(9) NOT NULL,
JOB_TITLE VARCHAR(15) ,
MIN_SALARY DECIMAL(10,2),
MAX_SALARY DECIMAL(10,2),
PRIMARY KEY (JOB_IDENT));
CREATE TABLE DEPARTMENTS.SQL_GROUPING_SORTING (
DEPT_ID_DEP CHAR(9) NOT NULL,
DEP_NAME VARCHAR(15) ,
MANAGER_ID CHAR(9),
LOC_ID CHAR(9),
PRIMARY KEY (DEPT_ID_DEP));
CREATE TABLE LOCATIONS.SQL_GROUPING_SORTING (
LOCT_ID CHAR(9) NOT NULL,
DEP_ID_LOC CHAR(9) NOT NULL,
PRIMARY KEY (LOCT_ID,DEP_ID_LOC));
With the Db2 on Cloud Lite plan you get a single database schema, so the only schema you can use is the one that matches your user name, in your case KZF72118.
Note also that in Db2 the qualifier before the dot is the schema, so CREATE TABLE EMPLOYEES.SQL_GROUPING_SORTING asks for a table named SQL_GROUPING_SORTING in a schema named EMPLOYEES; since that schema does not exist and the Lite plan forbids creating it implicitly, you get SQLCODE=-552.
Create your tables without a schema name and they will be created in schema KZF72118. You would need one of the other plans to remove this restriction.
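For example, the first table from the script with an unqualified name (a minimal sketch; on the Lite plan it lands in the default schema, KZF72118):

CREATE TABLE EMPLOYEES (
    EMP_ID CHAR(9) NOT NULL,
    F_NAME VARCHAR(15) NOT NULL,
    L_NAME VARCHAR(15) NOT NULL,
    SSN CHAR(9),
    B_DATE DATE,
    SEX CHAR,
    ADDRESS VARCHAR(30),
    JOB_ID CHAR(9),
    SALARY DECIMAL(10,2),
    MANAGER_ID CHAR(9),
    DEP_ID CHAR(9) NOT NULL,
    PRIMARY KEY (EMP_ID));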

PostgreSQL displaying an integer that was inserted without commas, with commas (formatted like currency or similar)

SQL Server guy, new to PostgreSQL. I created the table below, performed the insert below, then ran a SELECT to see the data, yet the row shows the integer formatted with commas breaking up the digits. Is this just a formatting style in the HeidiSQL utility I'm using, or is the data actually being stored as x,xxx,xxx,xxx rather than xxxxxxxxxx?
Table:
CREATE TABLE customer (
business_partner_id INTEGER PRIMARY KEY,
first_name VARCHAR(100),
last_name VARCHAR(100),
organisation_name VARCHAR(200),
date_of_birth DATE,
gender VARCHAR(50),
customer_type VARCHAR(50),
store_joined VARCHAR(10),
store_joined_date DATE,
created_date_time TIMESTAMP,
updated_date_time TIMESTAMP);
Insert:
-- Insert a customer
INSERT INTO customer VALUES
(1029884766,'James','Matson','Unknown','1980-02-17','M','Standard','303',CURRENT_DATE,CURRENT_TIMESTAMP,CURRENT_TIMESTAMP);
Query results: (screenshot: HeidiSQL displays business_partner_id as 1,029,884,766)
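The value is stored as a plain four-byte integer; PostgreSQL keeps no display formatting with it, so any digit grouping comes from the client. A hedged check, casting to text on the server so the client has nothing to format:

-- If as_text comes back as 1029884766 with no commas, the x,xxx,xxx,xxx
-- grouping is purely HeidiSQL's display formatting, not stored data.
SELECT business_partner_id,
       business_partner_id::text AS as_text
FROM customer;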