Creating a view with explicit column datatypes on Amazon Redshift

Guessing this is straightforward, but I can't get it to run. The issue I am having is explicitly setting column data types in a view.
I need to do this as I will be unioning the view to another table and need to match that table's data types.
Below is the code I have tried to run (I have tried without the SORTKEY as well, but it still won't run):
DROP VIEW IF EXISTS testing.test_view;
CREATE OR REPLACE VIEW testing.test_view
(
channel VARCHAR(80) ENCODE zstd,
trans_date TIMESTAMP ENCODE zstd
)
SORTKEY
(
trans_date
)
AS
SELECT channel,
trans_date
from (
SELECT to_date(date,'DD-MM-YYYY') as trans_date,channel
FROM testing.plan
group by date, channel
)
group by trans_date,channel;
The error message I am getting:
An error occurred when executing the SQL command: CREATE OR REPLACE
VIEW trading.trading_squads_plan_v_test ( channel , trans_date )
AS
SELECT channel VARCHAR(80) ENCODE zstd,
trans_date TIM...
Amazon Invalid operation: syntax error at or near "VARCHAR"
Position: 106;
Is this an issue with views, where you can't set datatypes? If so, is there a workaround?
Thanks

As Jon pointed out, my error was trying to set a datatype at the view level, which is not possible, as a view only pulls its types from the underlying table.
So I cast the values in the SELECT instead:
DROP VIEW IF EXISTS testing.test_view;
CREATE OR REPLACE VIEW testing.test_view
(
channel,
trans_date,
source_region
)
AS
SELECT CAST(channel as varchar(80)),
CAST(trans_date as timestamp),
CAST(0 as varchar(80)) as source_region
from (
SELECT to_date(date,'DD-MM-YYYY') as trans_date, channel
FROM testing.plan
group by date, channel
) s -- alias on the derived table; Postgres-family engines generally expect one
group by trans_date, channel;
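To confirm the casts came through, you can check the types Redshift reports for the view's columns (a quick sanity check; schema and view names as above):
SELECT column_name, data_type, character_maximum_length
FROM information_schema.columns
WHERE table_schema = 'testing'
AND table_name = 'test_view';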

Related

How to get Databricks aes_encrypt to give the same output for the same input

I need to encrypt some data in Databricks. I'm currently using the built-in aes_encrypt function. If I use the SQL shown below, I get a distinct value for each record in the table: the same input does not produce the same encrypted value.
Is there a way to encrypt data in Databricks so the same input yields the same output?
drop table if exists encr;
create table encr as (
select
original_text,
base64(aes_encrypt(original_text,'abcdefabcdefabcdefabcdef')) as encrypted,
cast(aes_decrypt(unbase64(base64(aes_encrypt(original_text,'abcdefabcdefabcdefabcdef'))), 'abcdefabcdefabcdefabcdef') as string) as decrypted
from
my_table
)
;
Results:
select
count(*),
count(distinct original_text),
count(distinct encrypted)
from
encr
;
Setting the mode to 'ECB' gets the same output for the same input. The default 'GCM' mode generates a random initialization vector on every call, which is why identical inputs encrypt to different ciphertexts; ECB uses no IV, so it is deterministic (with the usual caveat that it reveals which rows share a plaintext):
https://docs.databricks.com/sql/language-manual/functions/aes_encrypt.html
drop table if exists encr;
create table encr as (
select
original_text,
base64(aes_encrypt(original_text,'abcdefabcdefabcdefabcdef','ECB')) as encrypted,
cast(aes_decrypt(unbase64(base64(aes_encrypt(original_text,'abcdefabcdefabcdefabcdef','ECB'))), 'abcdefabcdefabcdefabcdef', 'ECB') as string) as decrypted -- the decrypt mode must match
from
my_table
)
;

Inserting a null date in Postgres

I am inserting a null date with an INSERT ... SELECT FROM statement in SQL:
CREATE TABLE null_date (
id bigserial PRIMARY KEY
, some_date date
);
WITH date_data (some_date) AS (
VALUES (null)
)
INSERT INTO null_date (some_date)
SELECT some_date
FROM date_data;
and it fails with
ERROR: column "some_date" is of type date but expression is of type text
LINE 5: SELECT some_date
^
HINT: You will need to rewrite or cast the expression.
However, if I try to insert it directly, it works:
INSERT INTO null_date (some_date)
VALUES (null)
Can somebody please help me understand what's happening here? Here is the link to db<>fiddle. Thanks
The problem is that the VALUES statement, and consequently the WITH clause, treats the NULL value as type text, because PostgreSQL doesn't know which data type the NULL should be. You don't have that problem with INSERT INTO ... VALUES (...), because there PostgreSQL knows right away that the NULL value of unknown type will be inserted into a certain column, so it resolves it to the target data type.
In cases where PostgreSQL cannot guess the data type from context, it is best to use an explicit type cast:
WITH date_data (some_date) AS (
VALUES (CAST(null AS date))
)
INSERT INTO null_date (some_date)
SELECT some_date
FROM date_data;
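PostgreSQL's shorthand cast syntax does the same job, if you prefer it:
VALUES (null::date)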
PostgreSQL used to behave differently in cases like this, but commit 1e7c4bb0049 changed that in 2017. Read the commit message for an explanation.

Temp table column type error when values are null

I am using a temp table to hold some data while I tidy up a database table. Narrowed down, it goes something like this:
CREATE TEMP TABLE fw_temp AS SELECT time, version, device_id FROM fw_status;
DELETE FROM fw_status;
INSERT INTO fw_status SELECT * FROM fw_temp
On my INSERT statement I get some interesting behaviour when the values in the device_id column are null:
SQL Error [42804]: ERROR: column "device_id" is of type bigint but expression is of type text
Hint: You will need to rewrite or cast the expression.
As the error message suggests, device_id is indeed of type bigint in the table, but it appears to have lost that type when selecting from the temp table.
Making a general select,
CREATE TEMP TABLE fw_temp AS SELECT * FROM fw_status;
DELETE FROM fw_status;
INSERT INTO fw_status SELECT * FROM fw_temp
interestingly does not cause this error.
Now, the real temp table is a bit more complex, so sadly going with SELECT * is not an option. How can I get around this error?
Changing to
CREATE TEMP TABLE fw_temp AS SELECT time, version, device_id FROM fw_status;
DELETE FROM fw_status;
INSERT INTO fw_status (time, version, device_id) SELECT time, version, device_id FROM fw_temp
resolved the issue. Apparently you can mix a general select with a general insert, or a specific select with a specific insert, but not a general select with a specific insert. I am still puzzled as to what is going on; any enlightenment would be appreciated.
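The likely explanation: when INSERT has no target column list, PostgreSQL matches the selected columns to the table's columns by position, not by name. If fw_status declares its columns in a different order than the temp table's SELECT list, a value of the wrong type lands in device_id. A minimal sketch that would reproduce the error (the column order here is an assumption, not taken from the original table):
-- hypothetical declaration order: device_id comes before version
CREATE TABLE fw_status (time timestamptz, device_id bigint, version text);
CREATE TEMP TABLE fw_temp AS SELECT time, version, device_id FROM fw_status;
-- positional matching tries to put "version" (text) into "device_id" (bigint)
INSERT INTO fw_status SELECT * FROM fw_temp;
-- naming the target columns matches by name instead and succeeds
INSERT INTO fw_status (time, version, device_id)
SELECT time, version, device_id FROM fw_temp;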

Postgres RLS Policy and functions

I have an RLS policy violation on a Postgres function. I believe it's because the policy relies on rows created in the function: a SELECT command is run in the function, and the new rows are not visible to it because they are still in a transaction.
Here is the function:
CREATE FUNCTION public.create_message(organization_id int, content text, tags Int[])
RETURNS setof public.message
AS $$
-- insert message, return PK
WITH moved_rows AS (
INSERT INTO public.message (organization_id, content)
VALUES($1, $2)
RETURNING *
),
-- many to many relation
moved_tags AS (
INSERT INTO public.message_tag (message_id, tag_id)
SELECT moved_rows.id, tagInput.tag_id
FROM moved_rows, UNNEST($3) as tagInput(tag_id)
RETURNING *
)
SELECT moved_rows.* FROM moved_rows LEFT JOIN moved_tags ON moved_rows.id = moved_tags.message_id
$$ LANGUAGE sql VOLATILE STRICT;
Here is the policy:
CREATE POLICY select_if_organization
on message_tag
for select
USING ( message_id IN (
SELECT message.id
FROM message
INNER JOIN organization_user ON (organization_user.organization_id = message.organization_id)
INNER JOIN sessions ON (sessions.user_id = organization_user.user_id)
WHERE sessions.session_token = current_user_id()));
Ideas:
Add a field to the joining table to simplify the policy, but that violates normal form.
Return the user input instead of running the SELECT, but the input may be escaped, and I should be able to run a SELECT command.
Split it into two functions: create the message row, then add the message_tag rows. I'm running PostGraphile, so that means two mutations. I have foreign key relations set up between the two tables; I don't know if Graphile will do that automatically.
Error message:
ERROR: new row violates row-level security policy for table "message_tag"
CONTEXT: SQL function "create_message" statement 1
I receive the error when I run the function. I want the function to run successfully, insert one row into the message table, and turn the input array into rows for the message_tag table with message_tag.message_id = message.id, the last inserted id. I need a policy in place so users from that join relation only see their own organization's message_tag rows.
Here is another policy on the INSERT command. It allows INSERT if a user is logged in:
create policy insert_message_tag_if_author
on message_tag
for insert
with check (EXISTS (SELECT * FROM sessions WHERE sessions.session_token = current_user_id()));
According to the error message, this part of your SQL statement causes the error:
INSERT INTO public.message_tag (message_id, tag_id)
SELECT moved_rows.id, tagInput.tag_id
FROM moved_rows, UNNEST($3) as tagInput(tag_id)
RETURNING *
You need to add another policy FOR INSERT with an appropriate WITH CHECK clause.
I ended up adding a field to the joining table and creating a policy with that. That way, RLS validation does not require a row that would be created in the middle of a function.
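A sketch of that workaround: denormalize organization_id onto message_tag and recreate the SELECT policy against it directly, so RLS no longer has to join through a message row created in the same statement. Column and policy names here are illustrative, not from the original schema:
ALTER TABLE public.message_tag ADD COLUMN organization_id int;
CREATE POLICY select_if_organization_member
on message_tag
for select
USING ( organization_id IN (
SELECT organization_user.organization_id
FROM organization_user
INNER JOIN sessions ON (sessions.user_id = organization_user.user_id)
WHERE sessions.session_token = current_user_id()));
The policy expression now only references the row itself plus stable lookup tables, though the INSERT inside create_message must also populate message_tag.organization_id.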

How to insert JPEG into a SQL Server 2000 database field of image type using Transact SQL

I'm trying to figure out how to insert a .JPG file into a SQL Server 2000 database field of type image using Transact SQL. Thanks.
Use OPENROWSET:
INSERT MyTable (ImageColumnName)
SELECT BulkColumn FROM OPENROWSET (BULK 'c:\myjpeg.jpg', SINGLE_BLOB) AS X
EDITED: Whoops, you're using 2000, so the previous solution is not supported. You have to use WRITETEXT:
CREATE TABLE MyTable
(
ID INT PRIMARY KEY IDENTITY (1,1),
ImageColumnName IMAGE NULL
)
GO
-- must insert a dummy value into the image column for TEXTPTR
-- to work in next bit
DECLARE @RowId INT
INSERT MyTable (ImageColumnName) VALUES (0xFFFFFFFF)
SELECT @RowId = SCOPE_IDENTITY()
-- get a pointer value to the row+column you want to
-- write the image to
DECLARE @Pointer_Value varbinary(16)
SELECT @Pointer_Value = TEXTPTR(ImageColumnName)
FROM MyTable
WHERE Id = @RowId
-- write through the pointer. Note: WRITETEXT stores the value you pass, so
-- this stores the literal string 'c:\myjpeg.jpg', not the file's contents;
-- to load the actual file on SQL Server 2000, see the textcopy.exe answer below.
WRITETEXT MyTable.ImageColumnName @Pointer_Value 'c:\myjpeg.jpg'
There is a tool called textcopy.exe. You can find it under MSSQL\Binn, or get it with SQL Server 2000 SP4.
Alexander Chigrik wrote a nice stored procedure for using it from a SQL query:
http://www.mssqlcity.com/Articles/KnowHow/Textcopy.htm
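A typical invocation looks something like this (the flags are from textcopy's built-in help as I recall them; verify with textcopy /? on your install, and treat the server, database, and path values as placeholders):
textcopy /S myserver /U sa /P secret /D mydb /T MyTable /C ImageColumnName /W "WHERE ID = 1" /F c:\myjpeg.jpg /I
Here /I loads the file into the column, and /O would export the column out to a file.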
The stored procedure found in this tutorial worked for me:
Brief tutorial on text, ntext, and image