format SQL date string - postgresql

I'm trying to get a date formatted between quotes in a SELECT format() query:
select format('CREATE TABLE temporary_table AS
SELECT id FROM table WHERE created >=%I ORDER BY 1 ASC LIMIT 1','2021-04-01');
I'm getting this:
CREATE TABLE temporary_table AS
SELECT id FROM table WHERE created >="2021-04-01" ORDER BY 1 ASC LIMIT 1;
But I actually want the date between single quotes (as the output above won't work in an EXECUTE).
How can I achieve this?

For single-quoted values you need the specifier %L for format(). Like:
SELECT format('CREATE TABLE temporary_table AS
SELECT id FROM table WHERE created >= %L ORDER BY 1 LIMIT 1','2021-04-01');
See:
Insert text with single quotes in PostgreSQL
Of course that only makes sense if you parameterize the input. You wouldn't bother to use format() for a constant date to begin with.
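For example, here is a minimal sketch of how it might be used with an actual parameter inside PL/pgSQL (the variable and table names are made up for illustration):
DO
$do$
DECLARE
   _cutoff date := '2021-04-01';   -- hypothetical parameter
   _sql    text;
BEGIN
   _sql := format('CREATE TABLE temporary_table AS
                   SELECT id FROM my_table WHERE created >= %L ORDER BY 1 LIMIT 1', _cutoff);
   RAISE NOTICE '%', _sql;   -- the date appears as a single-quoted literal
   EXECUTE _sql;
END
$do$;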

Related

How to properly insert Timestamp using plpgsql and EXECUTE format...USING dynamic SQL

I have a dynamic SQL statement I'm building up that is something like
my_interval_var := interval '60 days';
sql_query := format($f$ AND NOT EXISTS (SELECT 1 FROM some_table st WHERE st.%s = %s.id AND st.expired_date >= %L )$f$, var1, var2, now() - my_interval_var);
Regarding my first question, it seems to insert the timestamp correctly after the now() - my_interval_var computation. However, I just want to make sure I don't need to cast anything, because the only way I could get it to work was with %L, which is the string-literal specifier. Or does Postgres allow direct comparisons with strings that represent a time, without a cast? Like:
some_column <= '2021-12-31 00:00:00'; -- is a ::timestamp cast needed?
Secondly, regarding the sql_query variable that I concatenated an SQL string into above, I actually wanted to skip the format() I did there and directly inject the sql_query variable into an EXECUTE format() ... USING statement.
I couldn't get it to work, but something like this:
EXECUTE format($f$ SELECT *
FROM %I tbl_alias
WHERE tbl_alias.%s = %L
%s $f$) USING var1, var2, var3, sql_query;
Is it possible to leave the dynamic SQL placeholders %I, %L and %s inside the variable and format it at the EXECUTE level? Something tells me this isn't possible, but it would be really cool.
One last question I didn't want to add, but I feel someone might have a quick answer.
I was using this:
FOR temprecord IN EXECUTE format('
SELECT myCol1, myCol2, myCol3
FROM %I tbl', var1)
LOOP
EXECUTE temprecord.someColumnOnMyTbl;
END LOOP;
...but I could not for the life of me get the EXECUTE temprecord.someColumnOnMyTbl statement to work when I made the query dynamic. I tried everything: identifiers, format(), USING...
I thought columns were strings like %s, because I do that for columns all the time when they are aliased, like alias.%s = 'some string literal'.
Anyway, I couldn't get it to work. I wanted to make the column name dynamic and tried all of these:
EXECUTE format($f$ %I.%s $f$, var1, var2);
EXECUTE format($f$ %$1.%$2 $f$) USING var1, var2;
EXECUTE format($f$ %I.someColumnOn%s $f$, var1, var2);
EXECUTE format($f$ $1.someColumnOn$2 $f$) USING var1, var2;
Anyway, I tried more stuff than that, but I actually got some data from the DB when I made the temprecord variable an %I. However, I am selecting 3 columns, and it looked like something got jacked up with the second identifier, because I got a syntax error and it looked like it was trying to concatenate all 3 columns of the query results...
I did try hardcoding it and that worked fine... any help appreciated!
A string literal is a value of unknown type. Postgres always casts it to some target binary format; the type is deduced from context. When you use the format() function with the %L placeholder, any binary value is converted to a string and escaped as a Postgres string literal (protection against syntax errors and SQL injection). When you use the USING clause, the binary value is passed directly to the executor. That is a little bit faster, and there is no possibility of losing information in a cast to string. Apart from these points, the real effect of %L and the USING clause is almost the same.
The type of your variable is timestamp. The type of the expired_date column is probably date, so a conversion from timestamp to date is necessary.
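If you would rather make that conversion explicit, here is a sketch based on the snippet from the question (assuming expired_date really is a date column; %I is used for the identifiers here, which is safer than %s):
sql_query := format($f$ AND NOT EXISTS (SELECT 1 FROM some_table st WHERE st.%I = %I.id AND st.expired_date >= %L )$f$, var1, var2, (now() - my_interval_var)::date);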
The format() function is just a string function; it just builds a string. For better readability it supports placeholders that ensure correct escaping and a correct resulting SQL string. %L is the same as calling quote_literal(), and %I is the same as quote_ident() (for column or table names). %s inserts the string without escaping or quoting. The result of format() (when you use it in an EXECUTE command) should be a valid SQL statement. You can print it to the debug output with the RAISE NOTICE command, which is usually a good idea:
DECLARE
query text;
x date DEFAULT current_date;
y int;
BEGIN
query := format('.... WHERE inserted = $1', ...);
RAISE NOTICE 'dynamic query will be: %', query;
EXECUTE query USING x INTO y;
...
The USING clause allows using parameters in dynamic SQL (the EXECUTE statement). Usually, format()'s placeholders should be used for table or column names, and USING for everything else.
For types like date and timestamp (basic scalar types) the following two executions will be 99.99% the same:
EXECUTE format('select count(*) from foo where inserted = %L', current_date) INTO ..
EXECUTE 'select count(*) from foo where inserted = $1' USING current_date INTO ..
You cannot use query parameters in column-name or table-name positions; that is a limit of the USING clause. But for all other cases, this clause should be preferred.
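Putting both together, here is a sketch with made-up identifiers: format() handles the table and column names, and USING passes the value:
DO
$do$
DECLARE
   _tbl text := 'some_table';     -- hypothetical table name
   _col text := 'expired_date';   -- hypothetical column name
   _cnt bigint;
BEGIN
   EXECUTE format('SELECT count(*) FROM %I WHERE %I >= $1', _tbl, _col)
   INTO _cnt
   USING now() - interval '60 days';
   RAISE NOTICE 'count: %', _cnt;
END
$do$;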

Postgresql not supporting timestamp value on query

DECLARE
MAX_upd_date_Entity_Incident timestamp without time zone;
MAX_upd_date_Entity_Incident := (SELECT LAST_UPDATE_DATE::timestamp FROM
test.table_1 where TABLE_NAME='mac_incidents_d');
execute 'insert into test.table2 (column_name,schema_name,tablename)
values(''col1'',''col2'',''col3'') from test.table3 X where X.dl_upd_ts::timestamp > '||
MAX_upd_date_Entity_Incident;
The above query is not getting executed; I get an error on the timestamp value, and it's unable to read the variable MAX_upd_date_Entity_Incident.
Please suggest whether I need to cast the timestamp to another format.
I am able to execute the query in the SQL editor, but not using EXECUTE.
Don't pass values as string literals. Use placeholders and pass them as "native" values.
Additionally, you can't use the VALUES clause if you want to select the source values from a table. An INSERT statement is either insert into tablename (column_one, column_two) values (value_one, value_two) or insert into tablename (column_one, column_two) select c1, c2 from some_table
I also don't see the need for dynamic SQL to begin with:
insert into test.table2 (column_name,schema_name,tablename)
select col1,col2,col3
from test.table3 X
where X.dl_upd_ts::timestamp > MAX_upd_date_Entity_Incident;
If you oversimplified your example and you indeed need dynamic SQL you should use something like this:
execute 'insert into test.table2 (column_name,schema_name,tablename)
select col1,col2,col3 from test.table3 X where X.dl_upd_ts::timestamp > $1'
using MAX_upd_date_Entity_Incident;

How to update column based on column name in postgres?

I've narrowed it down to two possibilities: dynamic SQL and using a CASE statement.
However, I've failed with both of these.
I simply don't understand dynamic SQL, or how I would use it in my case.
This is my attempt using case statements; one of many failed variations.
SELECT column_name,
CASE WHEN column_name = 'address' THEN (**update statement gives syntax error within here**)
END
FROM information_schema.columns
WHERE table_name = 'employees';
As an overview, I'm using Axios to talk to my Node server, which is making calls to my Heroku database using Massivejs.
Maybe this isn't the way to go - so here's my main problem:
I've ran into troubles because the values I'm planning on using as column names are sent to my server as strings. The exact call that I've been trying to use is
update employees
set $1 = $2
where employee_id = $3;
Once again, I'm passing into those using massive.
I get the error back { error: syntax error at or near "'address'"} because my incoming values are strings. My thought process was that the above statement would allow me to use variables because 'address' is encapsulated by quotes.
But alas, my thought process has failed me.
This seems to be close to answering my question, but I can't seem to figure out what to do in my case if using dynamic SQL.
How to use dynamic column names in an UPDATE or SELECT statement in a function?
Thanks in advance.
I will show you a way to do this by using a function.
First, we create the employees table:
CREATE TABLE employees(
id BIGSERIAL PRIMARY KEY,
column1 TEXT,
column2 TEXT
);
Next, we create a function that requires three parameters:
columnName - the name of the column that needs to be updated
columnValue - the new value to which the column needs to be updated
employeeId - the id of the employee that will be updated
By using the format function (with %I for the column name and %L for the values) we generate the update query as a string and use the EXECUTE command to execute the query.
Here is the code of the function.
CREATE OR REPLACE FUNCTION update_columns_on_employee(columnName TEXT, columnValue TEXT, employeeId BIGINT)
RETURNS VOID AS
$$
DECLARE update_statement TEXT := format('UPDATE employees SET %I = %L WHERE id = %L', columnName, columnValue, employeeId);
BEGIN
EXECUTE update_statement;
end;
$$ LANGUAGE plpgsql;
Now, let's insert some data into the employees table:
INSERT INTO employees(column1, column2) VALUES ('column1_start_value','column2_start_value');
We now have an employee with an id of 1 whose column1 value is 'column1_start_value' and whose column2 value is 'column2_start_value'.
If we want to update the value of column2 from 'column2_start_value' to 'column2_new_value', all we have to do is execute the following call:
SELECT * FROM update_columns_on_employee('column2','column2_new_value',1);
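And a quick check afterwards (a plain query on the table from above):
SELECT id, column1, column2 FROM employees WHERE id = 1;
-- column2 should now show 'column2_new_value'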

Postgres: Can I create an index to use in the SELECT clause?

I have defined a function that determines the timezone from table tz_world for a set of lon, lat values:
create function get_timezone(numeric, numeric)
returns character varying(30) as $$
select tzid from tz_world where ST_Contains(geom, ST_MakePoint($1, $2));
$$ language SQL immutable;
Now I would like to use this function in the SELECT clause of a query on a different table:
select get_timezone(lon, lat) from event where...;
The function is rather slow, so I tried using an index to speed things up:
create index event_timezone_idx on event (get_timezone(event.lon, event.lat));
While this speeds up queries where the function is used in the WHERE clause, it has no effect on the variant above where get_timezone(lon, lat) is used in the SELECT clause.
Is it possible to rephrase the query and/or index to speed up the timezone determination?
Update
Thank you for the answers!! I decided to include an extra column for the timezone in the end and populate it when creating/updating the events.
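For reference, a minimal sketch of that extra-column approach, reusing the get_timezone() function from the question (the column and trigger names are made up):
ALTER TABLE event ADD COLUMN timezone varchar(30);
-- backfill existing rows
UPDATE event SET timezone = get_timezone(lon, lat);
-- keep it in sync on insert/update
CREATE FUNCTION event_set_timezone() RETURNS trigger AS $$
BEGIN
   NEW.timezone := get_timezone(NEW.lon, NEW.lat);
   RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_event_set_timezone
BEFORE INSERT OR UPDATE OF lon, lat ON event
FOR EACH ROW EXECUTE FUNCTION event_set_timezone();   -- use EXECUTE PROCEDURE on PostgreSQL 10 and older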
I would recommend creating a local temporary table from the part of the SELECT that you want the index on, and then creating the index on that temporary table:
CREATE LOCAL TEMPORARY TABLE temp_table AS (
select
.
.
.
);
CREATE INDEX temp_table_idx
ON temp_table
USING btree
(col1,col2,....);
Otherwise, write out what you want your WHERE condition to be; indexes are only used for filtering (i.e. in WHERE clauses), and the indexed expressions should be exactly the ones you are filtering on.

Split Cell using SSIS

I'm looking to use SSIS to transform the data held in a single source table. One of the cells holds a string of characters. For example:
##/\/\/\/\/\##HHHHHHBBBB##/\/\/\/\/\
There's also another cell on the same row which contains a date.
Basically I want each character within that string to be transferred to a new table as a row on its own. The first two characters represent the date given in the other cell, the next two characters represent the following day, and so on. So as well as having each character on its own, I would also want to increment the date and store that too.
Any idea how I would go about doing this, or even whether SSIS is the correct tool to be using?
Many Thanks
I wonder if you'd be better off running this through a split-string function in SQL first? That way you'll be getting rows for each character alongside the date, and then you can just output it straight to a destination.
I've created a function to facilitate this:
CREATE FUNCTION [dbo].[udf_SplitStringIntoRows](@text varchar(max))
RETURNS @tbl TABLE ([value] char(1) NOT NULL)
AS
BEGIN
WHILE len(@text) > 0
BEGIN
INSERT INTO @tbl
SELECT left(@text,1)
SET @text = RIGHT(@text,len(@text)-1)
END
RETURN
END
Then, to test it, I created a quick table variable with your data in it:
DECLARE @source as TABLE([value] varchar(max), [date] datetime)
INSERT INTO @source
SELECT '##/\/\/\/\/\##HHHHHHBBBB##/\/\/\/\/\', getdate()
UNION
SELECT '##/\/\/\/\/\##HHHHHHBBBB##/\/\/\/\/\', getdate()+1
UNION
SELECT '##/\/\/\/\/\##HHHHHHBBBB##/\/\/\/\/\', getdate()+2
Then cross applied the function to this dataset:
SELECT d.[value], s.date
FROM @source s
CROSS APPLY dbo.[udf_SplitStringIntoRows](s.value) d
Which should give you the source dataset you require to further process in SSIS.