Using a value from a for loop in a function - postgresql

Hi, I am trying to create a function that loops from 1 to 12 and then uses the loop value to calculate a date. Here is my code so far; the loop is where the problem is.
FOR i IN 1..12 LOOP
r.duedate := alert_start_period + interval 'i month';
RAISE NOTICE 'Month % gives date of %', i, r.duedate;
END LOOP;
I know the problem is how I am referencing the value i. How do I correctly reference i from the loop so the interval is correct?
The error being thrown is:
ERROR: invalid input syntax for type interval: "i month"

You can't substitute a variable into a string literal like that. Without resorting to dynamic SQL, the cleanest way is a multiplier:
r.duedate := alert_start_period + i * INTERVAL '1' MONTH;
However, this whole idea is ugly. The loop is unnecessary. Use:
generate_series(alert_start_period, alert_start_period + INTERVAL '12' MONTH, INTERVAL '1' MONTH)
to generate the dates directly. If you must, you can loop over that, but I generally find that everything not requiring exception handling is better done as ordinary set operations rather than PL/pgSQL loops.
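The arithmetic the fixed loop performs can be sketched outside SQL too. This is an illustrative Python sketch (the start date is a made-up value); adding i whole months with day-of-month clamping mirrors what i * INTERVAL '1' MONTH does in Postgres, where date '2023-01-31' + interval '1 month' yields 2023-02-28.

```python
from datetime import date
from calendar import monthrange

def add_months(d, months):
    """Add whole months, clamping the day like Postgres month arithmetic does."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    # clamp the day to the last day of the target month
    day = min(d.day, monthrange(year, month)[1])
    return date(year, month, day)

alert_start_period = date(2023, 1, 31)  # hypothetical start date
due_dates = [add_months(alert_start_period, i) for i in range(1, 13)]
print(due_dates[0])  # 2023-02-28: the day is clamped, as in Postgres
```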

Related

PostgreSQL: How do you set a "counter" variable as an array index in a while loop statement?

I am trying to implement a while loop that will create new table rows for each element in an array.
In the code below, prc_knw is a user-input expression that is structured as an array. I want to take each of the elements in prc_knw and separate them into new rows in my table processknowledgeentry where each row has the same prc_id taken from another table. Each time the statement loops, I want it to move to the next prc_knw array element, and I want the variable counter to indicate the index.
do $$
declare
counter integer := 1;
begin
while counter <= (select array_length(prc_knw,1)) loop
INSERT INTO processknowledgeentry (prc_id,knw_id, pke_delete_ind)
VALUES ((SELECT prc_id FROM process
WHERE prc_id = (SELECT MAX(p.prc_id) FROM process AS p)),prc_knw[counter], False);
counter := counter + 1;
end loop;
end$$;
I'm stuck on how to format prc_knw[counter]. The error I get is: SyntaxError: syntax error at or near "[" LINE 14: ...ECT MAX(p.prc_id) FROM process AS p)),ARRAY[1,2,3][counter],...
I've tried formatting it like prc_knw['%',counter], and the error I receive is: IndexError: list index out of range (because I'm building this app in Python Dash and it's connected to a database in pgAdmin 4).
Hope you can help, thank you! Also feel free to let me know if there is a better approach to this.
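In SQL itself this fan-out is what a single INSERT ... SELECT over unnest() achieves, but the intended row shape can be sketched in plain Python (the prc_id value here is a stand-in for the MAX(prc_id) subquery):

```python
# Fan each element of prc_knw out into its own row, paired with one prc_id.
prc_id = 42                      # stands in for SELECT MAX(prc_id) FROM process
prc_knw = [1, 2, 3]              # the user-supplied array

rows = [(prc_id, knw, False) for knw in prc_knw]
# Each tuple maps to one INSERT INTO processknowledgeentry row.
print(rows)  # [(42, 1, False), (42, 2, False), (42, 3, False)]
```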

PostgreSQL: Multiple Row Result Using WHILE LOOP

I have data as seen below.
The data show the fuel tank level at each timestamp: ftakhir is the fuel tank level at ga.ts and ftawal is the fuel tank level at gb.ts. As you can see, there is a gap between gb.ts and ga.ts. I want to know the fuel tank level at each timestamp recorded between ga.ts and gb.ts.
I use a WHILE loop:
do $$
declare
t timestamp := '2020-11-30 13:53:10.596645';
s timestamp := '2020-11-30 14:04:10.797056';
id varchar := '05b92dc749ed3a09b35273cac3f3d68aabfcf737';
begin
while t <= s loop
PERFORM g.time FROM gpsapi g
WHERE g.idalat = id AND g.time >= t AND g.time <= t + '1 minute 10 seconds'::INTERVAL;
t := t + '1 minutes 10 seconds'::INTERVAL;
end loop;
end$$;
Using the while loop, I want to get multiple rows of timestamps between ga.ts and gb.ts, and then calculate the traveled distance between the known timestamps.
How do I make this a function so that it returns multiple rows? I am a bit confused about how to combine SELECT with a while loop in PostgreSQL.
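The loop is walking an arithmetic series of timestamps, which generate_series(t, s, step) would return in a single query without any procedural code. A Python sketch of that series, using the question's values (fractional seconds dropped for readability):

```python
# Timestamps from t to s in steps of 1 minute 10 seconds - the same series
# that generate_series(t, s, interval '1 minute 10 seconds') returns.
from datetime import datetime, timedelta

t = datetime(2020, 11, 30, 13, 53, 10)
s = datetime(2020, 11, 30, 14, 4, 10)
step = timedelta(minutes=1, seconds=10)

series = []
while t <= s:
    series.append(t)
    t += step
print(len(series))  # number of sampled timestamps in the window
```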

How to avoid multiple insert in PostgreSQL

In my query I am using a for loop. On every iteration of the loop, some values are inserted into a table. This is time-consuming because the loop runs over many records, so an insert happens on every iteration. Is there a way to perform the insertion once, after the for loop has finished?
FOR i IN 1..10000 LOOP
-- coding
INSERT INTO datas.tb VALUES (j, predictednode); -- j and predictednode are variables that change on every iteration
END LOOP;
Instead of inserting on every iteration, I want the insertion to happen once at the end.
If you show how the variables are calculated, it may be possible to build something like this:
insert into datas.tb
select
calculate_j_here,
calculate_predicted_node_here
from generate_series(1, 10000)
One possible solution is to build a large VALUES string. In Java, something like:
StringBuilder buf = new StringBuilder(100000); // big enough?
for ( int i = 1; i <= 10000; ++i ) {
buf.append("(")
.append(j)
.append(",")
.append(predictedNode)
.append("),"); // whatever j and predictedNode are
}
buf.setCharAt(buf.length() - 1, ' '); // kill the last comma
String query = "INSERT INTO datas.tb VALUES " + buf.toString() + ";";
// send the query to the DB, just once
The fact that j and predictedNode appear to be constant has me a little worried, though. Why are you inserting a constant 10,000 times?
Another approach is to do the predicting in a Postgres procedural language, and have the DB itself calculate the value on insert.
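Either way, the core pattern is the same: compute all the rows first, then write once. A Python sketch of that accumulate-then-flush shape (the predict() helper and the psycopg2-style %s placeholders are illustrative assumptions, not the asker's actual code):

```python
# Compute all rows in the loop, then issue a single multi-row INSERT
# instead of 10000 single-row ones.
def predict(i):                    # stands in for whatever computes predictednode
    return i * 2

rows = [(i, predict(i)) for i in range(1, 10001)]

# One statement, one round trip; placeholders avoid splicing values into SQL text.
placeholders = ", ".join(["(%s, %s)"] * len(rows))
query = "INSERT INTO datas.tb VALUES " + placeholders
params = [v for row in rows for v in row]
# cursor.execute(query, params)  # with e.g. psycopg2; shown here without a DB
print(len(rows), len(params))  # 10000 20000
```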

Print all dates of current year on SQL Server 2008R2

The following code prints all dates of the current year on SQL Server 2008 R2:
with x (dy, yr)
as (
select dy, year (dy) yr
from (
select getdate () - datepart (dy, getdate ()) + 1 dy
-- the first date of the current year
) tmp1
union all
select dateadd (dd, 1, dy), yr
from x
where year (dateadd (dd, 1, dy)) = yr
)
select x.dy
from x
option (maxrecursion 400)
But there are some points that I cannot understand.
As far as I can see, the first date should have been printed 400 times; are all the repetitions filtered out?
When I change 400 to a value less than 364, the following error is returned:
[Err] 42000 - [SQL Server]The statement terminated. The maximum
recursion 363 has been exhausted before statement completion.
But how does the processor know when the statement is going to complete?
What you are dealing with here is a recursive CTE. You should probably just read more about how it works.
Basically,
It obtains the first row set from the anchor part (the first SELECT, the left part of UNION ALL).
That row set becomes aliased as x in the second SELECT (the right part of UNION ALL), called the recursive part.
The recursive part produces another row set based on x, which becomes a new x at the next iteration. That is, the new x is not the combined row set of the initial x and the last result set; it is only the last result set.
The previous step is repeated again against the new x, and the cycle goes on until either of these is true:
another iteration produces no result set;
the MAXRECURSION limit is reached.
The final result set consists of all the partial result sets obtained from both parts of the recursive CTE.
Applying the above to your particular query:
The first SELECT produces one row containing this year's 1st of January (the date), and that becomes the first x table.
For every row of x the second SELECT produces a row containing the corresponding next date if it belongs to the same year. So, the recursive part's first iteration effectively gives us the 2nd of January. According to the above description, the result set becomes a new x.
The following iteration results in the 3rd of January, the next one produces the 4th and so on.
If the MAXRECURSION option value has safely allowed us to arrive at the moment when x contains the 31st of December, then another iteration will reveal that the next day in fact belongs to a different year. That will result in an empty row set produced, which in turn will result in termination of the recursive CTE's execution.
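The anchor/recurse/stop cycle described above can be sketched procedurally in Python (using 2024 as an example year):

```python
# Start from the anchor row set, repeatedly apply the recursive part to the
# *last* row set only, and collect every partial result until an iteration
# comes back empty - the same control flow as the recursive CTE.
from datetime import date, timedelta

def anchor():
    return [date(2024, 1, 1)]          # this year's 1st of January (example year)

def recursive_part(x):
    # next date for every row of x, kept only if it belongs to the same year
    return [d + timedelta(days=1) for d in x
            if (d + timedelta(days=1)).year == d.year]

result = x = anchor()
while x:                               # stop when an iteration yields no rows
    x = recursive_part(x)
    result += x
print(len(result))  # 366 dates for 2024, a leap year
```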
This is not an answer, just another way of writing your SQL. Andriy M has given you a cool answer; you should give him the credit for the right answer.
;with x (dy)
as (
select dateadd(year, datediff(year, 0, getdate()), 0) dy
union all
select dy + 1
from x
where year (dy) = year(dy+1)
)
select x.dy
from x
option (maxrecursion 400)

Unexpected SQL results: string vs. direct SQL

Working SQL
The following code works as expected, returning two columns of data (a row number and a valid value):
sql_amounts := '
SELECT
row_number() OVER (ORDER BY taken)::integer,
avg( amount )::double precision
FROM
x_function( '|| id || ', 25 ) ca,
x_table m
WHERE
m.category_id = 1 AND
m.location_id = ca.id AND
extract( month from m.taken ) = 1 AND
extract( day from m.taken ) = 1
GROUP BY
m.taken
ORDER BY
m.taken';
FOR r, amount IN EXECUTE sql_amounts LOOP
SELECT array_append( v_row, r::integer ) INTO v_row;
SELECT array_append( v_amount, amount::double precision ) INTO v_amount;
END LOOP;
Non-Working SQL
The following code does not work as expected; the first column is a row number, the second column is NULL.
FOR r, amount IN
SELECT
row_number() OVER (ORDER BY taken)::integer,
avg( amount )::double precision
FROM
x_function( id, 25 ) ca,
x_table m
WHERE
m.category_id = 1 AND
m.location_id = ca.id AND
extract( month from m.taken ) = 1 AND
extract( day from m.taken ) = 1
GROUP BY
m.taken
ORDER BY
m.taken
LOOP
SELECT array_append( v_row, r::integer ) INTO v_row;
SELECT array_append( v_amount, amount::double precision ) INTO v_amount;
END LOOP;
Question
Why does the non-working code return a NULL value for the second column when the query itself returns two valid columns? (This question is mostly academic; if there is a way to express the query without resorting to wrapping it in a text string, that would be great to know.)
Full Code
http://pastebin.com/hgV8f8gL
Software
PostgreSQL 8.4
Thank you.
The two statements aren't strictly equivalent.
Assuming id = 4, the first one gets planned/prepared on each pass, and behaves like:
prepare dyn_stmt as '... x_function( 4, 25 ) ...'; execute dyn_stmt;
The other gets planned/prepared on the first pass only, and behaves more like:
prepare stc_stmt as '... x_function( $1, 25 ) ...'; execute stc_stmt(4);
(The loop will actually make it prepare a cursor for the above, but that's beside the point for our purposes.)
A number of factors can make the two yield different results.
Search-path changes made before calling the procedure will be ignored by the second call, in particular if they make x_table point to something different.
Constants of all kinds and calls to immutable functions are "hard-wired" in the second call's plan.
Consider this as an illustration of these side-effects:
deallocate all;
begin;
prepare good as select now();
prepare bad as select current_timestamp;
execute good; -- yields the current timestamp
execute bad; -- yields the current timestamp
commit;
execute good; -- yields the current timestamp
execute bad; -- yields the timestamp at which it was prepared
Why the two aren't returning the same results in your case would depend on the context (you only posted part of your pl/pgsql function, so it's hard to tell), but my guess is you're running into a variation of the above kind of problem.
From Tom Lane:
I think the problem is that you're assuming "amount" will refer to a table column of the query, when actually it's a local variable of the plpgsql function. The second interpretation will take precedence unless you qualify the column reference with the table's name/alias.
Note: PG 9.0 will throw an error by default when there is an ambiguity of this type.
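A loose Python analogy of that shadowing (this illustrates name precedence only; it is not plpgsql semantics): a local variable of the same name wins over the "column", so the aggregate never sees the table's values.

```python
# The function's local variable, still unset - like the plpgsql variable `amount`.
amount = None

def avg_inputs(rows):
    # Inside the "query", the unqualified name `amount` resolves to the
    # enclosing variable, not to each row's value - so every element is None,
    # just as avg(amount) saw NULL for every row in the question.
    return [amount for _ in rows]

print(avg_inputs([10, 20, 30]))  # [None, None, None]
```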