Extracting Double Values from Blob/Object to Rows - jboss

I have a query that is related to this topic:
https://developer.jboss.org/thread/277610
Prior to reaching the comma separated values stage, the values are actually stored as a blob.
There is a function fetchBlobtoString(Blob, string, VARIADIC start_end integer) returns String that takes the blob input and converts it to comma-separated values, as shown in that post.
The issue is that the string type is limited to 4000 characters, so the data gets truncated and not all values show up. What would be the best way to extract the values (which are doubles) and convert them to rows, similar to the post?
Would converting it into an object instead of a string improve performance, using the following function as an example?
fetchElementValueFromBlob(protobufBlob Blob, origName string) returns object
I have tried iterating over the items in the blob using the getItem function and adding them to a temp table, but it is slow and I get the following error if I go more than 15-20 iterations:
Error: TEIID30504 Remote org.teiid.core.TeiidProcessingException: TEIID30504 petrelDS: TEIID60000 javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:/petrelDS
SQLState: 50000
ErrorCode: 30504
BEGIN
    DECLARE integer VARIABLES.counter = 0;
    DECLARE integer VARIABLES.pts = 100;
    WHILE (VARIABLES.counter < VARIABLES.pts)
    BEGIN
        SELECT wellbore_uwi AS wb_uwi, getItem(fetchBlob(data, 'md'), VARIABLES.counter) AS depth
            INTO TEMP
            FROM DirectionalSurvey
            WHERE wellbore_uwi = '1234567890';
        VARIABLES.counter = (VARIABLES.counter + 1);
    END
    SELECT TEMP.wb_uwi, TEMP.depth FROM TEMP;
END
If I remove the getItem() function, the error goes away.
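If the pool is being exhausted because each loop pass issues its own source query, one possible restructuring is to query the source only once and keep the loop entirely against local temp tables. This is a rough, untested sketch: it assumes Teiid's implicitly created # temp tables, that the object returned by fetchBlob can be stored in a temp table column, and the column names (md_obj, #src, #depths) are made up for illustration.

BEGIN
    DECLARE integer VARIABLES.counter = 0;
    DECLARE integer VARIABLES.pts = 100;
    /* one query against petrelDS: cache the uwi and the fetched object locally */
    SELECT wellbore_uwi AS wb_uwi, fetchBlob(data, 'md') AS md_obj
        INTO #src
        FROM DirectionalSurvey
        WHERE wellbore_uwi = '1234567890';
    WHILE (VARIABLES.counter < VARIABLES.pts)
    BEGIN
        /* reads from #src stay in the engine, so no source connection is needed per pass */
        INSERT INTO #depths
            SELECT wb_uwi, getItem(md_obj, VARIABLES.counter) AS depth FROM #src;
        VARIABLES.counter = (VARIABLES.counter + 1);
    END
    SELECT wb_uwi, depth FROM #depths;
END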

Related

PostgreSQL: How do you set a "counter" variable as an array index in a while loop statement?

I am trying to implement a while loop that will create new table rows for each element in an array.
In the code below, prc_knw is a user-input expression that is structured as an array. I want to take each of the elements in prc_knw and separate them into new rows in my table processknowledgeentry where each row has the same prc_id taken from another table. Each time the statement loops, I want it to move to the next prc_knw array element, and I want the variable counter to indicate the index.
do $$
declare
    counter integer := 1;
begin
    while counter <= (select array_length(prc_knw, 1)) loop
        INSERT INTO processknowledgeentry (prc_id, knw_id, pke_delete_ind)
        VALUES ((SELECT prc_id FROM process
                 WHERE prc_id = (SELECT MAX(p.prc_id) FROM process AS p)),
                prc_knw[counter],
                False);
        counter := counter + 1;
    end loop;
end$$;
I'm stuck on how to format prc_knw[counter]. The error I get is: SyntaxError: syntax error at or near "[" LINE 14: ...ECT MAX(p.prc_id) FROM process AS p)),ARRAY[1,2,3][counter],...
I've tried formatting it like prc_knw['%',counter], and the error I receive is: IndexError: list index out of range (because I'm building this app in Python Dash and it's connected to a database in pgAdmin 4).
Hope you can help, thank you! Also feel free to let me know if there is a better approach to this.
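For reference, the syntax error in the pasted expansion comes from subscripting a bare array constructor: ARRAY[1,2,3][counter] is invalid, while (ARRAY[1,2,3])[counter] works. A loop-free sketch using unnest, with a hypothetical '{1,2,3}' literal standing in for the user-supplied prc_knw (assuming integer elements, as in the error message), would be:

INSERT INTO processknowledgeentry (prc_id, knw_id, pke_delete_ind)
SELECT (SELECT MAX(p.prc_id) FROM process AS p),  -- same prc_id for every new row
       knw,                                       -- one row per array element
       False
FROM unnest('{1,2,3}'::integer[]) AS knw;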

Text and jsonb concatenation in a single postgresql query

How can I concatenate a string inside of a concatenated jsonb object in postgresql? In other words, I am using the JSONb concatenate operator as well as the text concatenate operator in the same query and running into trouble.
Or... if there is a totally different query I should be executing, I'd appreciate hearing suggestions. The goal is to update a row containing a jsonb column. We don't want to overwrite existing key value pairs in the jsonb column that are not provided in the query and we also want to update multiple rows at once.
My query:
update contacts as c
set data = data || '{"geomatch": "MATCH","latitude":'||v.latitude||'}'
from (values (16247746,40.814140),
             (16247747,20.900840),
             (16247748,20.890570)) as v(contact_id,latitude)
where c.contact_id = v.contact_id
The Error:
ERROR: invalid input syntax for type json
LINE 85: update contacts as c set data = data || '{"geomatch": "MATCH...
^
DETAIL: The input string ended unexpectedly.
CONTEXT: JSON data, line 1: {"geomatch": "MATCH","latitude":
SQL state: 22P02
Character: 4573
You might be looking for
SET data = data || ('{"geomatch": "MATCH","latitude":'||v.latitude||'}')::jsonb
-- the first || is jsonb concatenation; the two inner || are text concatenation, and the result is cast to jsonb
but that's not how one should build JSON objects - that v.latitude might not be a valid JSON literal, or even contain some injection like "", "otherKey": "oops". (Admittedly, in your example you control the values, and they're numbers so it might be fine, but it's still a bad practice). Instead, use jsonb_build_object:
SET data = data || jsonb_build_object('geomatch', 'MATCH', 'latitude', v.latitude)
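Dropped into the full statement from the question, that might look like:

update contacts as c
set data = data || jsonb_build_object('geomatch', 'MATCH', 'latitude', v.latitude)
from (values (16247746,40.814140),
             (16247747,20.900840),
             (16247748,20.890570)) as v(contact_id,latitude)
where c.contact_id = v.contact_id;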
There are two problems. The first is operator precedence preventing your concatenation of a jsonb object to what is read as a text object. The second is that the concatenation of text pieces requires a cast to jsonb.
This should work:
update contacts as c
set data = data || ('{"geomatch": "MATCH","latitude":'||v.latitude||'}')::jsonb
from (values (16247746,40.814140),
(16247747,20.900840),
(16247748,20.890570)) as v(contact_id,latitude)
where c.contact_id = v.contact_id
;

How to embed a function returning a string in a Q query?

I'm using .Q.f to format column fields from integer to float with 4 digits of precision:
fmt_price:{[val] .Q.f[4;](val*0.0001)}
select fmt_price[price] from mytable
The fmt_price works well at the q prompt, but if I embed the function in a query I get this error:
An error occurred during execution of the query. The server sent the
response: `type
The fmt_price call works if I return a float or integer variable, rather than the result of Q.f.
You need to do an each over the list. Currently you are passing a list of values to .Q.f, when it expects an atom. Something like the following is what you need:
fmt_price:{[val] .Q.f[4;] each (val*0.0001)}

Trying to create aggregate function in PostgreSQL

I'm trying to create a new aggregate function in PostgreSQL to use instead of the sum() function.
I started my journey in the manual here.
Since I wanted to create a function that takes an array of double precision values, sums them, and then does some additional calculations, I first created the final function, which takes double precision as input and gives double precision as output:
CREATE FUNCTION eeincometax(tax double precision)
    RETURNS double precision
    LANGUAGE plpgsql AS
$func$
DECLARE
    v double precision;
BEGIN
    IF tax > 256 THEN
        v := 256;
    ELSE
        v := tax;
    END IF;
    RETURN v*0.21/0.79;
END;
$func$;
Then I wanted to create the aggregate function that takes an array of double precision values and puts out a single double precision value for my previous function to handle.
CREATE AGGREGATE aggregate_ee_income_tax (float8[]) (
sfunc = array_agg
,stype = float8
,initcond = '{}'
,finalfunc = eeincometax);
What I get when I run that command is:
ERROR: function array_agg(double precision, double precision[]) does
not exist
I'm somewhat stuck here, because the manual lists array_agg() as an existing function. What am I doing wrong?
Also, when I run:
\da
List of aggregate functions
Schema | Name | Result data type | Argument data types | Description
--------+------+------------------+---------------------+-------------
(0 rows)
Does my installation have no aggregate functions at all? Or does \da only list user-defined functions?
Basically what I'm trying to understand:
1) Can I use an existing functions to sum up my array values?
2) How can I find out about the input and output data types of functions? The docs claim that array_agg() takes any kind of input.
3) What is wrong with my own aggregate function?
Edit 1
To give more information and clearer picture of what I'm trying to achieve:
I have one huge query over several tables which goes something like this:
SELECT sum(tax) ... from (SUBQUERY) as foo group by id
I want to replace that sum function with my own aggregate function so I don't have to do additional calculations on the backend, since they can all be done at the database level.
Edit 2
I accepted Ants's answer. Since the final solution came from the comments, I post it here for reference:
CREATE AGGREGATE aggregate_ee_income_tax (float8)
(
sfunc = float8pl
,stype = float8
,initcond = '0.0'
,finalfunc = eeincometax
);
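Used in place of sum() in the query from Edit 1, the aggregate is called like any other (SUBQUERY standing in for the original derived table):

SELECT id, aggregate_ee_income_tax(tax) AS tax
FROM (SUBQUERY) AS foo
GROUP BY id;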
array_agg is an aggregate function, not a regular function, so it can't be used as the state transition function for a new aggregate. What you want to do is create an aggregate function whose state transition function is identical to array_agg's, with a custom final func.
Unfortunately the state transition function of array_agg is defined in terms of an internal datatype so it can't be reused. Fortunately there is an existing function in core that already does what you want.
CREATE AGGREGATE aggregate_ee_income_tax (float8)(
sfunc = array_append,
stype = float8[],
initcond = '{}',
finalfunc = eeincometax);
Also note that you had your types mixed up: you probably want to aggregate a set of floats into an array, not a set of arrays into a float.
In addition to @Ants's excellent advice:
1.) Your final function could be simplified to:
CREATE FUNCTION eeincometax(float8)
RETURNS float8 LANGUAGE SQL AS
$func$
SELECT (least($1, 256) * 21) / 79
$func$;
2.) It seems like you are dealing with money? In this case I would strongly advise to use the type numeric (preferred) or money for the purpose. Floating point operations are often not precise enough.
3.) The initial condition of the aggregate can simply be just 0:
CREATE AGGREGATE aggregate_ee_income_tax(float8)
(
sfunc = float8pl
,stype = float8
,initcond = 0
,finalfunc = eeincometax
);
4.) In your case (least(sum(tax), 256) * 21) / 79 is probably faster than your custom aggregate. Aggregate functions provided by PostgreSQL are written in C and optimized for performance. I would use that instead.
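In the query from Edit 1, that plain-SQL version would read roughly:

SELECT id, (least(sum(tax), 256) * 21) / 79 AS tax
FROM (SUBQUERY) AS foo
GROUP BY id;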

calculation in postgresql function

I'm trying to create a function that serves as a replacement for the PostgreSQL sum function in my query.
The query itself is LOOONG and looks something like this:
SELECT .... sum(tax) as tax ... from (SUBQUERY) as bar group by id;
I want to replace this sum function with my own, which alters the end value with a tax calculation.
So I created a PL/pgSQL function like this:
DECLARE
    v double precision;
BEGIN
    IF val > 256 THEN
        v := 256;
    ELSE
        v := val;
    END IF;
    RETURN (v*0.21)/0.79;
END;
That function uses double precision as input. Using this function gives me an error though:
column "bar.tax" must appear in the GROUP BY clause or be used in an aggregate function
So I tried to use double precision[] as the input type for the function; in that case the error said there was no PostgreSQL function matching the name and the given input types.
So is there a way to replace the sum function in my query using PL/pgSQL functions or not? I don't want to change any other parts of my query unless I really, Really, REALLY have to.
Look into CREATE AGGREGATE for aggregate functions. And in your expression you can do something like sum(LEAST(tax,256)*21/79) if I read you right.
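Plugged into the query from the question, that suggestion would look roughly like the following; note that, unlike the aggregate built in the previous question, it caps each row's tax at 256 rather than capping the summed total:

SELECT .... sum(LEAST(tax, 256) * 21 / 79) as tax ... from (SUBQUERY) as bar group by id;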