Oracle 12c: use JSON_QUERY in a trigger

I have a CLOB containing JSON in a column of a table, with the following structure:
{"sources": [1,2,4]}
I'm trying to write a trigger that reads the array [1,2,4] and performs some checks. I'm trying with:
DECLARE
TYPE source_type IS TABLE OF NUMBER;
SOURCES source_type;
[...]
SELECT json_query(:NEW.COL, '$.sources') BULK COLLECT INTO SOURCES FROM dual;
but I got the error:
Row 1: ORA-01722: invalid number
Any ideas?
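One approach worth trying (a minimal sketch, not verified against the asker's table; it assumes the column is named COL and an Oracle release where JSON_TABLE accepts the :NEW.COL bind, 12.2 being safest) is to skip JSON_QUERY, which returns the array as a single text fragment (hence ORA-01722 when it is coerced to NUMBER), and unnest the array with JSON_TABLE so BULK COLLECT receives numbers:
DECLARE
  TYPE source_type IS TABLE OF NUMBER;
  SOURCES source_type;
BEGIN
  -- project each element of $.sources as a NUMBER row instead of the raw '[1,2,4]' text
  SELECT jt.val
  BULK COLLECT INTO SOURCES
  FROM JSON_TABLE(:NEW.COL, '$.sources[*]' COLUMNS (val NUMBER PATH '$')) jt;
  -- perform the checks on SOURCES here
END;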

Related

ADF Lookup query create schema and select

I am trying to run a CREATE SCHEMA/table query in the ADF Lookup activity, with a dummy SELECT at the end:
CREATE SCHEMA [schemax] AUTHORIZATION [auth1];
SELECT 0 AS dummyValue
but I got the below error:
A database operation failed with the following error: 'Parse error at line: 2, column: 1: Incorrect syntax near 'SELECT'.',Source=,''Type=System.Data.SqlClient.SqlException,Message=Parse error at line: 2, column: 1: Incorrect syntax near 'SELECT'.,Source=.Net SqlClient Data Provider,SqlErrorNumber=103010,Class=16,ErrorCode=-2146232060,State=1
Data factory pipeline
I was able to run a similar query without the SELECT at the end, but got another error saying that the lookup must return a value.
You can only write select statements in lookup activity query settings.
To create a schema or table, use the Copy data activity's pre-copy script in the sink settings. You can select a dummy table for the source and sink datasets and write your create script in the pre-copy script, as shown below.
Source settings: (using dummy table which pulls 0 records)
Sink settings:
Pre-copy script: CREATE SCHEMA test1 AUTHORIZATION [user]
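For reference, the pre-copy script can hold the DDL directly and needs no trailing SELECT; a hypothetical sketch (the IF guard and the EXEC wrapper are assumptions on top of the answer above, useful because CREATE SCHEMA must run in its own batch and pre-copy scripts may be re-run):
-- hypothetical pre-copy script: create the schema only if it does not exist yet
IF SCHEMA_ID('schemax') IS NULL
    EXEC('CREATE SCHEMA [schemax] AUTHORIZATION [auth1]');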

Unable to convert the trigger from Oracle to PostgreSQL

I'm trying to convert the below trigger, which is written in Oracle, to PostgreSQL:
create or replace TRIGGER "SCHEMA1".table1_ta
after delete
on table1
for each row
begin
if :old.hid = 0 or :old.hid = -1
then
err.raise(-12345, 'Error Message');
end if;
end table1_ta;
Below is the code that gets generated as part of the schema conversion process of the AWS Schema Conversion Tool (SCT), but I'm getting an error when I try to apply it to the target PostgreSQL database.
CREATE TRIGGER table1_ta
AFTER DELETE
ON schema1.table1
FOR EACH ROW
EXECUTE PROCEDURE schema1.table1_ta$table();
Below is the error that I'm getting when AWS SCT is trying to create this trigger:
ERROR: function schema1.table1_ta$table(); does not exists.
How can I fix this?
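In PostgreSQL a trigger can only call a trigger function that already exists, so the CREATE TRIGGER generated by SCT fails when the function it references was never created on the target. A minimal sketch of what that function could look like, assuming the same check as the Oracle body and that a plain RAISE EXCEPTION is an acceptable stand-in for err.raise:
CREATE OR REPLACE FUNCTION schema1.table1_ta$table()
RETURNS trigger AS $$
BEGIN
    -- same condition as the Oracle trigger; OLD replaces :old
    IF OLD.hid = 0 OR OLD.hid = -1 THEN
        RAISE EXCEPTION 'Error Message';
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
Once that function exists in the target database, the CREATE TRIGGER statement above should apply cleanly.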

Unable to ingest JSON data with MemSQL PIPELINE INTO PROCEDURE

I am facing an issue while ingesting JSON data via a PIPELINE into a table using a stored procedure.
I see NULL values getting inserted into the table.
Stored Procedure SQL:
DELIMITER //
CREATE OR REPLACE PROCEDURE ops.process_users(GENERIC_BATCH query(GENERIC_JSON json)) AS
BEGIN
INSERT INTO ops.USER(USER_ID,USERNAME)
SELECT GENERIC_JSON::USER_ID, GENERIC_JSON::USERNAME
FROM GENERIC_BATCH;
END //
DELIMITER ;
MemSQL Pipeline Command used:
CREATE OR REPLACE PIPELINE ops.tweet_pipeline_with_sp AS LOAD DATA KAFKA '<KAFKA_SERVER_IP>:9092/user-topic'
INTO PROCEDURE ops.process_users FORMAT JSON ;
JSON Data Pushed to Kafka topic: {"USER_ID":"111","USERNAME":"Test_User"}
Table DDL Statement: CREATE TABLE ops.USER (USER_ID INTEGER, USERNAME VARCHAR(255));
It looks like you're getting help in the MemSQL Forums at https://www.memsql.com/forum/t/unable-to-ingest-json-data-with-pipeline-into-procedure/1702/3. In particular, it looks like the difference between :: (which yields JSON) and ::$ (which converts to SQL types).
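If that is the cause, one hedged variant of the SELECT inside the procedure (using the ::$ string-extraction form; not verified against this pipeline) would be:
-- ::$ extracts USERNAME as a string instead of a quoted JSON value
SELECT GENERIC_JSON::USER_ID, GENERIC_JSON::$USERNAME
FROM GENERIC_BATCH;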
Got the solution from the MemSQL forum!
Below are the pipeline and stored procedure scripts that worked for me:
CREATE OR REPLACE PIPELINE OPS.TEST_PIPELINE_WITH_SP
AS LOAD DATA KAFKA '<KAFKA_SERVER_IP>/TEST-TOPIC'
INTO PROCEDURE OPS.PROCESS_USERS(GENERIC_JSON <- %) FORMAT JSON ;
DELIMITER //
CREATE OR REPLACE PROCEDURE ops.process_users(GENERIC_BATCH query(GENERIC_JSON json)) AS
BEGIN
INSERT INTO ops.USER(USER_ID,USERNAME)
SELECT GENERIC_JSON::USER_ID, json_extract_string(GENERIC_JSON,'USERNAME')
FROM GENERIC_BATCH;
END //
DELIMITER ;

Postgres: update value of TEXT column (CLOB)

I have a column of type TEXT which is supposed to represent a CLOB value and I'm trying to update its value like this:
UPDATE my_table SET my_column = TEXT 'Text value';
Normally this column is written and read by Hibernate and I noticed that values written with Hibernate are stored as integers (perhaps some internal Postgres reference to the CLOB data).
But when I try to update the column with the above SQL, the value is stored as a string and when Hibernate tries to read it, I get the following error: Bad value for type long : ["Text value"]
I tried all the options described in this answer but the result is always the same. How do I insert/update a TEXT column using SQL?
In order to update a CLOB created by Hibernate you should use the functions for handling large objects.
The documentation can be found at the following links:
https://www.postgresql.org/docs/current/lo-interfaces.html
https://www.postgresql.org/docs/current/lo-funcs.html
Examples:
To query:
select mytable.*, convert_from(loread(lo_open(mycblobfield::int, x'40000'::int), x'40000'::int), 'UTF8') from mytable where mytable.id = 4;
Note:
x'40000' corresponds to read mode (INV_READ)
To Update:
select lowrite(lo_open(16425, x'60000'::int), convert_to('this an updated text','UTF8'));
Note:
x'60000' corresponds to read and write mode (INV_READ + INV_WRITE)
The number 16425 is an example loid (large object id) that already exists in a record of your table. It's the integer number you can see as the value of the field created by Hibernate.
To Insert:
select lowrite(lo_open(lo_creat(-1), x'60000'::int), convert_to('this is a new text','UTF8'));
Note:
lo_creat(-1) generates a new large object and returns its loid
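To tie a new large object back to the row in one statement, a hedged option (assuming PostgreSQL 9.4+ where lo_from_bytea is available, plus the my_table/my_column names from the question and an id column as in the query example above) is:
-- create a large object holding the text and store its loid in the TEXT column,
-- mimicking the integer value Hibernate writes there
UPDATE my_table
SET my_column = lo_from_bytea(0, convert_to('Text value', 'UTF8'))::text
WHERE id = 4;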

Json parsing errors while using json_extract_path_text() function in where clause

I have a Redshift table that contains columns with JSON objects. I am running into JSON parser failures while trying to execute queries on this table that apply specific filters on the JSON object content.
While I am able to use json_extract_path_text() in the select list, the same fails when used in the where clause.
Following is the error I see:
Amazon Invalid operation: JSON parsing error;
When I look at the STL_ERROR table for more details, this is what I see in the error details:
errcode: 8001
context: JSON parsing error
error: invalid json object null
Following is an example of the content in one such json column:
{"att1":"att1_val","att2":"att2_val","att3":[{"att3_sub1_1":"att3_sub1_1_val","att3_sub1_2":"att3_sub1_2_val"},{"att3_sub2_1":"att3_sub2_1_val","att3_sub2_2":"att3_sub2_2_val"}],"att4":"att4_val","att5":"att5_val"}
Now when I run the following query, it executes without any issues:
select
json_extract_path_text(col_with_json_obj,'att4') as required_val
from table_with_json_data;
Now when I use the json_extract_path_text() in the where clause it fails with the above error:
select
json_extract_path_text(col_with_json_obj,'att4') as required_val
from table_with_json_data
where json_extract_path_text(col_with_json_obj,'att4') = 'att4_val';
Is there anything that I am using incorrectly or missing here?
P.S.: I have another table with a similar schema and the same queries run just fine on it. The only difference between the two tables is the way the data is loaded - one uses a jsonpaths file in the copy options and the other uses json 'auto'.
This is the error you would receive if table_with_json_data contained even a single row in which the value of col_with_json_obj was the four-character string "null".
To avoid errors like this in general I'd recommend creating a Redshift UDF for validating JSON. The is_json() method described at http://discourse.snowplowanalytics.com/t/more-robust-json-parsing-in-redshift-with-python-udfs/197 has worked well for me:
create or replace function is_json(j varchar(max))
returns boolean
stable as $$
import json
try:
    json_object = json.loads(j)
except ValueError, e:
    return False
return True
$$ language plpythonu;
Then you could add an and is_json(col_with_json_obj) predicate to your where clause, and this class of errors can be avoided entirely.
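Applied to the failing query from the question, the guarded version would look like this:
select
    json_extract_path_text(col_with_json_obj, 'att4') as required_val
from table_with_json_data
where is_json(col_with_json_obj)
  and json_extract_path_text(col_with_json_obj, 'att4') = 'att4_val';
(Note that Redshift does not guarantee predicate evaluation order, so this mirrors the approach from the linked post rather than a hard guarantee.)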