I am trying to insert a nested JSON file into a PostgreSQL database. Below is sample data from the JSON file:
[
  {
    "location_id": 11111,
    "recipe_id": "LLLL324",
    "serving_size_number": 1,
    "recipe_fraction_description": null,
    "description": "1/2 gallon",
    "recipe_name": "DREXEL ALMOND MILK 32 OZ",
    "marketing_name": "Almond Milk",
    "marketing_description": null,
    "ingredient_statement": "Almond Milk (ALMOND MILK (FILTERED WATER, ALMONDS), CANE SUGAR, CONTAINS 2% OR LESS OF: VITAMIN AND MINERAL BLEND (CALCIUM CARBONATE, VITAMIN E ACETATE, VITAMIN A PALMITATE, VITAMIN D2), SEA SALT, SUNFLOWER LECITHIN, LOCUST BEAN GUM, GELLAN GUM.)",
    "allergen_attributes": {
      "allergen_statement_not_available": null,
      "contains_shellfish": "NO",
      "contains_peanut": "NO",
      "contains_tree_nuts": "YES",
      "contains_milk": "NO",
      "contains_wheat": "NO",
      "contains_soy": "NO",
      "contains_eggs": "NO",
      "contains_fish": "NO",
      "contains_added_msg": "UNKNOWN",
      "contains_hfcs": "UNKNOWN",
      "contains_mustard": "UNKNOWN",
      "contains_celery": "UNKNOWN",
      "contains_sesame": "UNKNOWN",
      "contains_red_yellow_blue_dye": "UNKNOWN",
      "gluten_free_per_fda": "UNKNOWN",
      "non_gmo_claim": "UNKNOWN",
      "contains_gluten": "NO"
    },
    "dietary_attributes": {
      "vegan": "YES",
      "vegetarian": "YES",
      "kosher": "YES",
      "halal": "UNKNOWN"
    },
    "primary_attributes": {
      "protein": 7.543,
      "total_fat": 19.022,
      "carbohydrate": 69.196,
      "calories": 463.227,
      "total_sugars": 61.285,
      "fiber": 5.81,
      "calcium": 3840.228,
      "iron": 3.955,
      "potassium": 270.768,
      "sodium": 1351.208,
      "cholesterol": 0.0,
      "trans_fat": 0.0,
      "saturated_fat": 1.488,
      "monounsaturated_fat": 11.743,
      "polyunsaturated_fat": 4.832,
      "calories_from_fat": 171.195,
      "pct_calories_from_fat": 36.957,
      "pct_calories_from_saturated_fat": 2.892,
      "added_sugars": null,
      "vitamin_d_(mcg)": null
    },
    "secondary_attributes": {
      "ash": null,
      "water": null,
      "magnesium": 120.654,
      "phosphorous": 171.215,
      "zinc": 1.019,
      "copper": 0.183,
      "manganese": null,
      "selenium": 1.325,
      "vitamin_a_(IU)": 5331.357,
      "vitamin_a_(RAE)": null,
      "beta_carotene": null,
      "alpha_carotene": null,
      "vitamin_e_(A-tocopherol)": 49.909,
      "vitamin_d_(IU)": null,
      "vitamin_c": 0.0,
      "thiamin_(B1)": 0.0,
      "riboflavin_(B2)": 0.449,
      "niacin": 0.979,
      "pantothenic_acid": 0.061,
      "vitamin_b6": 0.0,
      "folacin_(folic_acid)": null,
      "vitamin_b12": 0.0,
      "vitamin_k": null,
      "folic_acid": null,
      "folate_food": null,
      "folate_DFE": null,
      "vitamin_a_(RE)": null,
      "pct_calories_from_protein": 6.514,
      "pct_calories_from_carbohydrates": 59.751,
      "biotin": null,
      "niacin_(mg_NE)": null,
      "vitamin_e_(IU)": null
    }
  }
]
When I tried to copy the data with the psql command below,
\copy table_name FROM 'location of the file'
I got this error:
ERROR: invalid input syntax for type integer: "["
CONTEXT: COPY table_name, line 1, column location_id: "["
I tried the approach below as well, but with no luck:
INSERT INTO json_table
SELECT [all key fields]
FROM json_populate_record (NULL::json_table,
'{
sample data
}'
);
What is the simplest way to insert this kind of nested JSON file into PostgreSQL tables? Is there a query we can use to insert any nested JSON file?
Insert the JSON into a table. Actually, I'm not sure what you expect.
yimo=# create table if not exists foo(a int,b text);
CREATE TABLE
yimo=# insert into foo select * from json_populate_record(null::foo, ('[{"a":1,"b":"3"}]'::jsonb->>0)::json);
INSERT 0 1
yimo=# select * from foo;
a | b
---+---
1 | 3
(1 row)
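The trick above handles a single flat object, but the sample file is a JSON array of objects with nested sub-objects, so one json_populate_record call is not enough on its own. A sketch of a common generalization, assuming a hypothetical staging table raw_json and a hypothetical target table recipes whose columns match the keys you need (nested attributes are reached with the -> / ->> operators):
create table if not exists raw_json (doc jsonb);
-- Load the whole file as one jsonb value first (e.g. INSERT it from the
-- application, or compact the file to a single line so \copy can read it
-- as one row).
insert into recipes (location_id, recipe_id, recipe_name, contains_tree_nuts, calories)
select (elem->>'location_id')::int,
       elem->>'recipe_id',
       elem->>'recipe_name',
       elem->'allergen_attributes'->>'contains_tree_nuts',
       (elem->'primary_attributes'->>'calories')::numeric
from raw_json,
     jsonb_array_elements(raw_json.doc) as elem;  -- one row per array element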
I have a problem with Apache Kafka and the output of a connector.
When I try to create a stream from the topic, I get some errors.
The data in the topic looks like this (without a schema, in JSON format):
key:
{
  "payload": {
    "sourceName": "HotPump",
    "jobName": "pollingHotPump"
  }
}
value:
{
  "payload": {
    "fields": {
      "cw": [4657, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13108, 16637, 0, 0, 0]
    },
    "timestamp": 1638540457655,
    "expires": null,
    "connection-name": "Condensator"
  }
}
The ksqlDB query to create the stream is this:
CREATE STREAM s_devices
(
key struct<payload struct<sourceName string>> ,
value struct<payload struct<fields struct<cw array>>>,
ts struct<payload struct<timestamp bigint>>
)
WITH (KAFKA_TOPIC='devices',
VALUE_FORMAT='JSON', KEY_FORMAT='JSON');
The ksqlDB client returns: "Failed to prepare statement: Cannot resolve unknown type: ARRAY".
When I create the stream with only key struct<payload struct<sourceName string>>,
the query select key->payload->sourceName, value->payload->timestamp from s_devices;
works correctly and the value is shown.
When I try with only ts struct<payload struct<timestamp bigint>>, the stream is created, but the value is null when I run select value->payload->timestamp from s_devices;
Where is the error?
Thanks.
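For what it's worth, the "Cannot resolve unknown type: ARRAY" error points at the bare array in the value column: ksqlDB only accepts parameterized array types such as ARRAY<INT>. A sketch of the corrected statement under that assumption (the element type is guessed as INT from the sample values):
CREATE STREAM s_devices (
    key struct<payload struct<sourceName string>>,
    value struct<payload struct<fields struct<cw array<int>>>>,
    ts struct<payload struct<timestamp bigint>>
  )
  WITH (KAFKA_TOPIC='devices',
        VALUE_FORMAT='JSON', KEY_FORMAT='JSON');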
I have a Lookup "Fetch Customers" with this SQL statement:
Select Count(CustomerId) As 'Row_count', Min(sales_amount) as 'Min_Sales' From [sales].[Customers]
It returns the values:
10, 5000
Next I have a Lookup "Update Min Sales" with this SQL statement, but I get an error:
Update Sales_Min_Sales
SET Row_Count = @activity('Fetch Customers').output.Row_count,
Min_Sales = @activity('Fetch Customers').output.Min_Sales
Select 1
The same error occurs even if I set the Lookup to
Select @activity('Fetch Customers').output.Row_count
Error:
A database operation failed with the following error: 'Must declare the scalar variable
"@activity".',Source=,''Type=System.Data.SqlClient.SqlException,Message=Must declare the
scalar variable "@activity".,Source=.Net SqlClient Data
Provider,SqlErrorNumber=137,Class=15,ErrorCode=-2146232060,State=2,Errors=
[{Class=15,Number=137,State=2,Message=Must declare the scalar variable "@activity".,},],'
I have a similar setup to yours: two Lookup activities.
The first lookup returns the Min ID and Max ID, as shown:
{
  "count": 1,
  "value": [
    {
      "Min": 1,
      "Max": 30118
    }
  ],
  "effectiveIntegrationRuntime": "DefaultIntegrationRuntime (East US)",
  "billingReference": {
    "activityType": "PipelineActivity",
    "billableDuration": [
      {
        "meterType": "AzureIR",
        "duration": 0.016666666666666666,
        "unit": "DIUHours"
      }
    ]
  },
  "durationInQueue": {
    "integrationRuntimeQueue": 22
  }
}
In my second lookup I am using the expression below:
Update saleslt.customer set somecol=someval where CustomerID=@{activity('Lookup1').output.Value[0]['Min']}
Select 1 as dummy
The key points are to access the lookup output using indices, as shown, and to place the activity output inside @{ }.
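Applied to the question's pipeline, the same pattern would make the "Update Min Sales" statement look roughly like this (a sketch, assuming the "Fetch Customers" lookup returns its single row in output.value[0], as in the sample output above):
Update Sales_Min_Sales
SET Row_Count = @{activity('Fetch Customers').output.value[0]['Row_count']},
Min_Sales = @{activity('Fetch Customers').output.value[0]['Min_Sales']}
Select 1 as dummy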
I'm trying to connect to the SurveyMonkey API via a hard-coded connection set in a variable, but the connection gives me this error:
QVX_UNEXPECTED_END_OF_DATA: HTTP protocol error 400 (Bad Request):
{
  "error": {
    "docs": "https://developer.surveymonkey.com/api/v3/#error-codes",
    "message": "Invalid URL parameters.",
    "id": "1003",
    "name": "Bad Request",
    "http_status_code": 400
  }
}
However, if I try the same thing while getting surveys in bulk, it works.
vID is equal to a survey ID:
let vURL2 = 'https://api.surveymonkey.com/v3/surveys/$(vID)/details';
RestConnectorMasterTable_SurveryFullDetails:
SQL SELECT
    "response_count",
    "page_count",
    "date_created",
    "folder_id",
    "nickname",
    "id" AS "id_u3",
    "question_count" AS "question_count_u0",
    "category",
    "preview",
    "is_owner",
    "language",
    "footer",
    "date_modified",
    "analyze_url",
    "summary_url",
    "href" AS "href_u1",
    "title" AS "title_u0",
    "collect_url",
    "edit_url",
    "__KEY_root",
    (SELECT
        "done_button",
        "prev_button",
        "exit_button",
        "next_button",
        "__FK_buttons_text"
    FROM "buttons_text" FK "__FK_buttons_text"),
    (SELECT
        "__FK_custom_variables"
    FROM "custom_variables" FK "__FK_custom_variables"),
    (SELECT
        "href" AS "href_u0",
        "description" AS "description_u0",
        "title",
        "position" AS "position_u2",
        "id" AS "id_u2",
        "question_count",
        "__KEY_pages",
        "__FK_pages",
        (SELECT
            "sorting",
            "family",
            "subtype",
            "visible" AS "visible_u1",
            "href",
            "position" AS "position_u1",
            "validation",
            "id" AS "id_u1",
            "forced_ranking",
            "required",
            "__KEY_questions",
            "__FK_questions",
            (SELECT
                "text",
                "amount",
                "type",
                "__FK_required"
            FROM "required" FK "__FK_required"),
            (SELECT
                "__KEY_answers",
                "__FK_answers",
                (SELECT
                    "visible",
                    "text" AS "text_u0",
                    "position",
                    "id",
                    "__FK_rows"
                FROM "rows" FK "__FK_rows"),
                (SELECT
                    "description",
                    "weight",
                    "visible" AS "visible_u0",
                    "id" AS "id_u0",
                    "is_na",
                    "text" AS "text_u1",
                    "position" AS "position_u0",
                    "__FK_choices"
                FROM "choices" FK "__FK_choices")
            FROM "answers" PK "__KEY_answers" FK "__FK_answers"),
            (SELECT
                "heading",
                "__FK_headings"
            FROM "headings" FK "__FK_headings")
        FROM "questions" PK "__KEY_questions" FK "__FK_questions")
    FROM "pages" PK "__KEY_pages" FK "__FK_pages")
FROM JSON (wrap on) "root" PK "__KEY_root"
WITH CONNECTION(Url "$(vURL2)");
Have you checked out this fairly exhaustive SurveyMonkey how-to guide on the Qlik Community? It might be worth checking that you've followed all those steps, including giving the user permission to access the API.
It is not enough to hard-code the URL; you also need to specify the authorization header:
WITH CONNECTION (
Url "$(vURL2)",
HTTPHEADER "Authorization" "bearer YOUR_TOKEN"
);
My composite types are converted to text when returned from a PL/pgSQL function:
dev=# select * from app.user_query_test(3);
user_record | udata_record
-----------------+-------------------
(3,875227490,t) | (3,3,"Bob Smith")
(1 row)
dev=#
I don't want this; I want to receive them on the client side as a nested data object, like this:
{
  "user_record": {
    "user_id": 3,
    "identity_id": 875227490,
    "utype": true
  },
  "udata_record": {
    "udata_id": 3,
    "user_id": 3,
    "full_name": "Bob Smith"
  }
}
But I also don't want JSON, because encoding/decoding JSON takes processing time and will affect the performance of my app. So how do I achieve this? That is, how do I get the data to the client in the exact structure the PL/pgSQL function returns, without any encoding/decoding step?
My source files are:
DROP TYPE IF EXISTS app.user_reply_t CASCADE;
CREATE TYPE app.user_reply_t AS (
    user_id     integer,
    identity_id integer,
    utype       boolean
);

DROP TYPE IF EXISTS app.udata_reply_t CASCADE;
CREATE TYPE app.udata_reply_t AS (
    udata_id  integer,
    user_id   integer,
    full_name varchar(64)
);

DROP TYPE IF EXISTS app.user_info_t CASCADE;
CREATE TYPE app.user_info_t AS (
    user_record  app.user_reply_t,
    udata_record app.udata_reply_t
);

CREATE OR REPLACE FUNCTION app.user_query_test(p_user_id integer)
RETURNS app.user_info_t AS
$$
DECLARE
    rec app.user_info_t;
BEGIN
    SELECT user_id, identity_id, utype FROM "comp-158572724".users WHERE user_id = p_user_id INTO rec.user_record;
    SELECT udata_id, user_id, full_name FROM "comp-158572724".udata WHERE user_id = p_user_id INTO rec.udata_record;
    RETURN rec;
END;
$$ LANGUAGE plpgsql;
Tested with Node.js:
src $ node usertest.js
result={ command: 'SELECT',
rowCount: 1,
rows:
[ { user_record: '(3,875227490,t)',
udata_record: '(3,3,"Bob Smith")' } ],
fields:
[ { name: 'user_record', dataTypeID: 19862 },
{ name: 'udata_record', dataTypeID: 19865 } ] }
^C
src $
Source of the client code:
src $ cat usertest.js
const util = require('util');
const pg = require('pg').native;
const Pool = pg.Pool;

// Pool against the dev database; the query returns the composite
// columns as text, as shown in the output above.
const pool = new Pool({
    user: 'dev_user',
    password: 'dev',
    host: 'localhost',
    database: 'dev'
});

pool.query('select * from app.user_query_test(3)', function (err, result) {
    console.log('result=' + util.inspect(result));
});

// Keep the process alive so the async callback can fire.
function wait() {
    console.log('waiting...');
    setTimeout(wait, 3000);
}
setTimeout(wait, 3000);
src $
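For what it's worth, one way to keep typed data without either composite-text parsing or JSON is to expand the composite columns on the server, so the driver receives ordinary scalar columns. A minimal sketch against the function above:
-- Expand the composite fields into plain typed columns; the client then
-- gets scalars instead of '(3,875227490,t)' text.
select (t.user_record).user_id,
       (t.user_record).identity_id,
       (t.user_record).utype,
       (t.udata_record).udata_id,
       (t.udata_record).full_name
from app.user_query_test(3) as t;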
To use the json/jsonb data type, Ecto suggests using fragments.
In my case I have to use the PostgreSQL ? operator to check whether the map has a given key, which becomes something like:
where(events, [e], e.type == 1 and not fragment("???", e.qualifiers, "?", "2"))
but of course fragment reads the PostgreSQL ? as a placeholder. How can I check whether the map has the key?
You need to escape the middle ? and pass a total of three arguments to fragment:
fragment("? \\? ?", e.qualifiers, "2")
Demo:
iex(1)> MyApp.Repo.insert! %MyApp.Food{name: "Foo", meta: %{price: 1}}
iex(2)> MyApp.Repo.insert! %MyApp.Food{name: "Foo", meta: %{}}
iex(3)> MyApp.Repo.all from(f in MyApp.Food, where: fragment("? \\? ?", f.meta, "price"))
[debug] SELECT f0."id", f0."name", f0."meta", f0."inserted_at", f0."updated_at" FROM "foods" AS f0 WHERE (f0."meta" ? 'price') [] OK query=8.0ms
[%MyApp.Food{__meta__: #Ecto.Schema.Metadata<:loaded>, id: 1,
inserted_at: #Ecto.DateTime<2016-06-19T03:51:40Z>, meta: %{"price" => 1},
name: "Foo", updated_at: #Ecto.DateTime<2016-06-19T03:51:40Z>}]
iex(4)> MyApp.Repo.all from(f in MyApp.Food, where: fragment("? \\? ?", f.meta, "a"))
[debug] SELECT f0."id", f0."name", f0."meta", f0."inserted_at", f0."updated_at" FROM "foods" AS f0 WHERE (f0."meta" ? 'a') [] OK query=0.8ms
[]
I'm not sure if this is documented anywhere, but I found the method from this test.
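As a side note, the jsonb ? operator is backed by PostgreSQL's jsonb_exists function, so a fragment that needs no escaping at all is fragment("jsonb_exists(?, ?)", f.meta, "price"). In raw SQL (assuming the column is jsonb, as in the demo) that amounts to:
-- Equivalent to: f0."meta" ? 'price', with no operator escaping needed.
SELECT f0.* FROM "foods" AS f0 WHERE jsonb_exists(f0."meta", 'price');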