I'm trying to connect to the SurveyMonkey API via a hard-coded connection URL set in a variable, but the connection gives me the following error:
QVX_UNEXPECTED_END_OF_DATA: HTTP protocol error 400 (Bad Request):
{
"error":
{
"docs": "https://developer.surveymonkey.com/api/v3/#error-codes",
"message": "Invalid URL parameters.", "id": "1003", "name": "Bad Request",
"http_status_code": 400
}
}
However, if I do the same thing while getting surveys in bulk, it works.
vID is set to a survey id:
let vURL2 = 'https://api.surveymonkey.com/v3/surveys/$(vID)/details';
RestConnectorMasterTable_SurveryFullDetails:
SQL SELECT
"response_count",
"page_count",
"date_created",
"folder_id",
"nickname",
"id" AS "id_u3",
"question_count" AS "question_count_u0",
"category",
"preview",
"is_owner",
"language",
"footer",
"date_modified",
"analyze_url",
"summary_url",
"href" AS "href_u1",
"title" AS "title_u0",
"collect_url",
"edit_url",
"__KEY_root",
(SELECT
"done_button",
"prev_button",
"exit_button",
"next_button",
"__FK_buttons_text"
FROM "buttons_text" FK "__FK_buttons_text"),
(SELECT
"__FK_custom_variables"
FROM "custom_variables" FK "__FK_custom_variables"),
(SELECT
"href" AS "href_u0",
"description" AS "description_u0",
"title",
"position" AS "position_u2",
"id" AS "id_u2",
"question_count",
"__KEY_pages",
"__FK_pages",
(SELECT
"sorting",
"family",
"subtype",
"visible" AS "visible_u1",
"href",
"position" AS "position_u1",
"validation",
"id" AS "id_u1",
"forced_ranking",
"required",
"__KEY_questions",
"__FK_questions",
(SELECT
"text",
"amount",
"type",
"__FK_required"
FROM "required" FK "__FK_required"),
(SELECT
"__KEY_answers",
"__FK_answers",
(SELECT
"visible",
"text" AS "text_u0",
"position",
"id",
"__FK_rows"
FROM "rows" FK "__FK_rows"),
(SELECT
"description",
"weight",
"visible" AS "visible_u0",
"id" AS "id_u0",
"is_na",
"text" AS "text_u1",
"position" AS "position_u0",
"__FK_choices"
FROM "choices" FK "__FK_choices")
FROM "answers" PK "__KEY_answers" FK "__FK_answers"),
(SELECT
"heading",
"__FK_headings"
FROM "headings" FK "__FK_headings")
FROM "questions" PK "__KEY_questions" FK "__FK_questions")
FROM "pages" PK "__KEY_pages" FK "__FK_pages")
FROM JSON (wrap on) "root" PK "__KEY_root"
WITH CONNECTION(Url "$(vURL2)");
Have you checked out the fairly exhaustive SurveyMonkey how-to guide on the Qlik Community? It might be worth checking that you've followed all of those steps, including giving the user permission to access the API.
It is not enough to hardcode the URL; you also need to specify the authorization header:
WITH CONNECTION (
Url "$(vURL2)",
HTTPHEADER "Authorization" "bearer YOUR_TOKEN"
);
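For completeness, here is a sketch of how the variable and the header fit together in the load script (vAuthToken is an assumed variable holding your SurveyMonkey OAuth token; the long generated SELECT stays exactly as shown above):
LET vID = '123456789';          // hypothetical survey id
LET vURL2 = 'https://api.surveymonkey.com/v3/surveys/$(vID)/details';
LET vAuthToken = 'YOUR_TOKEN';  // placeholder, keep real tokens out of the script
// ... the generated SQL SELECT ... FROM JSON (wrap on) "root" PK "__KEY_root"
// stays unchanged; only the WITH CONNECTION clause at the end changes:
WITH CONNECTION (
    Url "$(vURL2)",
    HTTPHEADER "Authorization" "Bearer $(vAuthToken)"
);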
Using JSON_ARRAYAGG is not working for me
1) [Code: -104, SQL State: 42601] An unexpected token "ORDER" was found following "CITY)
". Expected tokens may include: ")".. SQLCODE=-104, SQLSTATE=42601, DRIVER=4.28.11
2) [Code: -727, SQL State: 56098] An error occurred during implicit system action type "2". Information returned for the error includes SQLCODE "-104", SQLSTATE "42601" and message tokens "ORDER|CITY)
|)".. SQLCODE=-727, SQLSTATE=56098, DRIVER=4.28.11
I want an output like this
{"id":901, "name": "Hansi", "addresses" :
[
{ "address":"A", "city":"B"},
{ "address":"C", "city":"D"}
]
}
I am using IBM DB2 11.1 for Linux, UNIX and Windows.
values (
json_array(
select json_object ('ID' value ID,
'NAME' value NAME,
'ADDRESSES' VALUE JSON_ARRAYAGG(
JSON_OBJECT('ADDRESS' VALUE ADDRESS,
'CITY' VALUE CITY)
ORDER BY ADDRESS)
)
FROM CUSTOMER
JOIN CUSTOMER_ADDRESS ON ADDRESS_CUSTOMER_ID = ID
GROUP BY ID, NAME
FORMAT JSON
));
The tables used are:
CUSTOMER - ID (INT), NAME (VARCHAR64)
ADDRESS - ADDRESS (VARCHAR64), CITY (VARCHAR64)
I am trying to insert a nested JSON file into a PostgreSQL DB. Below is sample data from the JSON file.
[
{
"location_id": 11111,
"recipe_id": "LLLL324",
"serving_size_number": 1,
"recipe_fraction_description": null,
"description": "1/2 gallon",
"recipe_name": "DREXEL ALMOND MILK 32 OZ",
"marketing_name": "Almond Milk",
"marketing_description": null,
"ingredient_statement": "Almond Milk (ALMOND MILK (FILTERED WATER, ALMONDS), CANE SUGAR, CONTAINS 2% OR LESS OF: VITAMIN AND MINERAL BLEND (CALCIUM CARBONATE, VITAMIN E ACETATE, VITAMIN A PALMITATE, VITAMIN D2), SEA SALT, SUNFLOWER LECITHIN, LOCUST BEAN GUM, GELLAN GUM.)",
"allergen_attributes": {
"allergen_statement_not_available": null,
"contains_shellfish": "NO",
"contains_peanut": "NO",
"contains_tree_nuts": "YES",
"contains_milk": "NO",
"contains_wheat": "NO",
"contains_soy": "NO",
"contains_eggs": "NO",
"contains_fish": "NO",
"contains_added_msg": "UNKNOWN",
"contains_hfcs": "UNKNOWN",
"contains_mustard": "UNKNOWN",
"contains_celery": "UNKNOWN",
"contains_sesame": "UNKNOWN",
"contains_red_yellow_blue_dye": "UNKNOWN",
"gluten_free_per_fda": "UNKNOWN",
"non_gmo_claim": "UNKNOWN",
"contains_gluten": "NO"
},
"dietary_attributes": {
"vegan": "YES",
"vegetarian": "YES",
"kosher": "YES",
"halal": "UNKNOWN"
},
"primary_attributes": {
"protein": 7.543,
"total_fat": 19.022,
"carbohydrate": 69.196,
"calories": 463.227,
"total_sugars": 61.285,
"fiber": 5.81,
"calcium": 3840.228,
"iron": 3.955,
"potassium": 270.768,
"sodium": 1351.208,
"cholesterol": 0.0,
"trans_fat": 0.0,
"saturated_fat": 1.488,
"monounsaturated_fat": 11.743,
"polyunsaturated_fat": 4.832,
"calories_from_fat": 171.195,
"pct_calories_from_fat": 36.957,
"pct_calories_from_saturated_fat": 2.892,
"added_sugars": null,
"vitamin_d_(mcg)": null
},
"secondary_attributes": {
"ash": null,
"water": null,
"magnesium": 120.654,
"phosphorous": 171.215,
"zinc": 1.019,
"copper": 0.183,
"manganese": null,
"selenium": 1.325,
"vitamin_a_(IU)": 5331.357,
"vitamin_a_(RAE)": null,
"beta_carotene": null,
"alpha_carotene": null,
"vitamin_e_(A-tocopherol)": 49.909,
"vitamin_d_(IU)": null,
"vitamin_c": 0.0,
"thiamin_(B1)": 0.0,
"riboflavin_(B2)": 0.449,
"niacin": 0.979,
"pantothenic_acid": 0.061,
"vitamin_b6": 0.0,
"folacin_(folic_acid)": null,
"vitamin_b12": 0.0,
"vitamin_k": null,
"folic_acid": null,
"folate_food": null,
"folate_DFE": null,
"vitamin_a_(RE)": null,
"pct_calories_from_protein": 6.514,
"pct_calories_from_carbohydrates": 59.751,
"biotin": null,
"niacin_(mg_NE)": null,
"vitamin_e_(IU)": null
}
}
]
When I tried to copy the data using the Postgres command below
\copy table_name 'location of thefile'
I got the error below:
ERROR: invalid input syntax for type integer: "["
CONTEXT: COPY table_name, line 1, column location_id: "["
I tried the approach below as well, but no luck:
INSERT INTO json_table
SELECT [all key fields]
FROM json_populate_record (NULL::json_table,
'{
sample data
}'
);
What is the simplest way to insert this type of nested JSON file into PostgreSQL tables? Is there a query we can use to insert any nested JSON file?
Insert the JSON into a table. In fact, I'm not sure what output you expect.
yimo=# create table if not exists foo(a int,b text);
CREATE TABLE
yimo=# insert into foo select * from json_populate_record(null::foo, ('[{"a":1,"b":"3"}]'::jsonb->>0)::json);
INSERT 0 1
yimo=# select * from foo;
a | b
---+---
1 | 3
(1 row)
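Extending that idea to the nested file above, a common pattern is to load the whole document into a one-column staging table and then unpack it with jsonb functions. A minimal sketch, where raw_recipes, the target table recipes and its column list are hypothetical names you would adapt to your schema:
-- staging table that holds the whole JSON document in a single jsonb column
CREATE TABLE IF NOT EXISTS raw_recipes (doc jsonb);
-- one way to load the file from psql: read it into a psql variable, then insert it
-- \set content `cat /path/to/recipes.json`
-- INSERT INTO raw_recipes VALUES (:'content'::jsonb);
-- unpack the top-level array into one row per recipe and pull out nested keys
INSERT INTO recipes (location_id, recipe_id, recipe_name, contains_tree_nuts, protein)
SELECT (elem ->> 'location_id')::int,
       elem ->> 'recipe_id',
       elem ->> 'recipe_name',
       elem -> 'allergen_attributes' ->> 'contains_tree_nuts',
       (elem -> 'primary_attributes' ->> 'protein')::numeric
FROM raw_recipes,
     jsonb_array_elements(doc) AS elem;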
I am extending OTRS with an app and need to get the groups a customeruser is in. I want to do this by communicating with the SessionGet endpoint (https://doc.otrs.com/doc/api/otrs/6.0/Perl/Kernel/GenericInterface/Operation/Session/SessionGet.pm.html).
The SessionGet endpoint returns a lot of information about the user, but not the groups they are in. I am not talking about agents who can log in to the OTRS backend, but about customerusers.
I am using OTRS 6 because it was the only version available in Docker. I created the REST endpoints in the backend and everything works well. There is a new piece of functionality for which I need the group information.
I had a look at the OTRS system configuration but could not figure out whether it is possible to include this information in the response.
Although I am a programmer, I did not want to write Perl because of ... reasons.
I had a look at the file which handles the incoming request, /opt/otrs/Kernel/GenericInterface/Operation/Session/SessionGet.pm, and traced the calls to the file where the information is actually collected from the database, /opt/otrs/Kernel/System/AuthSession/DB.pm. The SQL statement is written in line 169, so it occurred to me that I could simply extend it to also fetch the group information, because, as I said, I did not want to write Perl...
A typical response from this endpoint looks like this:
{
"SessionData": [
{
"Value": "2",
"Key": "ChangeBy"
},
{
"Value": "2019-06-26 13:43:18",
"Key": "ChangeTime"
},
{
"Value": "2",
"Key": "CreateBy"
},
{
"Value": "2019-06-26 13:43:18",
"Key": "CreateTime"
},
{
"Value": "XXX",
"Key": "CustomerCompanyCity"
},
{
"Value": "",
"Key": "CustomerCompanyComment"
}
...
]
}
A good approach would be to just insert another key-value pair with the IDs of the groups. The SQL statement queries only one table, $Self->{SessionTable}, usually called otrs.sessions.
Based on a few other Stack Overflow answers (listed at the end), I built an SQL statement which extends the existing one with the needed information. You can find it here:
$DBObject->Prepare(
SQL => "
(
SELECT id, data_key, data_value, serialized FROM $Self->{SessionTable} WHERE session_id = ? ORDER BY id ASC
)
UNION ALL
(
SELECT
(
SELECT MAX(id) FROM $Self->{SessionTable} WHERE session_id = ?
) +1
AS id,
'UserGroupsID' AS data_key,
(
SELECT GROUP_CONCAT(DISTINCT group_id SEPARATOR ', ')
FROM otrs.group_customer_user
WHERE user_id =
(
SELECT data_value
FROM $Self->{SessionTable}
WHERE session_id = ?
AND data_key = 'UserID'
ORDER BY id ASC
)
)
AS data_value,
0 AS serialized
)",
Bind => [ \$Param{SessionID}, \$Param{SessionID}, \$Param{SessionID} ],
);
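One caveat: GROUP_CONCAT(DISTINCT ... SEPARATOR ...) is MySQL/MariaDB syntax. If your OTRS database happens to run on PostgreSQL, the equivalent aggregate would be something along these lines (untested sketch):
string_agg(DISTINCT group_id::text, ', ')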
Whoever needs to get the groups of a customeruser can replace the existing code with the one provided above. At least in my case it works very well. Now I get the expected key-value pair:
{
"Value": "10, 11, 6, 7, 8, 9",
"Key": "UserGroupsID"
},
I used the following resources:
Adding the results of multiple SQL selects?
Can I concatenate multiple MySQL rows into one field?
Add row to query result using select
Happy coding,
Nico
This question already has an answer here:
How to add explicit WHERE clause in Kafka Connect JDBC Source connector (1 answer)
Closed 3 years ago.
I have a JDBCSourceConnector in Kafka that uses a query to stream data from a database, but I have a problem with the query I wrote for selecting the data.
I tested the query in Postgres psql and also in DBeaver and it works fine, but with the Kafka config it produces an SQL syntax error.
Error
ERROR Failed to run query for table TimestampIncrementingTableQuerier{name='null', query='select "Users".* from "Users" join "SchoolUserPivots" on "Users".id = "SchoolUserPivots".user_id where school_id = 1 and role_id = 2', topicPrefix='teacher', timestampColumn='"Users".updatedAt', incrementingColumn='id'}: {} (io.confluent.connect.jdbc.source.JdbcSourceTask:221)
org.postgresql.util.PSQLException: ERROR: syntax error at or near "WHERE"
Config JSON
{
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"timestamp.column.name": "\"Users\".updatedAt",
"incrementing.column.name": "id",
"connection.password": "123",
"tasks.max": "1",
"query": "select \"Users\".* from \"Users\" join \"SchoolUserPivots\" on \"Users\".id = \"SchoolUserPivots\".user_id where school_id = 1 and role_id = 2",
"timestamp.delay.interval.ms": "5000",
"mode": "timestamp+incrementing",
"topic.prefix": "teacher",
"connection.user": "user",
"name": "SourceTeacher",
"connection.url": "jdbc:postgresql://ip:5432/school",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter": "org.apache.kafka.connect.json.JsonConverter"
}
You can't use "mode": "timestamp+incrementing" with a custom query that includes a WHERE clause.
See https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector for more details, as well as https://github.com/confluentinc/kafka-connect-jdbc/issues/566. That GitHub issue suggests one workaround: using a subselect for your query, as sketched below.
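A minimal sketch of that workaround, wrapping the original statement in a derived table so the connector can append its own WHERE clause for the timestamp+incrementing mode (the alias o is arbitrary):
"query": "select * from (select \"Users\".* from \"Users\" join \"SchoolUserPivots\" on \"Users\".id = \"SchoolUserPivots\".user_id where school_id = 1 and role_id = 2) o",
"timestamp.column.name": "updatedAt",
Once the statement is wrapped, the outer select exposes the columns without the table qualifier, so timestamp.column.name most likely needs to lose the \"Users\". prefix as shown.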
I try to run:
curl -v -u j123:j321 -X POST "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=SIP/695"
In response I receive:
"message": "Invalid direction specified for spy"
I have tried:
SIP/695; SIP:695, SIP#695, localhost#695, channel, channelName
None of it works.
A call comes from sip-416 into queue_1 and is distributed to 694. I need to connect 695 to wiretap channel 1421226074.4874.
I only need to listen and not to whisper.
Help me please.
The error message is telling you what the problem is:
"message": "Invalid direction specified for spy"
The spy parameter is a direction for spying, not the channel to spy on (see the reference documentation here). You've already specified the channel to snoop on in the URI path; you need to specify the direction of the media in the spy parameter.
As an aside, apparently the auto-generated wiki isn't displaying enum values, which is unfortunate. We'll have to fix that.
For reference, here's the parameter in the Swagger JSON:
"name": "spy",
"description": "Direction of audio to spy on",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string",
"defaultValue": "none",
"allowableValues": {
"valueType": "LIST",
"values": [
"none",
"both",
"out",
"in"
]
}
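So, to just listen in without whispering, a request along these lines should work; note that the snoop operation also expects an app parameter naming the ARI application that will receive the new snoop channel (my_ari_app below is a placeholder):
curl -v -u j123:j321 -X POST \
  "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=both&whisper=none&app=my_ari_app"
spy=both captures both directions of the call; spy=in or spy=out would capture only one side.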