Selecting with WHERE clause with CQL

I am trying to do a SELECT from CQL and getting an error:
SELECT uid, login, username FROM test.docs WHERE es_query='{ "query":{"nested":{"path":"username","query":{"term":{"username.first":"barthelemy"}}}}}' AND es_options='indices=test' ALLOW FILTERING;
The data has been added: I am able to see it from the Elasticsearch API, and DESCRIBE on the table is correct.
Query 1 ERROR: Operation failed - received 0 responses and 1 failures

Caused by: com.amazon.redshift.util.RedshiftException: ERROR: Query (659514) cancelled on user's request

I am trying to save data in Redshift using Java code through a multi-row insert and am getting the error below.
Caused by: com.amazon.redshift.util.RedshiftException: ERROR: Query (659514) cancelled on user's request.
The official AWS documentation states:
The statement_timeout value is the maximum amount of time that a query can run before Amazon Redshift terminates it. When a statement timeout is exceeded, then queries submitted during the session are aborted with the following error message:
ERROR: Query (150) cancelled on user's request
To verify whether a query was aborted because of a statement timeout, run the following query:
select * from SVL_STATEMENTTEXT where text ilike '%set%statement_timeout%to%' and pid in (select pid from STL_QUERY where query = <queryid>);
I tried to run the above query with the query ID, but it doesn't return any result. Also, statement_timeout is 0, which turns the timeout limit off.
What might be the problem?
Checking for statement timeouts is a good path to look down. The query you provided only looks for a statement_timeout set by the user with a SET command. This is not the only way this parameter can be set, nor is it the only timeout: the parameter can also be set for all of a user's connections through the ALTER USER command. If you think this parameter is causing the issue, you can run "SET STATEMENT_TIMEOUT TO 0;" early in your session to remove the limit.
If this doesn't fix the issue, the problem may be elsewhere. WLM settings can time out queries, so check the STL_WLM_RULE_ACTION system table to see if any rules were applied to your query.
Statement timeouts can also be set at the cluster level through the parameter group, so you may want to check the parameter group for a statement_timeout setting as well.
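Taken together, a minimal sketch of those checks (the user name and query ID are placeholders):
-- 1. Remove any session- or user-level statement_timeout (run early in the session)
SET statement_timeout TO 0;
-- ...or clear it for all future connections of a given user
ALTER USER my_user SET statement_timeout TO 0;
-- 2. Check whether a WLM query-monitoring rule acted on the query
SELECT * FROM STL_WLM_RULE_ACTION WHERE query = <queryid>;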

ADF Lookup query create schema and select

I am trying to run a create schema/table query in the ADF Lookup activity, with a dummy SELECT at the end:
CREATE SCHEMA [schemax] AUTHORIZATION [auth1];
SELECT 0 AS dummyValue
but I got the error below:
A database operation failed with the following error: 'Parse error at line: 2, column: 1: Incorrect syntax near 'SELECT'.',Source=,''Type=System.Data.SqlClient.SqlException,Message=Parse error at line: 2, column: 1: Incorrect syntax near 'SELECT'.,Source=.Net SqlClient Data Provider,SqlErrorNumber=103010,Class=16,ErrorCode=-2146232060,State=1
Data factory pipeline
I was able to run a similar query without the SELECT at the end, but got another error saying the lookup must return a value.
You can only write SELECT statements in the Lookup activity's query settings.
To create a schema or table, use the Copy data activity's pre-copy script in the sink settings. You can select a dummy table for the source and sink datasets and write your create script in the pre-copy script as below.
Source settings: (using dummy table which pulls 0 records)
Sink settings:
Pre-copy script: CREATE SCHEMA test1 AUTHORIZATION [user]
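For instance, a minimal sketch of the two settings, assuming an Azure SQL sink (the schema and principal names are placeholders):
-- Source dataset query: a dummy SELECT that pulls 0 records
SELECT TOP 0 1 AS dummyValue;
-- Pre-copy script on the sink: the actual DDL
CREATE SCHEMA [test1] AUTHORIZATION [user];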

Put request failed : INSERT INTO "PARTITION_PARAMS" when executing an insert..select query with hundreds of fields

Executing an insert..select query over Tez on a Hortonworks HDP 3 cluster with Hive 3, I get the following error:
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. MetaException(message:
Put request failed : INSERT INTO "PARTITION_PARAMS" ("PARAM_VALUE","PART_ID","PARAM_KEY") VALUES (?,?,?) )
The destination table has 200 fields and is partitioned by two fields. In some testing, the error disappears when the destination table has 143 fields. If I give the destination table's fields shorter names, I can get the query working without error with more fields, but I can't get it working with the 200 fields I need.
The Hive Metastore is configured to use a PostgreSQL database.
We were hitting HIVE-20221.
We can get the query to execute correctly by setting hive.stats.autogather=false.
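For reference, a sketch of the workaround applied per session (the table and partition column names are placeholders):
-- Disable automatic stats gathering (workaround for HIVE-20221)
SET hive.stats.autogather=false;
INSERT INTO TABLE destination_table PARTITION (part_col1, part_col2)
SELECT * FROM source_table;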

Nxlog im_dbi is not working

I am able to insert data into PostgreSQL using nxlog (om_dbi).
But I am not able to select (or fetch) data from PostgreSQL using nxlog. I tried many options; nothing is working.
And in the nxlog documentation, the im_dbi module description only says "FIXME".
Document Link: http://nxlog.org/documentation/nxlog-community-edition-reference-manual-v20928#im_dbi
Please help me to solve this.
Logs:
<Input dbiin>
Module im_dbi
SavePos TRUE
SQL SELECT * FROM NEW_TABLE
Driver pgsql
Option host 127.0.0.1
Option username chitta
Option password ''
Option dbname db
</Input>
2014-10-16 14:29:17 WARNING nxlog-ce received a termination request signal, exiting...
2014-10-16 14:29:18 INFO nxlog-ce-2.8.1248 started
2014-10-16 14:29:18 ERROR im_dbi failed to execute SQL statement. ERROR: column "id" does not exist;LINE 1: SELECT * FROM NEW_TABLE WHERE id = 1;
Note: the module automatically appends a "WHERE id > %d" clause to the query.
Not an answer, but here's some help.
The most important directive is missing: SQL SELECT ID as id, DateOccured as EventTime, data FROM logtable
Source: https://www.mail-archive.com/nxlog-ce-users@lists.sourceforge.net/msg00225.html
I'm currently in the same boat. My assumption is that your data is not formatted in a way that nxlog can interpret. I'm troubleshooting and will get back to you if I can find a resolution.
Also digging through the source code for the im_dbi module.
https://github.com/lamby/pkg-nxlog-ce/blob/master/src/modules/input/dbi/im_dbi.c
The answer by SoMuchToGrok is valid.
Actually, the question already has this: "ERROR: column "id" does not exist".
The table must have an id column, or you must use SELECT x AS id so that the result set has id in it.
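For example, a sketch of a SQL directive that satisfies the module, reusing the column names from the mailing-list example above (adjust them to your own table):
-- Alias the table's key column to "id" so the automatically appended
-- "WHERE id > %d" clause can resolve it
SELECT ID AS id, DateOccured AS EventTime, data FROM NEW_TABLE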

Creating Users in PostgreSQL

I am having trouble getting my head around setting up users in PostgreSQL (with the PostGIS extension); in all honesty, I've been banging my head against a wall on this for the past week. I am currently working with a dataset for which most users should have only read permissions, while a small group should be able to edit/delete/insert.
I am following this article:
http://osqa.sjsoft.com/questions/155/how-do-i-create-a-read-only-postgresql-account, and have followed a couple of other examples (basically the same content) found on the web,
but each time I try to load my data into QGIS I get messages like:
Message1
Erroneous query: SELECT * FROM <schema>.<table> LIMIT 1 returns 7 [error: permission denied for schema <schema>. Line 1: SELECT * FROM <schema>.<table> LIMIT 1
]
Message2
Unable to access the <schema>.<table> relation.
The error message from the database was:
ERROR: permission denied for schema <schema>
Line 1: SELECT * FROM <schema>.<table> LIMIT 1
SQL: SELECT * from <schema>.<table> LIMIT 1
I am using pgAdmin III on a PostgreSQL 9.2 database. I'm pretty sure I have either missed a step or done something during the proof of concept which is hindering me from setting up accounts (would something like having trust set up for all access methods play a part in the issue I am getting?).
It looks like you created the user successfully, but forgot to GRANT them rights to the schema and/or the tables within it.
If the user didn't exist, you wouldn't be able to log in at all.
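A minimal sketch of the missing grants, assuming a read-only role named readers and a schema named gisdata (both hypothetical names):
-- Let the role resolve objects inside the schema at all
GRANT USAGE ON SCHEMA gisdata TO readers;
-- Let it read every existing table in the schema
GRANT SELECT ON ALL TABLES IN SCHEMA gisdata TO readers;
-- Optionally, make tables created later readable too
ALTER DEFAULT PRIVILEGES IN SCHEMA gisdata GRANT SELECT ON TABLES TO readers;
-- The small editing group would additionally need, e.g.:
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA gisdata TO editors;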