I am trying to create a streaming visualization from ksqlDB using the Arcadia BI tool. I am able to establish a connection and see the streams and tables in ksqlDB from Arcadia, but when I try to sample data I get the error below. Can anyone help?
Error running query: Error: b'{"#type":"statement_error","error_code":40001,"message":"Pull queries don\'t support LIMIT clauses. Refer to https://cnfl.io/queries for info on query types. If you intended to issue a push query, resubmit with the EMIT CHANGES clause\\n\\nQuery syntax in KSQL has changed. There are now two broad categories of queries:\\n- Pull queries: query the current state of the system, return a result, and terminate. \\n- Push queries: query the state of the system in motion and continue to output results until they meet a LIMIT condition or are terminated by the user.\\n\\n\'EMIT CHANGES\' is used to to indicate a query is a push query. To convert a pull query into a push query, which was the default behavior in older versions of KSQL, add `EMIT CHANGES` to the end of the statement before any LIMIT clause.\\n\\nFor example, the following are pull queries:\\n\\t\'SELECT * FROM X WHERE ROWKEY=Y;\' (non-windowed table)\\n\\t\'SELECT * FROM X WHERE ROWKEY=Y AND WINDOWSTART>=Z;\' (windowed table)\\n\\nThe following is a push query:\\n\\t\'SELECT * FROM X EMIT CHANGES;\'\\n\\nNote: Persistent queries, e.g. `CREATE TABLE AS ...`, have an implicit `EMIT CHANGES`, but we recommend adding `EMIT CHANGES` to these statements.","stackTrace":[],"statementText":"select * from table_name limit 10;","entities":[]}'
This is an issue with Arcadia and the version of KSQL that you're running.
There was a breaking change in ksqlDB 0.6 (shipped with Confluent Platform 5.4) that changed the query syntax. Pull queries were added, and the previous "continuous queries" are now known as "push queries", denoted by a mandatory EMIT CHANGES clause.
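Concretely, the statement Arcadia issues would need to read something like select * from table_name emit changes limit 10; — with EMIT CHANGES placed before the LIMIT clause. So unless Arcadia lets you override the sampling query it generates, you need either an Arcadia release that understands the new syntax or a pre-0.6 KSQL server.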
I have a view using a CTE that exceeds the maximum recursion, so I need to select from it using the hint
OPTION (MAXRECURSION 3650)
Is that possible? I cannot seem to find any information on it, other than the fact that it is not working in the Source Query - any documentation on what you can do as far as SQL queries go would be greatly appreciated.
Error message:
at Source 'Calendar': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'option'.
Source Query:
select * from dbo.ReportingCalendar option (maxrecursion 3650)
The above query is correct and runs fine directly on SQL Server.
I referred to this documentation, but there is no information about the 'option' keyword. I also tested it with a data flow and got the same error as you, so it seems data flows do not support this keyword.
As an alternative, you can use a Copy activity, which does support 'option'. Copy the data from your SQL database to Azure Data Lake Gen2 (or anywhere else that data flows support as a source), then use that as the source in your data flow and do the transformations there.
Looking at the example for ClickHouseIO for Apache Beam, the name of the output table is hard-coded:
pipeline
    .apply(...)
    .apply(
        ClickHouseIO.<POJO>write("jdbc:clickhouse:localhost:8123/default", "my_table"));
Is there a way to dynamically route a record to a table based on its content?
I.e. if the record contains table=1, it is routed to my_table_1, table=2 to my_table_2 etc.
Unfortunately ClickHouseIO is still in development and does not support this. BigQueryIO does support Dynamic Destinations, so it is possible in Beam.
The limitation in the current ClickHouseIO is around transforming data to match the destination table schema. As a workaround, if your destination tables are known at pipeline creation time, you could create one ClickHouseIO per table and then use the record contents to route each element to the correct instance of the IO, as sketched below.
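A rough sketch of that per-table fan-out, assuming three destination tables and an element type MyPojo with a numeric getTable() field (both are illustrative names, not part of ClickHouseIO):

import org.apache.beam.sdk.io.clickhouse.ClickHouseIO;
import org.apache.beam.sdk.transforms.Partition;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionList;

// input is the PCollection<MyPojo> produced earlier in the pipeline
int numTables = 3;  // my_table_1 .. my_table_3

// Split the stream into one slice per destination table, keyed on the record's table field.
PCollectionList<MyPojo> slices = input.apply("SplitByTable",
    Partition.of(numTables, (MyPojo row, int n) -> row.getTable() - 1));

// Attach a separate ClickHouseIO.write to each slice.
for (int i = 0; i < numTables; i++) {
  String table = "my_table_" + (i + 1);
  slices.get(i).apply("WriteTo_" + table,
      ClickHouseIO.<MyPojo>write("jdbc:clickhouse:localhost:8123/default", table));
}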
You might want to file a feature request in the Beam bug tracker for this.
OrientDB throws the following NullPointerException on every live query subscription over the binary protocol:
Error executing live query subscriber. java.lang.NullPointerException at com.orientechnologies.orient.server.network.protocol.binary.OLiveCommandResultListener.onLiveResult(OLiveCommandResultListener.java:113)
at com.orientechnologies.orient.core.sql.OCommandExecutorSQLLiveSelect$2.call(OCommandExecutorSQLLiveSelect.java:134)
at com.orientechnologies.orient.core.sql.OCommandExecutorSQLLiveSelect.execInSeparateDatabase(OCommandExecutorSQLLiveSelect.java:144)
at com.orientechnologies.orient.core.sql.OCommandExecutorSQLLiveSelect.onLiveResult(OCommandExecutorSQLLiveSelect.java:131)
at com.orientechnologies.orient.core.query.live.OLiveQueryQueueThread.run(OLiveQueryQueueThread.java:69)
The live query is subscribed by one client: "live select from Account where CheckInDateTime like "2018-02-25%"", and OrientDB also returns the live request token ID gracefully. But when another client updates Account with "update Account set CheckInDateTime = "2018-02-25 13:00:00"", the mentioned NullPointerException is thrown. I've tried versions 2.2.30 and 2.2.32 Community, with both DB Administrator and Server Administrator accounts. Loading of plugins also seems not to work (even though in 2.2.30 and 2.2.32 Live Query should be enabled on the server by default). Nothing seems to help, not even simpler queries like "live select from account" (without the where clause).
Any further ideas? Thx.
Currently live queries do not support the WHERE clause you included in the query.
You can only select entire collections or V and E (which is what I use to get all updates)
If you would like to filter on that WHERE condition, you will have to apply it yourself in code, e.g. along the lines of the sketch below.
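As a rough sketch with the 2.2.x Java live query API (connection details are illustrative, and it assumes CheckInDateTime is stored as a string, as in your update statement), the filtering can be done in the listener:

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.db.record.ORecordOperation;
import com.orientechnologies.orient.core.record.impl.ODocument;
import com.orientechnologies.orient.core.sql.query.OLiveQuery;
import com.orientechnologies.orient.core.sql.query.OLiveResultListener;

ODatabaseDocumentTx db = new ODatabaseDocumentTx("remote:localhost/mydb").open("admin", "admin");

// Subscribe without the WHERE clause and apply the date filter client side.
db.query(new OLiveQuery<ODocument>("live select from Account", new OLiveResultListener() {
  @Override
  public void onLiveResult(int token, ORecordOperation op) {
    ODocument doc = (ODocument) op.getRecord();
    String checkIn = doc.field("CheckInDateTime");
    if (checkIn != null && checkIn.startsWith("2018-02-25")) {
      System.out.println("Matching change: " + doc.toJSON());
    }
  }

  @Override
  public void onError(int token) { }

  @Override
  public void onUnsubscribe(int token) { }
}));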
Thanks for answering, mitchken! Fortunately, I found the mistake: the TCP connection from the client to the DB was in the wrong state (it must stay in waitforreadyread the whole time).
I'm trying to understand how a Java (client) application that communicates with a pgSQL database (server) through JDBC can "catch" the result produced by a query that is fired (by a trigger) whenever a record is inserted into a table.
So, to clarify: via JDBC I install a trigger procedure that executes a query whenever a record is inserted into a given table, and that query's execution produces an output (wrapped in a ResultSet, I suppose). My problem is that I have no idea how the client can become aware of those asynchronously produced results.
I wonder if JDBC supports any "callback" mechanism able to catch the results produced by a query that is fired through a trigger procedure under the "INSERT INTO table" condition. And if there is no such "callback" mechanism, what is the best approach to achieve this result?
Thank you in advance :)
Triggers can't return a resultset.
There's no way to send such a result to the JDBC driver.
There are a few dirty hacks you can use to get results from a trigger to the client, but they're all exactly that. Things like:
DECLARE a cursor for the resultset, then send the cursor name as a NOTIFY payload, so the app can FETCH ALL FROM <cursorname>;
Create a TEMPORARY table and report the name via NOTIFY
It is more typical to have the trigger append anything it needs to communicate to the app to a table that exists for that purpose, and have the app SELECT from it after the operation that fired the trigger has run.
In most cases if you need to do this, you're probably using a trigger where a regular function is a better fit.
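For the NOTIFY-based hacks above, note that the stock PostgreSQL JDBC driver has no push-style callback either: the client has to LISTEN on a channel and poll for notifications. A minimal sketch (channel name, credentials, and polling interval are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class TriggerNotificationPoller {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydb", "app", "secret")) {

      // The trigger is assumed to run: NOTIFY trigger_results, '<payload>'
      try (Statement st = conn.createStatement()) {
        st.execute("LISTEN trigger_results");
      }

      PGConnection pgConn = conn.unwrap(PGConnection.class);
      while (true) {
        // Issuing a lightweight statement lets the driver pick up pending notifications.
        try (Statement st = conn.createStatement()) {
          st.execute("SELECT 1");
        }
        PGNotification[] notes = pgConn.getNotifications();
        if (notes != null) {
          for (PGNotification n : notes) {
            System.out.println("channel=" + n.getName() + " payload=" + n.getParameter());
          }
        }
        Thread.sleep(500);  // crude polling interval
      }
    }
  }
}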
I have a table t that is being updated in kdb+ in real time. What query do I use to subscribe to the table?
Thanks.
Is it the classic tick.q setup?
If so, the following will work, where h is the handle to the tickerplant, t is the table name, and s is the subset of symbols that you wish to subscribe to:
/ subscribe and initialize
$[`~t;(upd .)each;(upd .)]h(".u.sub";t;s);
The above is from c.q: https://github.com/KxSystems/kdb/blob/master/tick/c.q
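In that setup, h(".u.sub";t;s) asks the tickerplant to register your handle for table t (use the empty symbol ` for all tables) and symbol filter s. It returns the table name paired with its schema (typically an empty copy of the table in a vanilla tickerplant) — or a list of such pairs when subscribing to all tables — which the $[...] conditional then feeds to upd (with each in the all-tables case) to initialise the local table(s) before streamed updates start arriving.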
If both pub/sub services still need to be set up, you can follow tick.q as an example of how it can be done:
https://code.kx.com/q/tutorials/startingq/tick/