Crate - Records ingested in Crate but not showing via SELECT query

I need help with a Crate issue. I have a Crate cluster running version 1.0.2. For the last few days we have been observing a strange issue with SELECT queries on the Crate server. We found that SELECT queries do not return records for a few specific dates, even though those records were properly ingested into Crate. We verified this via the JSON files present on all cluster nodes.
Can someone give us some input how to resolve this?
The following troubleshooting was performed:
a. Restarted all the Crate clusters.
b. Refreshed all the clusters.
c. Tried to delete the partition for the day whose data is not showing, but Crate does not allow us to delete it.
Thanks in advance for all your help.

CrateDB is eventually consistent. If you do not look up the newly inserted records by primary key, you need to issue a REFRESH command: https://crate.io/docs/crate/reference/en/latest/general/dql/refresh.html
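For example, a minimal sketch (the table and partition column names are hypothetical):

-- Make recently ingested rows visible to SELECT.
REFRESH TABLE sensor_readings;

-- The rows for the affected dates should now be returned.
SELECT count(*) FROM sensor_readings WHERE day = '2017-03-01';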

Related

How to process and insert millions of MongoDB records into Postgres using Talend Open Studio

I need to process millions of records coming from MongoDB and build an ETL pipeline to insert that data into a PostgreSQL database. However, with every method I've tried, I keep getting an out-of-memory (heap space) exception. Here's what I've already tried:
Tried connecting to MongoDB using tMongoDBInput and added a tMap to process the records and output them via a connection to PostgreSQL. tMap could not handle it.
Tried to load the data into a JSON file and then read from the file into PostgreSQL. The data got loaded into the JSON file, but from there on I got the same memory exception.
Tried increasing the RAM for the job in the settings and tried the above two methods again, still no change.
I specifically wanted to know if there's any way to stream this data or process it in batches to counter the memory issue.
Also, I know that there are some components dealing with BulkDataLoad. Could anyone please confirm whether that would be helpful here, given that I want to process the records before inserting, and if so, point me to the right kind of documentation to get it set up.
Thanks in advance!
As you have already tried all the possibilities, the only way I can see to meet this requirement is to break the job down into multiple sub-jobs, or to go with an incremental load based on key columns or date columns, considering this as a one-time activity for now.
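A rough sketch of the incremental-load idea (table and column names here are assumptions): read the high-water mark from the PostgreSQL target, then use it to filter the source read so each run only moves one slice of the data.

-- Hypothetical target table/column; returns the timestamp of the newest row already loaded.
SELECT COALESCE(MAX(updated_at), TIMESTAMP '1970-01-01 00:00:00') AS last_loaded_at
FROM target_table;

-- Feed last_loaded_at into the source query (e.g. the tMongoDBInput query/filter)
-- so only documents with updated_at > last_loaded_at are processed in this run.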
Please let me know if it helps.

Tableau Queries with JOINS and check for NULL are failing in ClickHouse

I am running Tableau connected to ClickHouse via the ODBC driver. At first almost every report request was failing. I configured this TDC file https://github.com/yandex/clickhouse-odbc/blob/clickhouse-tbc/clickhouse.tdc and it actually started to work; however, some query requests with JOINs that contain a NULL check in the ON clause are still failing, because Tableau uses IS NULL instead of isNull(id):
JOIN users ON ((users.user_id = t0.user_id) OR ((users.user_id IS NULL) AND (t0.user_id IS NULL)))
This is the correct way that works:
JOIN users ON ((users.user_id = t0.user_id) OR ((isNull(users.user_id) = 1) AND (isNull(t0.user_id) = 1)))
How can I make the Tableau driver send the right request?
Here are a few suggestions:
This post on the Tableau Community looks like it has similar symptoms to the ones you describe. The suggested resolution is to wrap all fields like this: IfNull([Dimension], ""), thereby reducing the need, apparently, to have ClickHouse do the NULL check.
The TDC file from GitHub looks pretty complete, but they might not have taken joins into consideration. The GitHub commit states that the TDC is "untested". I would message the creator of that TDC and ask whether they've done any work around joins and whether they have any suggestions.
Here is a list of possible ODBC customizations that can be added to or removed from your TDC file. Finding the right combination may take some experimentation, but they're well worth researching as a possible solution.
Create an extract before performing complex analysis. If you're able to connect initially, then it should be possible to bring all the data from Clickhouse into an extract.
Custom SQL would probably alleviate any join syntax issues, because the query and any joins are written entirely by you. After making the initial connection to ClickHouse, instead of choosing a table, select "Custom ODBC" and write a query that returns the joined tables of your choosing (see the sketch after this list).
Finally, the Tableau Ideas Forum is a place to ask for and/or vote on upcoming connectors. I can see there is already an idea in place for ClickHouse. Feel free to vote it up.
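To illustrate the Custom SQL route (table and column names are hypothetical), the join can be written up front in the form the question reports as working in ClickHouse:

-- Null-safe join spelled out with isNull() instead of Tableau's generated IS NULL.
SELECT t0.user_id, users.name
FROM t0
JOIN users
  ON ((users.user_id = t0.user_id)
      OR ((isNull(users.user_id) = 1) AND (isNull(t0.user_id) = 1)))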
If you can make sure not to have any NULL values in the data, you can also use this proxy that I wrote for this exact problem.
https://github.com/kfzteile24/clickhouse-proxy
It kinda worked, for most cases, but it's not bullet-proof.

DB2 updated rows since last check

I want to periodically export data from db2 and load it in another database for analysis.
In order to do this, I would need to know which rows have been inserted/updated since the last time I've exported things from a given table.
A simple solution would probably be to add a timestamp to every table and use that as a reference, but I don't have such a TS at the moment, and I would like to avoid adding it if possible.
Is there any other solution for finding the rows which have been added/updated after a given time (or something else that would solve my issue)?
There is an easy option for a timestamp in Db2 (for LUW) called ROW CHANGE TIMESTAMP.
This column is managed by Db2 and can be defined as IMPLICITLY HIDDEN, so existing SELECT * FROM queries will not retrieve the new column, which would otherwise cause extra costs.
Check out the Db2 CREATE TABLE documentation
This functionality was originally added for optimistic locking but can be used for such situations as well.
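A minimal sketch of how this could look (table and column names are hypothetical; verify the exact syntax against the documentation linked above):

-- Add a Db2-maintained change timestamp that stays invisible to SELECT *.
ALTER TABLE orders
  ADD COLUMN changed_at TIMESTAMP NOT NULL
      GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP
      IMPLICITLY HIDDEN;

-- Export only rows inserted or updated since the last run.
SELECT *
FROM orders
WHERE ROW CHANGE TIMESTAMP FOR orders > :last_export_ts;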
There is a similar concept for Db2 for z/OS - you will have to check that out yourself, as I have not tried it.
Of course, there are other ways to solve this, like replication, etc.
That is not possible if you do not have a timestamp column. With a timestamp, you can know which rows are new or modified.
You can also use the time travel feature (system-period temporal tables) in order to get the new values, but that also implies a timestamp column (see the sketch below).
Another option is to put the tables in append mode and then fetch the rows after a given one. However, this approach is not reliable after a reorg, and it affects performance and space utilisation.
One possible option is to use SQL replication, but that needs extra staging tables.
Finally, another option is to read the transaction logs with the db2ReadLog API, but that implies custom development. Alternatively, simply applying the archived logs to the new database is possible; however, that database will then remain in roll-forward pending state.
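A rough sketch of the time travel option mentioned above, assuming a system-period temporal table (with its history table and versioning) has already been set up; the table and column names are hypothetical:

-- sys_start is the table's GENERATED ALWAYS AS ROW BEGIN column, i.e. the
-- timestamp of the last insert/update of each current row.
SELECT *
FROM orders
WHERE sys_start > TIMESTAMP('2024-01-01-00.00.00');

-- Older versions of updated rows can additionally be read with
-- FOR SYSTEM_TIME FROM ... TO ... if they are needed for the export.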

Kafka JDBC connector not picking up new commits

I am currently using a Kafka JDBC connector to poll records from an Oracle DB. The connector properties are set to use timestamp mode, and we have provided a simple SELECT query in the properties (not using a WHERE clause) - based on my understanding this should work.
However, when instantiating the connector, I can see that the initial query does pull out all of the records it should and does publish them to the Kafka consumer - but any new commits to the Oracle DB are not picked up, and the connector just sits polling without finding any new info, maintaining its offset.
No exceptions are being thrown in the connector, and there is no indication of a problem other than it not picking up the new commits in the DB.
One thing of note, which I have been unable to prove makes a difference, is that the fields in the Oracle DB are all nullable. But I have tested changing that for the timestamp field, and it had no effect; the same behaviour continued. I have also tested bulk mode and it works fine and does pick up new commits, though I cannot use bulk mode because we cannot have duplicate records in the system.
Does anyone have any idea why the connector is unable to pick up new commits for timestamp mode?
What does your properties file look like? You need to make sure to use an incrementing column or a timestamp column.
If you are using a timestamp column, is it getting updated on the commit?
Regarding nulls, you can tweak your query to coalesce the null column to a value (see the sketch below). Alternatively, I think there is a setting to allow nullable columns (validate.non.null, if I remember correctly).
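A sketch of the coalesce idea for the connector's query (table and column names are hypothetical), so timestamp mode always sees a non-null value; the connector then appends its own timestamp filter to this query:

-- Fall back to a second column (or a constant) when the preferred timestamp is NULL.
SELECT id,
       payload,
       COALESCE(last_modified, created_at) AS last_modified
FROM my_schema.my_table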

How to log a talend job result?

We have many Talend jobs that transfer data from Oracle (tOracleInput) to Redshift (tRedshiftOutputBulkExec). I would like to store the result information in a DB table. For example:
Job name, start time, running time, rows loaded, successful or failed
I know that if I turn on log4j, most of that information can be derived from the log. However, saving it into a DB table will make it easier to check and report on the results.
I'm most interested in the records loaded. I checked this link http://www.talendbyexample.com/talend-logs-and-errors-component-reference.html and the manual of tRedshiftOutputBulkExec. Neither of them gives me such information.
Will Talend Administration Center provide such a function? What is the best way to implement this?
Thanks,
After looking at the URL you provided, tLogCatcher should provide you with what you need (minus the rows loaded, which you can get with a lookup).
I started with Talend Studio version 6.4.1. There you can enable "Stats & Logs" for a job. It can log to the console, to files or to a database. When writing to a DB you set the JDBC parameters and the names of three tables:
Stats Table: stores start and end timestamps of the job
Logs Table: stores error messages
Meter Table: stores the count of rows for each monitored flow
They correspond to the components tStatCatcher, tLogCatcher, tFlowMeterCatcher, where you can find the needed table schema.
To make a flow monitored, select it, open the "Component" tab and tick the "Monitor this connection" checkbox.
To see the logged values you can use the AMC (Activity Monitoring Console) in Studio or TAC.
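As a rough illustration (the table and column names below are assumptions; use the actual schemas of tStatCatcher and tFlowMeterCatcher from your Studio version), the logged tables can then be queried to build the report described in the question:

-- Job name, start/end time and summed row count per job execution.
SELECT s.job          AS job_name,
       MIN(s.moment)  AS started_at,
       MAX(s.moment)  AS ended_at,
       SUM(m."count") AS rows_loaded
FROM talend_stats s
JOIN talend_meter m ON m.root_pid = s.root_pid
GROUP BY s.root_pid, s.job;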