In my Oracle GoldenGate for Big Data Kafka handler, when I try to update a record I get only the updated column and the primary key column in the "after" part of the JSON output:
{"table":"MYSCHEMATOPIC.PASSPORTS","op_type":"U","op_ts":"2018-03-17 13:57:50.000000","current_ts":"2018-03-17T13:57:53.901000","pos":"00000000030000010627","before":{"PASSPORT_ID":71541893,"PPS_ID":71541892,"PASSPORT_NO":"1234567","PASSPORT_NO_NUMERIC":241742,"PASSPORT_TYPE_ID":7,"ISSUE_DATE":null,"EXPIRY_DATE":"0060-12-21 00:00:00","ISSUE_PLACE_EN":"UN-DEFINED","ISSUE_PLACE_AR":"?????? ????????","ISSUE_COUNTRY_ID":203,"ISSUE_GOV_COUNTRY_ID":203,"IS_ACTIVE":1,"PREV_PASSPORT_ID":null,"CREATED_DATE":"2003-06-08 00:00:00","CREATED_BY":-9,"MODIFIED_DATE":null,"MODIFIED_BY":null,"IS_SETTLED":0,"MAIN_PASSPORT_PERSON_INFO_ID":34834317,"NATIONALITY_ID":590},
"after":{"PASSPORT_ID":71541893,"NATIONALITY_ID":589}}
I want the "after" part of the JSON output to show all columns. How can I get all columns in the "after" part?
gg.handlerlist = kafkahandler
gg.handler.kafkahandler.type=kafka
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
#The following resolves the topic name using the short table name
gg.handler.kafkahandler.topicMappingTemplate=passports
gg.handler.kafkahandler.format=json
gg.handler.kafkahandler.BlockingSend=false
gg.handler.kafkahandler.includeTokens=false
gg.handler.kafkahandler.mode=op
#gg.handler.kafkahandler.format.insertOpKey=I
#gg.handler.kafkahandler.format.updateOpKey=U
#gg.handler.kafkahandler.format.deleteOpKey=D
#gg.handler.kafkahandler.format.truncateOpKey=T
#gg.handler.kafkahandler.format.includeColumnNames=TRUE
goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE
gg.log=log4j
gg.log.level=info
gg.report.time=30sec
Try using the Kafka Connect handler instead - this includes the full payload. This article goes through the setup process.
Hi, this issue is fixed by adding the following change on the GoldenGate side:
ADD TRANDATA table_name ALLCOLS
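If changing trandata on the source isn't an option, a downstream consumer can also reconstruct the full row by overlaying the sparse "after" image onto the "before" image. This is only a minimal sketch, and it assumes the "before" image carries all columns, as it does in the sample record above:

```python
import json

# Abbreviated update record in the shape shown above:
# "before" has the full row, "after" only the key plus changed columns.
record = json.loads("""
{"op_type": "U",
 "before": {"PASSPORT_ID": 71541893, "NATIONALITY_ID": 590, "IS_ACTIVE": 1},
 "after":  {"PASSPORT_ID": 71541893, "NATIONALITY_ID": 589}}
""")

# Overlay: start from the full before-image, let "after" values win.
full_after = {**record["before"], **record["after"]}

print(full_after["NATIONALITY_ID"])  # 589 (updated value)
print(full_after["IS_ACTIVE"])       # 1 (carried over from "before")
```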
Has anybody had any success in applying a run_date parameter when creating a Transfer in BigQuery using the Transfer Service UI?
I'm taking a CSV file from Google Cloud storage and I want to mirror this into my ingestion date partitioned table, table_a.
Initially I set the destination table as table_a, which resulted in the following message in the job log:
Partition suffix has to be set for date-partitioned tables. Please recreate your transfer config with a valid table name. For example, to load new files to partition of the run date, specify your table name as transferTest${run_date} for daily partitioning or transferTest${run_time|"%Y%m%d%H"} for hourly partitioning.
I then set the destination to table_a$(run_date), which issues the warning:
Invalid table name. Please use only alphabetic, numeric characters or underscore with supported parameters wrapped in brackets.
However, it won't accept table_a_(run_date) either. Could anyone please advise?
best wishes
Dave
Apologies, I've identified the correct syntax now:
table_a_{run_date}
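For reference, a quick sketch of what the run-date template resolves to. The substitution itself happens inside the Transfer Service; the formats below are assumptions based on the error message quoted above, illustrated with a hypothetical run date:

```python
from datetime import datetime

# Hypothetical run date; the Transfer Service substitutes its own.
run = datetime(2018, 3, 17, 13, 0)

# {run_date} -> YYYYMMDD suffix, for daily partitioning
daily = "table_a_" + run.strftime("%Y%m%d")

# {run_time|"%Y%m%d%H"} -> hourly suffix, per the error message's example
hourly = "table_a_" + run.strftime("%Y%m%d%H")

print(daily)   # table_a_20180317
print(hourly)  # table_a_2018031713
```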
I've got a data stream coming from the mongo CDC connector, but the trouble is that the stream key is in the form of a JSON string.
e.g.
{"id":"{ \"$oid\" : \"5bbb0c70cd0b9c06cf06c9c1\"}"}
I know that I can use the EXTRACTJSONFIELD method to extract the data using a JSONPath; however, I can't figure out how to extract the field with the literal dollar symbol. I've tried:
$.id.$oid
$.id[\$oid]
$.id.*
Each time I get a null response. Any ideas?
I guess that your problem is related to issue #1403.
You can use [\\" field_name \\"] to reference the column. For example,
SELECT EXTRACTJSONFIELD(test,'$[\\"$oid\\"]') FROM testing;
I have faced the same issue with Debezium MongoDB Connector.
Using [\\" field_name \\"] as #Giorgos pointed out didn't work for me with ksqlDB 0.21.0.
Instead, [\" field_name \"] (single backslash) works.
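To see why the field reference needs quoting at all, note that the Mongo CDC key is doubly encoded: the value of id is itself a JSON string, and the inner document's key literally starts with $, so a bare $oid in a JSONPath reads as an operator rather than a field name. The two decoding steps, sketched in plain Python with the sample key from the question:

```python
import json

# The stream key as delivered: JSON whose "id" value is escaped JSON.
key = '{"id":"{ \\"$oid\\" : \\"5bbb0c70cd0b9c06cf06c9c1\\"}"}'

outer = json.loads(key)           # {"id": '{ "$oid" : "5bbb..." }'}
inner = json.loads(outer["id"])   # decode the inner document

oid = inner["$oid"]               # "$oid" is a literal key, not an operator
print(oid)  # 5bbb0c70cd0b9c06cf06c9c1
```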
The SymmetricDS server is configured with PostgreSQL 9.4, and the client nodes run SQLite3. I recently had to alter a table on the server end and then send the schema to the clients with the command symadmin send-schema --engine <server> --node <node> <table>
One of the changes was the addition of a default value on a date field: update_date date DEFAULT ('now'::text)::date
Since the change was applied, I am seeing the following error message in the SymmetricDS log on the server side:
ERROR [<server>] [AcknowledgeService] [qtp1874154700-1322] The outgoing batch <node>-41837 failed. ERROR: invalid input syntax for type date: "'now'::text)::date"
Is this error showing up because SQLite3 does not support ('now'::text)::date as a default value? In that case, how can I propagate the changes?
OR
Is it a SymmetricDS issue, in that it does not recognize ('now'::text)::date as a default value for the update_date field?
I suspect that, due to this error, all synchronization between client and server is stopping.
Any clue is appreciated.
I hope the problem is not in production.
You'll need to delete the outgoing batch with the change, or just the link between it and the ALTER TABLE.
Then use the command line to send the custom native SQL DDL statement to each node from the central node, or do it manually by connecting remotely to each node.
A batch in error will hold up all batches behind it. You can ignore the batch in error by setting its status to "IG" on that particular batch. However, this will result in none of the change captures in that batch being applied on the target.
Are you sure the default value was applied correctly on SQLite? Here is an example of a table with a default date value of now:
create table forum (id int primary key, some_date date default(date('now')));
You can then send the appropriate alters or creates to your clients through the send sql feature.
References:
Open source send documentation.
Professional send documentation.
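To double-check that SQLite accepts the default shown above, the example table can be exercised with Python's built-in sqlite3 module (a minimal sketch against an in-memory database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "create table forum (id int primary key, "
    "some_date date default(date('now')))")

# Insert without specifying some_date so the default expression fires.
conn.execute("insert into forum (id) values (1)")

row = conn.execute("select some_date from forum").fetchone()
print(row[0])  # today's date as YYYY-MM-DD
```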
I am using a Java-based program, and I am writing a simple SELECT query inside that program to retrieve data from a PostgreSQL database. The data comes with a header, which breaks the rest of my code.
How do I get rid of all column headings in an SQL query? I just want to print out the raw data without any headings.
I am using the Building Controls Virtual Test Bed (BCVTB) to connect my database to EnergyPlus. BCVTB has a database actor in which you can write a query, receive data, and send it to your other simulation program. I decided to use PostgreSQL. However, when I write SELECT * FROM mydb, it returns the data with the column names (header). I just want raw data without a header. What should I do?
PostgreSQL does not send table headings the way a CSV file does. The protocol (as used via JDBC) sends only the rows. The driver does request a description of the rows that includes the column names, but it is not part of the result-set rows, unlike the "header first" convention in CSV.
Whatever is happening must be a consequence of the BCVTB tools you are using, and I suggest pursuing it on that side of things.
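The same separation holds in other database APIs: the rows and the column description arrive through different calls, so there is no header to strip. A sketch using Python's sqlite3 as a stand-in (in JDBC, ResultSetMetaData plays the role that cursor.description plays here; table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table mydb (a int, b text)")
conn.execute("insert into mydb values (1, 'x')")

cur = conn.execute("select * from mydb")
rows = cur.fetchall()                     # only the data, no header row
names = [d[0] for d in cur.description]   # column names live in metadata

print(rows)   # [(1, 'x')]
print(names)  # ['a', 'b']
```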
I have a job in Talend that inserts data into a table.
Can I get the SQL statements it generates (i.e. "insert into tabla(a,b) values(....)")?
You can see the inserted data by adding a tLogRow component, but if you want to see the generated INSERT in real time, you can use the debugger.
For example, for the following job:
Above you can see the data inserted from an Excel file into a MySQL table; this was generated using tLogRow. But if you want the generated SQL statement, you can see it by using the debugger:
Hope this helps.
You could simply place a tLogRow component either before or after your database output component to log things to the console if you are interested in seeing what data is being sent to the database.
I think it's impossible to see (it would be a nice improvement in future releases). My problem was that when I changed the source of my database output (Oracle SID to Oracle RAC), the inserts were still made in the old database.
I fixed it by changing the XML code in the "item" file. Even after the change in the job, the old parameters attached to the Oracle SID were still there.
Thanks a lot!! Have a nice weekend Goon10 and ydaetskcoR!
You can check the generated Java code. You'll see an:
INSERT INTO (columns) VALUES (?,?,?)
That's the INSERT PreparedStatement. Talend uses prepared statements to do the inserts, so only one INSERT statement is generated and sent. In the main part of the component it will call
setString(position, value)
Please refer to: http://docs.oracle.com/javase/tutorial/jdbc/basics/prepared.html
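The same pattern can be sketched outside Talend: one parameterized INSERT is prepared once and then executed with per-row values. Python's sqlite3 is shown here in place of JDBC's PreparedStatement, with the question's table name reused for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table tabla (a text, b text)")

# One statement with placeholders, like INSERT INTO ... VALUES (?,?)
sql = "insert into tabla (a, b) values (?, ?)"
rows = [("r1a", "r1b"), ("r2a", "r2b")]

# executemany binds each tuple to the same prepared statement,
# analogous to Talend calling setString(position, value) per row.
conn.executemany(sql, rows)

count = conn.execute("select count(*) from tabla").fetchone()[0]
print(count)  # 2
```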