SymmetricDS fails to synchronize default value column - PostgreSQL

The SymmetricDS server is configured with PostgreSQL 9.4 and the client nodes run SQLite3. I recently had to alter a table at the server end and then send the schema to the clients with the command symadmin send-schema --engine <server> --node <node> <table>
One of the changes was the addition of a default value on a date field: update_date date DEFAULT ('now'::text)::date
Since the change was applied, I am seeing the following error message in the SymmetricDS log on the server side:
ERROR [<server>] [AcknowledgeService] [qtp1874154700-1322] The outgoing batch <node>-41837 failed. ERROR: invalid input syntax for type date: "'now'::text)::date"
Is this error showing up because SQLite3 does not support ('now'::text)::date as a default value? In that case, how can I propagate the change?
OR
Is it a SymmetricDS issue, in that it does not recognize ('now'::text)::date as a default value for the update_date field?
I suspect that because of this error all synchronization between the client and the server is stopping.
Any clue is appreciated.

I hope the problem is not in production.
You'll need to delete the outgoing batch containing the change, or just the link between it and the ALTER TABLE.
Then use the command line to send a custom native SQL DDL statement to each node from the central node, or apply it manually by connecting to each node remotely.

A batch in error will hold up all the batches behind it. You can ignore the batch in error by setting the status to "IG" on that particular batch. However, this would result in none of the change captures in that batch being applied on the target.
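If you do decide to ignore it, the status change is normally made on the sym_outgoing_batch runtime table. A minimal sketch, using the batch ID from the error above (verify the table layout for your version first):
update sym_outgoing_batch set status = 'IG' where batch_id = 41837 and node_id = '<node>';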
Are you sure the default value applied correctly on SQLite? Here is an example of a table with a default date value of now:
create table forum (id int primary key, some_date date default(date('now')));
You can then send the appropriate alters or creates to your clients through the send sql feature.
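One caveat if the update_date column already exists on the clients: SQLite has no ALTER COLUMN ... SET DEFAULT, so applying the new default there means rebuilding the table. A minimal sketch, assuming a hypothetical table my_table whose only other column is id:
BEGIN;
CREATE TABLE my_table_new (id int primary key, update_date date default (date('now')));
INSERT INTO my_table_new (id, update_date) SELECT id, update_date FROM my_table;
DROP TABLE my_table;
ALTER TABLE my_table_new RENAME TO my_table;
COMMIT;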
References:
Open source send documentation.
Professional send documentation.

What could cause Firebird to silently turn calculated fields into "normal" fields?

I'm using Firebird 2.5.8 to store information for software I designed.
A customer contacted me today to inform me of multiple errors that I couldn't understand, and I used the IBExpert tool to inspect their database.
To my surprise, all the calculated fields had been transformed into "standard" fields. This is clearly visible in the "DDL" tab of the database tool, which displays table definitions as SQL code.
For instance, the following table definition:
CREATE TABLE TVERSIONS (
...
PARENTPATH COMPUTED BY (((SELECT TFILES.FILEPATH FROM TFILES WHERE ID = TVERSIONS.FILEID))),
....
ISCOMPLETE COMPUTED BY ((((SELECT TBACKUPVERSIONS.ISCOMPLETE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION)))),
CDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION))),
DDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.DVERSION))),
...
);
has been "changed" in the client database into this:
CREATE TABLE TVERSIONS (
...
PARENTPATH VARCHAR(512) CHARACTER SET UTF8 COLLATE UNICODE,
...
ISCOMPLETE SMALLINT,
CDATE TIMESTAMP,
DDATE TIMESTAMP,
...
);
How can such a thing be possible?
I've been using Firebird for more than 10 years, and I've never seen such behavior until now. Is it possible that this is a corruption of the RDB$FIELDS.RDB$COMPUTED_SOURCE fields?
What would you advise?
To summarize the discussion on firebird-support (and comments above):
The likely cause of this happening is that the database was backed up and restored using gbak, and the restore did not complete successfully. If this happens, gbak will have ended in an error, and the database is in single shutdown state (which means only SYSDBA or the database owner is allowed to create one connection). If the database is not currently in single shutdown mode, someone used gfix to bring the database online again in normal state.
When a database is restored using gbak, calculated fields are initially created as normal fields (though their values are not part of the backup). After data is restored successfully, those fields are altered to be calculated fields. If there are any errors before or during redefinition of the calculated fields, the restore will fail, and the database will be in single shutdown state, and the calculated fields will still be "normal" fields.
I recommend doing a structural comparison of the database to check if calculated fields are the only problem, or if other things (e.g. constraints) are missing. A simple way to do this is to export the DDL of the database and a "known-good" database, for example using ISQL (command line option -extract), and comparing them with a diff tool.
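For example (a sketch; credentials and file names are placeholders):
isql -user SYSDBA -password masterkey -extract -output known_good.sql known_good.fdb
isql -user SYSDBA -password masterkey -extract -output damaged.sql damaged.fdb
diff known_good.sql damaged.sql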
Then either fix the existing database by executing the necessary DDL to restore calculated fields (and other things), or create a new empty database, and move the data from the old to the new (using a datapump tool).
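For example, restoring the PARENTPATH field from the definition shown in the question would drop the damaged "normal" column and re-add it as computed (a sketch; computed columns store no data, but verify against your known-good DDL before running anything):
ALTER TABLE TVERSIONS DROP PARENTPATH;
ALTER TABLE TVERSIONS ADD PARENTPATH COMPUTED BY
((SELECT TFILES.FILEPATH FROM TFILES WHERE ID = TVERSIONS.FILEID));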
Also check if any data is missing. By default, gbak restores the data in a single transaction, so in that case either all data is present or all data is missing. However, gbak also has a "transaction-per-table" mode (-ONE_AT_A_TIME or -O), which could mean some tables have data, and others have no data.

BigQuery Transfer Service UI - run_date parameter

Has anybody had any success in applying a run_date parameter when creating a transfer in BigQuery using the Transfer Service UI?
I'm taking a CSV file from Google Cloud storage and I want to mirror this into my ingestion date partitioned table, table_a.
Initially I set the destination table as table_a, which resulted in the following message in the job log:
Partition suffix has to be set for date-partitioned tables. Please recreate your transfer config with a valid table name. For example, to load new files to partition of the run date, specify your table name as transferTest${run_date} for daily partitioning or transferTest${run_time|"%Y%m%d%H"} for hourly partitioning.
I then set the destination to table_a$(run_date), which then issued the warning:
Invalid table name. Please use only alphabetic, numeric characters or underscore with supported parameters wrapped in brackets.
However, it won't accept table_a_(run_date) either. Could anyone please advise?
Best wishes,
Dave
Apologies - I've identified the correct syntax now:
table_a_{run_date}
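For anyone else hitting this: per the Transfer Service documentation, the run_time parameter also accepts an offset and a format string, so a transfer loading into the previous day's partition could use something like (a sketch):
table_a_{run_time-24h|"%Y%m%d"}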

How to make a Cassandra connection using the CQL used to create a table?

I am new to Tableau. I went through the site before posting this question, but didn't find an answer matching my question.
I have established a connection to Cassandra successfully using the DataStax Cassandra ODBC driver (64-bit, Windows); everything is fine, and I filled in all the details (keyspace name, table name) as per the documentation available on the DataStax site.
But when I drag the available table onto the canvas, it keeps on loading for minutes. What the database guy has told me about the data is that there are millions of rows for one day, we have six months of data, and the data gets updated every 10 minutes; it's for a reputed wind energy company.
My client has given me the CQL used for creating the table:
create table abc_data_test.machine_data
(machine_id text, tag text, timestamp timestamp, value double,
PRIMARY KEY((machine_id, tag), timestamp))
WITH CLUSTERING ORDER BY(timestamp DESC)
AND compression = { 'sstable_compression' : 'LZ4Compressor' };
Where do I put this code?
I tried to insert it on the connection page, and it gives an error. I am getting a New Custom SQL error (I placed the code in "New Custom SQL").
The timer is still running; it can be seen as:
processing request: connecting to datasource, Elapsed time 87:09
The error from New Custom SQL is:
An error occurred while communicating with the data source. [DataStax][CassandraODBC] (10) Error while executing a query in Cassandra:33562624: line 1.11 no viable alternative at input '1' (SELECT [TOP]1...)
I'm using Windows 10 64-bit, the DataStax ODBC driver 64-bit version 2.4.1, and DSE 4.8 or later.
You cannot pass DDL SQL into the Custom SQL box. If the Cassandra connection supports the Initial SQL option, you could pass it there. Then your Custom SQL would be some sort of SELECT statement. Otherwise, create the table in Cassandra first, then connect to that table from Tableau.
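For example, you could run the client's CREATE TABLE once through cqlsh, then give Tableau a plain SELECT as the Custom SQL. A sketch with placeholder values for the partition key (restricting both machine_id and tag keeps the query on a single partition, which matters at this data volume):
SELECT machine_id, tag, timestamp, value
FROM abc_data_test.machine_data
WHERE machine_id = 'M001' AND tag = 'wind_speed'
LIMIT 1000;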

Unable to export data to PostgreSQL from Oracle

I have to extract data from Oracle tables and copy them to PostgreSQL. I am able to map both the input and output files. On running the connector component I get the proper row-fetching graphical image, but when I go to the table there is no data.
This one is for PostgreSQL to PostgreSQL. After running with trace debug, this is what I get:
[TRACE_DEBUG result screenshot not included]
Are you trying to read and write on the same table in the input and the output? (This could be a problem.)
What kind of action are you using in the output: insert, update, or insert-or-update?
Did you check if there is a lock on your output table?
Depending on the settings of your database connection, you may need to turn on auto-commit or add an explicit commit component at the end of the flow.
How is the output component configured?
Operation type: insert?
Is it doing a lookup?
Is the table name correct?
Did you check the error code global value for the component after it finishes?

DB2 400 drop column

I want to drop a column called id, which is an auto-incrementing primary key.
The SQL:
alter table "CO88GT"."XGLCTL" drop column id cascade;
And I get:
Error: [SQL0952] Processing of the SQL statement ended. Reason code 10.
SQLState: 57014
ErrorCode: -952
I could be wrong, but I think it has something to do with preventing the table from losing data. To get around this issue, I need to create a new table without the column, copy the data from the old table into the new table, and then replace the old table with the new one.
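A sketch of that workaround in SQL on DB2 for i, with hypothetical surviving columns col1 and col2 standing in for the real column list:
CREATE TABLE "CO88GT"."XGLCTL_NEW" AS
(SELECT col1, col2 FROM "CO88GT"."XGLCTL") WITH DATA;
DROP TABLE "CO88GT"."XGLCTL";
RENAME TABLE "CO88GT"."XGLCTL_NEW" TO XGLCTL;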
Info
The AS/400 is giving you a warning (inquiry message) because of possible data loss, asking you to Cancel or Ignore the requested operation. Because this is an interactive request, over JDBC/ODBC you cannot type 'I' to ignore it, and the AS/400 throws ErrorCode -952 with SQLState 57014 and reason code 10.
The documentation of SQL0952 says:
Message Text: Processing of the SQL statement ended. Reason code &1.
Cause Text: The SQL operation was ended before normal completion. The reason code is &1. Reason codes and their meanings are:
* 1 - An SQLCancel API request has been processed, for example from ODBC.
* 2 - SQL processing was ended by sending an exception.
* 3 - Abnormal termination.
* 4 - Activation group termination.
* 5 - Reclaim activation group or reclaim resources.
* 6 - Process termination.
* 7 - An EXIT function was called.
* 8 - Unhandled exception.
* 9 - A Long Jump was processed.
* 10 - A cancel reply to an inquiry message was received.
* 11 - Open Database File Exit Program (QIBM_QDB_OPEN).
* 0 - Unknown cause.
If you are using JDBC and the SQL error isn't self-explanatory, you can make a JDBC connection with parameter 'errors=full', which will give much more info on the error. For other connection parameters see this.
example connection string:
jdbc:as400://serverName;libraries=*libl;naming=system;errors=full;
With that connection the resulting error would be like this:
Error: [SQL0952] Processing of the SQL statement ended. Reason code 10.
Cause . . . . . : The SQL operation was ended before normal completion.
The reason code is 10.
Reason codes and their meanings are:
1 -- An SQLCancel API request has been processed, for example from ODBC.
2 -- SQL processing was ended by sending an exception.
3 -- Abnormal termination.
4 -- Activation group termination.
5 -- Reclaim activation group or reclaim resources.
6 -- Process termination.
7 -- An EXIT function was called.
8 -- Unhandled exception.
9 -- A Long Jump was processed.
10 -- A cancel reply to an inquiry message was received.
11 -- Open Database File Exit Program (QIBM_QDB_OPEN).
0 -- Unknown cause.
Recovery . . . : If the reason code is 1, a client request was made to cancel SQL processing. For all other reason codes, see previous messages to determine why SQL processing was ended.
SQLState: 57014
ErrorCode: -952
The solution
So finally, if you cannot use STRSQL, another solution is to use iSeries Navigator, to be exact its "Run SQL scripts" tool (usually found here: "%Program Files%\IBM\Client Access\Shared\cwbundbs.exe").
But first of all, you have to add a system reply list entry (only once per machine):
ADDRPYLE SEQNBR(1500) MSGID(CPA32B2) RPY('I')
This is done on the "green screen". It sets a default answer ('I') for the CPA32B2 inquiry message. CPA32B2 is an internal message ID, which is tied to the drop-column operation.
(It actually doesn't have to be done on the green screen; you can run it like the CHGJOB command below. Example:
cl: ADDRPYLE SEQNBR(1500) MSGID(CPA32B2) RPY('I');
)
Now you can start "Run SQL scripts"; the first command to run is:
cl: CHGJOB INQMSGRPY(*SYSRPYL);
This changes the current job's INQMSGRPY parameter to *SYSRPYL. *SYSRPYL causes the system to consult the system reply list whenever an inquiry message would otherwise be displayed.
Now you can run the ALTER that drops the column.
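For example, using the table from the question, the full script in "Run SQL scripts" would be:
cl: CHGJOB INQMSGRPY(*SYSRPYL);
alter table "CO88GT"."XGLCTL" drop column id cascade;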
Unfortunately, I don't know how to drop a column using JDBC alone. If someone knows, please let me know.
References:
Understanding What Controls the Automatic Reply Function
Replying to Run-Time Inquiry Messages
Error in dropping column (forum post)
How can I avoid SQL0952 on ALTER TABLE?
Finally I found a solution:
CALL QSYS2.QCMDEXC('ADDRPYLE SEQNBR(1500) MSGID(CPA32B2) RPY(''I'')');
CALL QSYS2.QCMDEXC('CHGJOB INQMSGRPY(*SYSRPYL)');
ALTER TABLE <tablename> DROP COLUMN <column_name>;
There isn't enough information in your error message to be sure, but dropping a primary key column is generally quite risky and the database correctly makes it difficult.
You likely have a foreign key constraint involving that column.
Don't drop the constraint and delete that column unless you're sure you know what you're doing.
According to this post: http://bytes.com/topic/db2/answers/185467-drop-column-table
It is possible to drop a column using STRSQL in the green screen environment. I have access to this and it does work, but a client with a 400 does not have the licensed program needed to use STRSQL. The issue is that STRSQL prompts to ask whether this is something I really want to do.
To get at the data I'm using the SQuirreL SQL client with the JT400 JDBC driver... So I guess with the system insisting on prompting (and actually no way of getting at the prompt without STRSQL), it won't let me do it.
So I guess I'm stuck doing what I'm doing... creating a new table, copying the data, and then swapping the tables.
The only way I found to get JDBC to work is to first manually change the default reply for this message, and then run your update application. In our case we use Liquibase. I could get Java's CommandCall to call ADDRPYLE and CHGJOB INQMSGRPY(*SYSRPYL), but it never actually allowed the alter table ... drop column ... to run without the following error:
Error:
10 -- A cancel reply to an inquiry message was received.
Working Command:
CHGMSGD MSGID(CPA32B2) MSGF(QSYS/QCPFMSG) DFT('I')
Reference:
http://knowledgebase.progress.com/articles/Article/9678
On the green screen, STRSQL will give you an inquiry message to answer:
alter table devlibsc/trklst drop column "ST"
Change of file TRKLST may cause data to be lost. (C I)