How to make a simple auto-increment field with PostgreSQL in DBeaver 21.0? - postgresql

"SERIAL" type does not exist yet, so how can we do the basic auto increment field in DBeaver 21.O via his GUI ?

You can set "serial" as data type with DBeaver GUI
(screenshot: selecting serial as the column data type in the DBeaver GUI)
And why does the serial type not exist yet? I can see it in the documentation.
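For reference, here is a minimal SQL sketch (table and column names are illustrative) of what picking "serial" in the GUI amounts to: PostgreSQL expands serial into an integer column backed by a sequence.
-- serial is shorthand for integer + an owned sequence + DEFAULT nextval(...)
CREATE TABLE demo (
    id   serial PRIMARY KEY,
    name text
);
-- On PostgreSQL 10+ a standard-conforming alternative is an identity column:
-- id integer GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY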

Related

Unable to create db2 table with DATE data type

I am using DB2 9.7 (LUW) on a Windows server, where multiple databases live in a single DB2 instance. I just found that in one of these databases I am unable to add a column with the DATE data type, either when creating or when altering a table. The column being added gets changed to TIMESTAMP instead.
Any help on this will be welcome.
Check out your Oracle Compatibility setting
Depending on that setting, DATE is interpreted as TIMESTAMP(0), as in your example.
Because these settings only take effect for databases created after the DB2_COMPATIBILITY_VECTOR registry variable was set, individual databases in the same instance can show different behaviour.
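A small SQL sketch of the symptom (the table name is illustrative; whether it appears depends on the DATE compatibility bit of DB2_COMPATIBILITY_VECTOR having been set, via db2set, before the database was created):
-- With Oracle DATE compatibility active, DATE columns are created as TIMESTAMP(0).
CREATE TABLE date_demo (d DATE);
-- DESCRIBE TABLE date_demo   -- then reports column D as TIMESTAMP(0) instead of DATE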

Problems with setting the dataType in DBeaver

Whenever I try to choose a data type for any column I get this message. I don't type it in myself; I choose from the drop-down menu instead.
The detailed message is as follows:
Error setting property 'dataType' value
Bad data type name specified: serial
It can be any type instead of serial; the resulting message is the same.
The database is PostgreSQL.
Same problem :(
DBeaver 5.2.5, Fedora 29, PostgreSQL 9.6
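Not from the original thread, but one possible workaround while the GUI rejects the type name is to set the column type with plain SQL instead (illustrative table, column and sequence names):
-- Change the column type in SQL rather than via the GUI.
ALTER TABLE my_table ALTER COLUMN my_column TYPE integer;
-- serial cannot be used in ALTER COLUMN ... TYPE, so for auto-increment attach a sequence explicitly:
CREATE SEQUENCE my_table_my_column_seq OWNED BY my_table.my_column;
ALTER TABLE my_table ALTER COLUMN my_column SET DEFAULT nextval('my_table_my_column_seq');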

How to populate a table via Pentaho Data Integration's table_output step?

I am performing an ETL job via Pentaho 7.1.
The job is to populate a table 'PRO_T_TICKETS' in PostgreSQL 9.2 via Pentaho jobs and transformations.
I have mapped the table fields to the corresponding stream fields (screenshot: mapped fields).
My table PRO_T_TICKETS has its schema (column names) in UPPERCASE.
Is this the reason I can't populate the table PRO_T_TICKETS with my ETL Job?
I duplicated the TABLE_OUTPUT step that writes to PRO_T_TICKETS and changed the Target table field to 'PRO_T_TICKETS2'. Pentaho created a new table with a lowercase schema and populated the data in it.
But I want this data loaded into the existing table PRO_T_TICKETS, and with the UPPERCASE schema if possible.
I am attaching the whole job here and the error thrown by Pentaho (screenshot: Pentaho error). I have also tried adding double quotes to the column names in my query, as you can see in the error, but it didn't help.
What do you think I should do?
When you create (or modify) the connection, select Advanced on the left panel and click Force to upper case, Force to lower case or, even better, Preserve case of reserved words.
To know which option to choose, copy the 4th line of your error log, the line starting with INSERT INTO "public"."PRO_T_TICKETS("OID"..., into your SQL client and change the connection's advanced parameters until it works.
Also, at debug time, don't use batch updates, don't use lazy conversion on previous steps, and try with one (1) field rather than all (25).
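The underlying behaviour is PostgreSQL's identifier folding: unquoted identifiers are folded to lowercase, while quoted identifiers keep their exact case, so a generated INSERT with quoted uppercase names only matches a table created with the same quoting. A brief illustration (names are made up):
-- Unquoted identifiers fold to lowercase; quoted identifiers are case sensitive.
CREATE TABLE "PRO_T_TICKETS_DEMO" ("OID" integer);
INSERT INTO pro_t_tickets_demo ("OID") VALUES (1);   -- fails: relation "pro_t_tickets_demo" does not exist
INSERT INTO "PRO_T_TICKETS_DEMO" ("OID") VALUES (1); -- works: the quotes preserve the case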
Just as a complement: it worked for me following the tips from AlainD, with a specific configuration that I'd like to share. I have a transformation streaming data from MySQL to PostgreSQL using a Table Input and a Table Output. In both DBs I have uppercase objects.
These are the steps that made it work:
In the Table Input (MySQL) the objects are uppercase too, but I typed them in lowercase and it worked; I didn't set any special option in the DB connection.
In the Table Output (PostgreSQL) I typed everything in uppercase (schema, table name and columns) and I also set "specify the database fields" (clicking on "Get fields").
In the target DB connection (PostgreSQL) I set the options (in the "Advanced" section): "Quote all in database" and "Preserve case of reserved words".
PS: The last option is because I found one more problem with my fields: there was a column called "Admin" (yes guys, they created a camel-case column using a reserved word!), so I had to set "Preserve case of reserved words" and type it as "Admin" (without quotes and in camel case) in the Table Output.

Unable to export data to PostgreSQL from Oracle

I have to extract data from Oracle tables and copy them to PostgreSQL. I am able to map both the input and output files. When running the connector component, the graphical view shows the rows being fetched properly, but when I look at the table there is no data.
This one is for PostgreSQL to PostgreSQL.
After running a trace debug, this is what I get (screenshot: TRACE_DEBUG result).
Are you trying to read and write the same table in the Input and Output? (This could be a problem.)
What kind of action are you using in the Output: insert, update, or insert-or-update?
Did you check whether there is a lock on your output table? (See the query sketch after this list.)
Depending on the settings of your database connection, you may need to turn on auto-commit or add an explicit commit component at the end of the flow.
How is the output component configured?
operation type: insert?
is it doing a lookup?
is the table name correct?
did you check the error code global value for the component after it finishes?
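To check for a lock on the output table on the PostgreSQL side, a query along these lines can help (the table name is a placeholder):
-- List sessions holding or waiting for locks on the target table.
SELECT l.pid, l.mode, l.granted, a.state, a.query
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE c.relname = 'your_output_table';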

SymmetricDS fails to synchronize a column with a default value

The SymmetricDS server is configured with PostgreSQL 9.4 and the client nodes run SQLite 3. I recently had to alter a table on the server end and then send the schema to the clients with the command symadmin send-schema --engine <server> --node <node> <table>
One of the changes to the table was the addition of a default value on a date field: update_date date DEFAULT ('now'::text)::date
Since the change was applied, I am seeing the following error message in the SymmetricDS log on the server side:
ERROR [<server>] [AcknowledgeService] [qtp1874154700-1322] The outgoing batch <node>-41837 failed. ERROR: invalid input syntax for type date: "'now'::text)::date"
Is this error showing up because SQLite 3 does not support ('now'::text)::date as a default value? In that case, how can I propagate the changes?
OR
Is it a SymmetricDS issue, in that it does not recognize ('now'::text)::date as the default value for the update_date field?
I suspect that because of this error all synchronization between client and server is stopping.
Any clue is appreciated.
Hope the problem is not in production.
You'll need to delete the outgoing batch containing the change, or just the link between it and the ALTER TABLE.
Then use the command line to send the custom native SQL DDL statement to each node from the central node, or do it manually by connecting remotely to each node.
A batch in error will hold up all other batches. You can ignore the batch in error by setting its status to "IG". However, this means that none of the change captures in that batch will be applied on the target.
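A sketch of ignoring the failed batch on the server, assuming the standard sym_outgoing_batch table and using the batch number from your log (the node id is a placeholder):
-- Mark the failed outgoing batch as ignored; its captured changes will NOT be applied on the target.
UPDATE sym_outgoing_batch
SET status = 'IG'
WHERE batch_id = 41837
  AND node_id = 'your-node-id';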
Are you sure the default value was applied correctly on SQLite? Here is an example of a table with a default date value of now:
create table forum (id int primary key, some_date date default(date('now')));
You can then send the appropriate ALTER or CREATE statements to your clients through the send SQL feature.
References:
Open source send documentation.
Professional send documentation.