How to change data type default from Decimal to Double when linking external tables in Access? - postgresql

I am using a PostgreSQL backend with linked tables in Access. When using the wizard to link the tables, I get this error:
Scaling of decimal value resulted in data truncation
This appears to be Access choosing the wrong default scale for numeric data types: the PostgreSQL column being linked is numeric with no precision or scale defined, yet it is linked as Decimal with a default precision of 28 and scale of 6.
How can I get Access to link it as Double?
I see in the question "MS Access linked tables automatically long integers" that the self-answer was:
Figured it out (and I feel dumb): When linking tables you can choose
the desired format for each field when going through the linked table
wizard steps.
But, I see no option in Access to choose the desired format during linking.

If there is anything like a "default" data type when creating an ODBC linked table in Access, that type would be Text(255). That is, if the ODBC driver reports a column with a data type that Access does not support (e.g. TIME in SQL Server) then Access will include it as a Text(255) column in the linked table.
In this case, for a PostgreSQL table
CREATE TABLE public.numeric_test_table
(
id integer NOT NULL,
text_col character varying(50),
numeric_col numeric,
CONSTRAINT numeric_test_table_pk PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
the PostgreSQL ODBC driver actually reports the numeric column as numeric(28,6), as confirmed by calling OdbcConnection#GetSchema("columns") from C#, so that is what Access uses as the column type for its linked table. It is only when Access goes to retrieve the actual data that the PostgreSQL ODBC driver sends back values that won't "fit" in the corresponding column of the linked table.
So no, there is almost certainly no overall option to tell Access to treat all numeric (i.e., Decimal) columns as Double. The "best" solution would be to alter the PostgreSQL table definitions to explicitly state the precision and scale, as suggested in the PostgreSQL documentation:
If you're concerned about portability, always specify the precision and scale [of a numeric column] explicitly.
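For example, a sketch of what that change might look like for the test table above (the precision and scale here are purely illustrative; choose values that fit your data):
ALTER TABLE public.numeric_test_table
ALTER COLUMN numeric_col TYPE numeric(15,4);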
If modifying the PostgreSQL database is not feasible then another option would be to use a pass-through query in Access to explicitly convert the column to Double ...
SELECT id, text_col, numeric_col::double precision FROM public.numeric_test_table
... bearing in mind that pass-through queries always return read-only recordsets.

Related

Npgsql.PostgresException: Column cannot be cast automatically to type bytea

Using EF Core with PostgreSQL, I have an entity with a field of type byte, but I decided to change it to byte[]. When I applied the generated migration, it threw the following exception:
Npgsql.PostgresException (0x80004005): 42804: column "Logo" cannot be
cast automatically to type bytea
I have searched the internet for a solution, but all I found were similar problems with other data types, not byte arrays. Please help.
The error says exactly what is happening... In some cases PostgreSQL allows for column type changes (e.g. int -> bigint), but in many cases where such a change is non-trivial or potentially destructive, it refuses to do so automatically. In this specific case, this happens because Npgsql maps your CLR byte field as PostgreSQL smallint (a 2-byte type), since PostgreSQL lacks a 1-byte integer type. So PostgreSQL refuses to cast from smallint to bytea, which makes sense.
However, you can still do a migration by writing the data conversion yourself, from smallint to bytea. To do so, edit the generated migration, find the ALTER COLUMN ... ALTER TYPE statement and add a USING clause. As the PostgreSQL docs say, this allows you to provide the new value for the column based on the existing column (or even other columns). Specifically for converting an int (or smallint) to a bytea, use the following:
ALTER TABLE tab ALTER COLUMN col TYPE BYTEA USING set_byte(E'0', 0, col);
If your existing column happens to contain more than a single byte (should not be an issue for you), it should get truncated. Obviously test the data coming out of this carefully.
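One quick way to spot-check the result (the table name "MyTable" is just a placeholder; "Logo" is the column from the error message) is to read the first byte back out and compare it with the old values:
-- get_byte() returns the byte at the given offset as an integer
SELECT "Logo", get_byte("Logo", 0) AS first_byte
FROM "MyTable"
LIMIT 10;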

Can the foreign data wrapper fdw_postgres handle the GEOMETRY data type of PostGIS?

I am accessing data from a different DB via fdw_postgres. It works well:
CREATE FOREIGN TABLE fdw_table
(
name TEXT,
area double precision,
use TEXT,
geom GEOMETRY
)
SERVER foreign_db
OPTIONS (schema_name 'schema_A', table_name 'table_B')
However, when I query for the data_type of the fdw_table I get the following result:
name text
area double precision
use text
geom USER-DEFINED
Can fdw_postgres not handle the GEOMETRY data type of PostGIS? What does USER-DEFINED mean in this context?
From the documentation on the data_type column:
Data type of the column, if it is a built-in type, or ARRAY if it is
some array (in that case, see the view element_types), else
USER-DEFINED (in that case, the type is identified in udt_name and
associated columns).
So this is not specific to FDWs; you'd see the same definition for a physical table.
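You can confirm what the column really is by also selecting udt_name from information_schema, which should report geometry for that column:
SELECT column_name, data_type, udt_name
FROM information_schema.columns
WHERE table_name = 'fdw_table';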
postgres_fdw can handle custom datatypes just fine, but there is currently one caveat: if you query the foreign table with a WHERE condition involving a user-defined type, it will not push this condition to the foreign server.
In other words, if your WHERE clause only references built-in types, e.g.:
SELECT *
FROM fdw_table
WHERE name = $1
... then the WHERE clause will be sent to the foreign server, and only the matching rows will be retrieved. But when a user-defined type is involved, e.g.:
SELECT *
FROM fdw_table
WHERE geom = $1
... then the entire table is retrieved from the foreign server, and the filtering is performed locally.
Postgres 9.6 will resolve this, by allowing you to attach a list of extensions to your foreign server object.
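Roughly, on 9.6 or later you would declare which extensions' operators and functions are safe to send to the remote server, e.g. (reusing the server name from the example above):
ALTER SERVER foreign_db OPTIONS (ADD extensions 'postgis');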
Well, obviously you are going to need any non-standard types defined at both ends. Don't forget the FDW functionality is supposed to support a variety of different database platforms, so there isn't any magic way to import remote operations on a datatype. Actually, given that one end could be running on MS-Windows and the other on ARM-based Linux there's not even a sensible way of doing it just with PostgreSQL.

Way to migrate a create table with sequence from postgres to DB2

I need to migrate some DDL from Postgres to DB2, and I need it to behave the same way it does in Postgres. There is a table that generates its key values from a sequence, but the values can also be given explicitly.
Postgres
create sequence hist_id_seq;
create table benchmarksql.history (
hist_id integer not null default nextval('hist_id_seq') primary key,
h_c_id integer,
h_c_d_id integer,
h_c_w_id integer,
h_d_id integer,
h_w_id integer,
h_date timestamp,
h_amount decimal(6,2),
h_data varchar(24)
);
(Note the sequence call in the hist_id column default, which defines the value of the primary key.)
The business logic sometimes inserts into the table with an explicitly provided ID, and in other cases it lets the database choose the number.
If I change this in DB2 to GENERATED ALWAYS it will throw errors, because some values are provided explicitly. On the other hand, if I create the table with GENERATED BY DEFAULT, DB2 will throw an error (SQL0803N) when the generated value collides with an already inserted one, because the "internal sequence" does not take the already inserted values into account and does not retry with the next value.
And I do not want to restart the sequence each time an explicit ID is inserted.
This is the problem in BenchmarkSQL when trying to port it to DB2: https://sourceforge.net/projects/benchmarksql/ (File sqlTableCreates)
How can I implement the same database logic in DB2 as it does in Postgres (and apparently in Oracle)?
You're operating under a misconception: that sources external to the db get to dictate its internal keys. Ideally, autogenerated ids never need to be seen outside the db, as there should be unique natural keys for export or reporting. Still, there are times when applications need to manage some ids, often when setting up related entities (e.g., JPA seems to want to work this way).
However, if you add an id value that was generated from some other source, the db can't manage it for you. How could it? It wouldn't be efficient; attempting to do so would mean one of the following:
Be unsafe in the face of multiple clients (attempt to add duplicate keys)
Serialize access to the table (for a potentially slow query, too)
(This usually shows up when people attempt something like: SELECT MAX(id) + 1, which would require locking the entire table for thread safety, likely including statements that don't even touch that column. If you try to find any "first-unused" id - trying to fill gaps - this gets more complicated and problematic)
Neither is ideal, so it's best not to have the problem in the first place. This is usually done by having id columns be autogenerated, but (as pointed out earlier) there are situations where we may need to know what the id will be before we insert the row into the table. Fortunately, there's a standard SQL object for this: SEQUENCE. It provides a db-managed, thread-safe, fast way to get ids. It appears that in PostgreSQL you can use sequences in the DEFAULT clause for a column, but DB2 doesn't allow that. If you don't want to specify an id every time (it should be autogenerated some of the time), you'll need another way; this is the perfect time to use a BEFORE INSERT trigger:
CREATE TRIGGER Add_Generated_Id
NO CASCADE BEFORE INSERT ON benchmarksql.history
REFERENCING NEW AS Incoming_Entity
FOR EACH ROW MODE DB2SQL
WHEN (Incoming_Entity.hist_id IS NULL)
SET Incoming_Entity.hist_id = NEXT VALUE FOR hist_id_seq
(something like this - not tested. You didn't specify where in the project this would belong)
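This also assumes the sequence itself already exists on the DB2 side; a minimal version might look like:
CREATE SEQUENCE hist_id_seq AS INTEGER START WITH 1 INCREMENT BY 1 NO CYCLE;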
So, if you then add a row with something like:
INSERT INTO benchmarksql.history (hist_id, h_data) VALUES(null, 'a')
or
INSERT INTO benchmarksql.history (h_data) VALUES('a')
an id will be generated and attached automatically. Note that ALL ids added to the table must come from the given sequence (as @mustaccio pointed out, this appears to be true even in PostgreSQL), or any UNIQUE CONSTRAINT on the column will start throwing duplicate-key errors. So any time your application needs an id before inserting a row into the table, you'll need some form of:
SELECT NEXT VALUE FOR hist_id_seq
FROM sysibm.sysdummy1
... and that's it, pretty much. This is completely thread and concurrency safe, will not maintain/require long-term locks, nor require serialized access to the table.
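For illustration, the application-side flow would then be: fetch the next value, use it wherever you need it, and supply it explicitly on insert (the literal 42 below is just a placeholder for whatever the sequence actually returned):
SELECT NEXT VALUE FOR hist_id_seq FROM sysibm.sysdummy1;  -- suppose this returns 42
INSERT INTO benchmarksql.history (hist_id, h_data) VALUES (42, 'b');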

How to map a postgres money column in MS Access?

I have a postgresql database. In some tables there are columns of type money.
I'd like to use MS Access as a front-end.
But when linking the table in MS Access using the PostgreSQL ODBC driver, the columns of type money all show a value of 0. When trying to update a value, an error is shown:
operator does not exist: money = double precision
The data type in the linked table is Number (FieldSize = Double) and it cannot be changed to Currency.
I do not want to change the datatype in postgres from money to numeric. (But maybe, I have to?)
Hopefully someone can help.
Thanks.
Unfortunately, I think the best way to do this is to change the column data type in PostgreSQL to numeric. The good news is that you can convert the columns without losing precision.
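A sketch of that conversion (the table and column names are illustrative, and the precision/scale is up to you):
ALTER TABLE table1
ALTER COLUMN balance TYPE numeric(19,2) USING balance::numeric;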
I've tried it out and another way to do it is to create views for each of the tables containing money values that cast the columns to the numeric data type. You can then define update, insert, and delete INSTEAD OF triggers to make the views update-able, and cast the types in the trigger functions back to money. In this case the PostgreSQL engine is doing the conversion instead of the ODBC driver or Access. While this solution works, it is more complicated and brittle than just converting the column data types.
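For reference, the view part of that approach might look something like this (the INSTEAD OF triggers and their functions are omitted, and the names are illustrative):
CREATE VIEW table1_for_access AS
SELECT id, customer, balance::numeric(19,2) AS balance
FROM table1;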
Both of the above solutions will make your data sheets update-able.
BTW, you can convert the numeric back to a currency on the Access side with CCur(columnname).
Try casting it to numeric:
# select 2::double precision::money;
ERROR: cannot cast type double precision to money
LINE 1: select 2.0::double precision::money;
^
# select 2::double precision::numeric::money;
money
-------
$2.00
(1 row)
# select 2::money::double precision;
ERROR: cannot cast type money to double precision
LINE 1: select 2::money::double precision;
^
# select 2::money::numeric::double precision;
float8
--------
2
(1 row)
I can recreate the issue (at least partially), and it definitely seems to be some weird interaction between the Access user interface, the Access Database Engine, and the PostgreSQL ODBC driver. I have an Access linked table named [public_table1] which is linked to the PostgreSQL table
CREATE TABLE table1
(
id serial NOT NULL,
customer character varying(255),
balance money,
CONSTRAINT pk_table1 PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE table1
OWNER TO postgres;
GRANT ALL ON TABLE table1 TO postgres;
GRANT ALL ON TABLE table1 TO public;
I can open the linked table in Datasheet View and I do see the [balance] values but I cannot edit them (same error as you). If I use the Access query builder to build an Update query like this...
UPDATE public_table1 SET public_table1.balance = 2.22
WHERE (((public_table1.[id])=4));
...I also get the error. However, the following VBA statement works fine:
Sub pgTest()
CurrentDb.Execute "UPDATE public_table1 SET balance=6.78 WHERE id=4", dbFailOnError
End Sub
So, at least in my case, it appears that I could use datasheets and bound forms to view the PostgreSQL data but not change it. Unfortunately, that limitation could significantly reduce the benefits of using Access as a front-end (depending on what sort of operations you intended to perform).

T-SQL implicit conversion between 2 varchars

I have some T-SQL (SQL Server 2008) that I inherited and am trying to find out why some of the queries are running really slowly. In the Actual Execution Plan I have three clustered index scans which are costing me 19%, 21% and 26%, so this seems to be the source of my problem.
The contents of the fields are usually numeric, but some job numbers have an alpha prefix.
The database design (vendor supplied) is pretty poor. The max length of a job number in their application is 12 chars, but in the tables that are joined it is defined as varchar(50) in some places and varchar(15) in others. My parameter is a varchar(12), but I get the same thing if I change it to a varchar(50).
The node contains this:
Predicate: [Live_Costing].[dbo].[TSTrans].[JobNo] as [sts1].[JobNo]=CONVERT_IMPLICIT(varchar(50),[#JobNo],0)
sts1 is a derived table, but the column it pulls JobNo from is a varchar(50).
I don't understand why it's doing an implicit conversion between two varchars. Is it just because they are different lengths?
I'm fairly new to reading execution plans.
Is there an easy way to figure out which node in the execution plan relates to which part of the query?
Is the predicate the join clause?
Some variables can carry a collation of their own.
Regardless, you need to verify your collations, which can be specified at the server, database, and column level.
First, check the collation of tempdb against the vendor-supplied database. They should match. If they don't, comparisons involving temp tables will tend to cause implicit conversions.
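A quick way to compare them (the database and table names below come from the plan snippet in the question, so adjust as needed):
SELECT SERVERPROPERTY('Collation') AS server_collation;
SELECT DATABASEPROPERTYEX('Live_Costing', 'Collation') AS db_collation;
SELECT DATABASEPROPERTYEX('tempdb', 'Collation') AS tempdb_collation;
SELECT name, collation_name
FROM Live_Costing.sys.columns
WHERE object_id = OBJECT_ID('Live_Costing.dbo.TSTrans');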
Assuming you cannot modify the vendor supplied code base, one or more of the following should help you:
1) Predefine your temp tables and specify the same collation for the key field as in the db in use, rather than tempdb.
2) Provide collations when doing string comparisons.
3) Specify collation for key values if using "select into" with a temp table
4) Make sure your collations on your tables and columns match your database collation (VERY important if you imported only specific tables from a vendor into an existing database.)
If you can change the vendor-supplied code base, I would suggest reviewing the cost of making all of your character keys the same fixed length rather than varchar. Varchar carries a small per-value storage overhead (2 bytes). The caveat is that a fixed-length character field declared NOT NULL will be right-padded with spaces (unavoidable).
Ideally, you would have int keys, and only use varchar fields for user interaction/lookup:
create table Products(ProductID int not null identity(1,1) primary key clustered, ProductNumber varchar(50) not null)
alter table Products add constraint uckProducts_ProductNumber unique(ProductNumber)
Then do all joins on ProductID rather than ProductNumber, and just filter on ProductNumber; that would be perfectly fine.