Pony ORM PostgreSQL Point type - ponyorm

Is it possible to create a data type for the geometric point type from postgres? For the point type, it is just a pair of numbers.

Pony doesn't have native support for exotic types like this.
But you can specify sql_type like this:
b = Required(str, sql_type='point')
Which gives you SQL:
CREATE TABLE "A" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"b" POINT NOT NULL
)
I just tested it in SQLite and the transaction was successful (even though SQLite doesn't support the point type).
But you will need your own workaround for validating point data before sending it to the database.
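Since the column is declared as str, a minimal sketch of such a workaround (the helper names below are illustrative, not part of Pony's API) could convert between (x, y) pairs and PostgreSQL's textual point literal:

```python
import re

# Matches PostgreSQL's "(x,y)" point literal with optional sign and decimals.
_POINT_RE = re.compile(r'^\((-?\d+(?:\.\d+)?),(-?\d+(?:\.\d+)?)\)$')

def point_to_str(x, y):
    """Serialize a pair of numbers into a point literal for the str column."""
    if not all(isinstance(v, (int, float)) for v in (x, y)):
        raise TypeError('point coordinates must be numbers')
    return '({},{})'.format(x, y)

def str_to_point(s):
    """Parse a point literal read back from the database into (float, float)."""
    m = _POINT_RE.match(s)
    if m is None:
        raise ValueError('not a valid point literal: %r' % s)
    return float(m.group(1)), float(m.group(2))
```

Validating on the way in keeps PostgreSQL from rejecting the INSERT at runtime, since the str attribute itself accepts any text.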

Related

PostgreSQL - CREATE TABLE - Apply constraints like PRIMARY KEY in attributes that are inside composite types

I want to implement an object-relational database, using PostgreSQL. I do not want to use ORACLE.
Can I create a composite type and then use it in a table, adding a constraint, for example a primary key, on one of its attributes?
Below I leave an example:
CREATE TYPE teamObj AS (
id numeric,
name character varying,
city character varying,
estadiumName character varying,
jugadores playerObj[]
);
CREATE TABLE teamTable (
equipo teamObj,
PRIMARY KEY (equipo.id)
);
The line PRIMARY KEY (equipo.id) gives an error. I've read a lot of documentation on this topic and can't find a solution; maybe PostgreSQL has not implemented it yet, or never will, or I just don't understand how PostgreSQL works...
Does somebody have a solution?
Thank you.
No, you cannot do that, and I recommend that you do not create tables like that.
Your table definition should look pretty much like your type definition does. There is no need for an intermediate type definition.
Reasons:
that will not make your schema more readable
that violates the first normal form for relational databases
that does not match an object oriented design any better than the simple table definition
As a comfort, PostgreSQL will implicitly define a composite type of the same name whenever you define a table, so you can do things like
CAST(ROW(12, 'Raiders', 'Wolfschoaßing', 'Dorfwiesn', NULL) AS team)
or
CREATE FUNCTION getteam(id) RETURNS team

How to change data type default from Decimal to Double when linking external tables in Access?

I am using a PostgreSQL backend with linked tables in Access. When using the wizard to create the linked tables, I get errors:
Scaling of decimal value resulted in data truncation
Access appears to be choosing the wrong scale for numeric data types by default: the PostgreSQL data type being linked is numeric with no precision or scale defined, and it is being linked as Decimal with precision 28 and scale 6 by default.
How can I get Access to link it as Double?
I see in the related question MS Access linked tables automatically long integers that the self-answer was:
Figured it out (and I feel dumb): When linking tables you can choose
the desired format for each field when going through the linked table
wizard steps.
But, I see no option in Access to choose the desired format during linking.
If there is anything like a "default" data type when creating an ODBC linked table in Access, that type would be Text(255). That is, if the ODBC driver reports a column with a data type that Access does not support (e.g. TIME in SQL Server) then Access will include it as a Text(255) column in the linked table.
In this case, for a PostgreSQL table
CREATE TABLE public.numeric_test_table
(
id integer NOT NULL,
text_col character varying(50),
numeric_col numeric,
CONSTRAINT numeric_test_table_pk PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
the PostgreSQL ODBC driver is actually reporting the numeric column as numeric(28,6), as confirmed by calling OdbcConnection#GetSchema("columns") from C#, so that is what Access uses as the column type for its linked table. It is only when Access retrieves the actual data that the PostgreSQL ODBC driver sends back values that won't "fit" in the corresponding column of the linked table.
So no, there is almost certainly no overall option to tell Access to treat all numeric (i.e., Decimal) columns as Double. The "best" solution would be to alter the PostgreSQL table definitions to explicitly state the precision and scale, as suggested in the PostgreSQL documentation:
If you're concerned about portability, always specify the precision and scale [of a numeric column] explicitly.
If modifying the PostgreSQL database is not feasible then another option would be to use a pass-through query in Access to explicitly convert the column to Double ...
SELECT id, text_col, numeric_col::double precision FROM public.numeric_test_table
... bearing in mind that pass-through queries always return read-only recordsets.
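The truncation error itself is easy to reproduce outside Access with Python's decimal module. This is only an illustration of why a fixed scale of 6 cannot hold every value an unconstrained numeric can produce, not what Access runs internally:

```python
from decimal import Decimal, Context, Inexact

ctx = Context(traps=[Inexact])      # signal an error instead of silently rounding
scale_6 = Decimal('0.000001')       # the scale Access assumes: 6 decimal places

# A value with 6 decimal places rescales exactly.
fits = Decimal('3.141593').quantize(scale_6, context=ctx)

# A value with more than 6 decimal places cannot be rescaled without loss,
# which is the same situation behind "Scaling of decimal value resulted in
# data truncation".
try:
    Decimal('3.1415926535').quantize(scale_6, context=ctx)
except Inexact:
    print('scaling this value to 6 decimal places would truncate data')
```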

Can the foreign data wrapper fdw_postgres handle the GEOMETRY data type of PostGIS?

I am accessing data from a different DB via fdw_postgres. It works well:
CREATE FOREIGN TABLE fdw_table
(
name TEXT,
area double precision,
use TEXT,
geom GEOMETRY
)
SERVER foreign_db
OPTIONS (schema_name 'schema_A', table_name 'table_B');
However, when I query for the data_type of the fdw_table I get the following result:
name text
area double precision
use text
geom USER-DEFINED
Can fdw_postgres not handle the GEOMETRY data type of PostGIS? What does USER-DEFINED mean in this context?
From the documentation on the data_type column:
Data type of the column, if it is a built-in type, or ARRAY if it is
some array (in that case, see the view element_types), else
USER-DEFINED (in that case, the type is identified in udt_name and
associated columns).
So this is not specific to FDWs; you'd see the same definition for a physical table.
postgres_fdw can handle custom datatypes just fine, but there is currently one caveat: if you query the foreign table with a WHERE condition involving a user-defined type, it will not push this condition to the foreign server.
In other words, if your WHERE clause only references built-in types, e.g.:
SELECT *
FROM fdw_table
WHERE name = $1
... then the WHERE clause will be sent to the foreign server, and only the matching rows will be retrieved. But when a user-defined type is involved, e.g.:
SELECT *
FROM fdw_table
WHERE geom = $1
... then the entire table is retrieved from the foreign server, and the filtering is performed locally.
Postgres 9.6 will resolve this, by allowing you to attach a list of extensions to your foreign server object.
Well, obviously you are going to need any non-standard types defined at both ends. Don't forget the FDW functionality is supposed to support a variety of different database platforms, so there isn't any magic way to import remote operations on a datatype. Actually, given that one end could be running on MS-Windows and the other on ARM-based Linux there's not even a sensible way of doing it just with PostgreSQL.

Many-to-Many in Postgres?

I went with PostgreSQL because it is an ORDBMS rather than a standard relational DBMS. I have a class/object (below) that I would like to implement into the database.
class User{
int id;
String name;
ArrayList<User> friends;
}
Now, a user has many friends, so, logically, the table should be declared like so:
CREATE TABLE user_table(
id INT,
name TEXT,
friends TYPEOF(user_table)[]
)
However, to my knowledge, it is not possible to use a row of a table as a type (-10 points for PostgreSQL), so, instead, my array of friends is stored as integers:
CREATE TABLE user_table(
id INT,
name TEXT,
friends INT[]
)
This is an issue because individual array elements cannot be foreign-key references; only whole columns can be constrained. Added to this, there seems to be no way to import the whole user (that is, the user and all the user's friends) without doing multiple queries.
Am I using PostgreSQL wrong? It seems to me that the only efficient way to use it is with a relational approach.
I want a cleaner object-oriented approach similar to that of Java.
I'm afraid you are indeed using PostgreSQL wrong, and possibly misunderstanding the purpose of Object-relational databases as opposed to classic relational databases. Both classes of database are still inherently relational, but the former provides allowances for inheritance and user-defined types that the latter does not.
This answer to one of your previous questions provides you with some great pointers to achieve what you're trying to do using the Postgres pattern.
Well, first off PostgreSQL absolutely supports arrays of complex types like you describe (although I don't think it has a TYPEOF operator). How would the declaration you describe work, though? You are trying to use the table type in the declaration of the table. If what you want is a composite type in an array (and I'm not really sure that it is) you would declare this in two steps:
CREATE TYPE ima_type AS ( some_id integer, some_val text);
CREATE TABLE ima_table
( some_other_id serial NOT NULL
, friendz ima_type []
)
;
That runs fine. You can also create arrays of table types, because every table definition is a type definition in Postgres.
However, in a relational database, a more traditional model would use two tables:
CREATE TABLE persons
( person_id serial NOT NULL PRIMARY KEY
, person_name text NOT NULL
)
;
CREATE TABLE friend_lookup
( person_id integer REFERENCES persons
, friend_id integer REFERENCES persons (person_id)
, CONSTRAINT uq_person_friend UNIQUE (person_id, friend_id)
)
;
Ignoring the fact that the persons table has absolutely no way to prevent duplicate persons (what about misspellings, middle initials, spacing, honorifics, etc?; also two different people can have the same name), this will do what you want and allow for a simple query that lists all friends.
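As a sketch of how the two-table model answers the "multiple queries" concern, here is the same design run against SQLite (chosen only so the example runs without a server; the statements carry over to PostgreSQL with `INTEGER PRIMARY KEY` swapped back to `serial`): a single JOIN fetches a person together with all of their friends.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE persons (
        person_id INTEGER PRIMARY KEY,
        person_name TEXT NOT NULL
    );
    CREATE TABLE friend_lookup (
        person_id INTEGER REFERENCES persons(person_id),
        friend_id INTEGER REFERENCES persons(person_id),
        UNIQUE (person_id, friend_id)
    );
""")
conn.executemany('INSERT INTO persons VALUES (?, ?)',
                 [(1, 'Alice'), (2, 'Bob'), (3, 'Carol')])
conn.executemany('INSERT INTO friend_lookup VALUES (?, ?)',
                 [(1, 2), (1, 3)])

# "Import the whole user": the user plus every friend in one query.
rows = conn.execute("""
    SELECT p.person_name, f.person_name
    FROM persons p
    JOIN friend_lookup fl ON fl.person_id = p.person_id
    JOIN persons f ON f.person_id = fl.friend_id
    WHERE p.person_id = ?
    ORDER BY f.person_id
""", (1,)).fetchall()
# rows == [('Alice', 'Bob'), ('Alice', 'Carol')]
```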

Database Schema for Machine Tags?

Machine tags are more precise tags: http://www.flickr.com/groups/api/discuss/72157594497877875. They allow a user to basically tag anything as an object in the format
object:property=value
Any tips on an RDBMS schema that implements this? Just wondering if anyone has already dabbled with this. I imagine the schema is quite similar to implementing RDF triples in an RDBMS.
Unless you start trying to get into some optimisation, you'll end up with a table with Object, Property and Value columns, each record representing a single triple.
Anything more complicated, I'd suggest looking at the documentation for Jena, Sesame, etc.
If you want to continue with the RDBMS approach then the following schema might work
CREATE TABLE predicates (
id INT PRIMARY KEY,
namespace VARCHAR(255),
localName VARCHAR(255)
)
CREATE TABLE "values" ( -- "values" is a reserved word, so it must be quoted
subject INT,
predicate INT,
value VARCHAR(255)
)
The predicates table holds the tag definitions, and the values table holds the tag values.
But Mat is also right: if there are more requirements, it's probably preferable to use an RDF engine with SQL persistence support.
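For completeness, splitting the object:property=value string into the three columns is a few lines of client code. The parsing rules below are an assumption based on the format shown above, not Flickr's exact parser:

```python
def parse_machine_tag(tag):
    """Split 'object:property=value' into an (object, property, value) triple."""
    ns_pred, sep, value = tag.partition('=')
    if not sep:
        raise ValueError('machine tag must contain "="')
    ns, sep, pred = ns_pred.partition(':')
    if not sep:
        raise ValueError('machine tag must contain ":"')
    return ns, pred, value

# parse_machine_tag('geo:lat=51.5074') == ('geo', 'lat', '51.5074')
```

The resulting triple maps directly onto the Object/Property/Value columns (or onto the predicates/values pair of tables, with the first two elements looked up in predicates).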
I ended up implementing this schema