I managed to create a model/schema and insert geo-points into PostGIS using Sequelize. Since I have a lot of points (>100K) I would like to go back to the old way I used to import, using ogr2ogr (GDAL), which was much faster (almost instant instead of >20 minutes). As I would like to continue working with Sequelize after the initial import, I still want Sequelize to create the model/schema up front, but then do the import with ogr2ogr, and then continue CRUD with Sequelize.
Here I found this fragment: “[…] One way to get around this is to create your table structures beforehand and use OGR2OGR to just load the data.” This gave me the idea that it could work for Postgres/PostGIS as well.
Sequelize creates timestamp columns for createdAt and updatedAt, which I like. But when I use ogr2ogr I get “null value in column "createdAt" violates not-null constraint” in the log output.
Based on this slightly similar issue I tried to populate the createdAt column by adding an -sql option:
ogr2ogr -f PostgreSQL PG:"host='0.0.0.0' user='root' port='5432' dbname='db' password='pw'" /home/user/geojsonImportfile.json -nln "DataPoints" -a_srs EPSG:4326 -sql "SELECT url, customfield, wkb_geometry, CURRENT_TIMESTAMP as createdAt FROM '/home/usr/geojsonImportfile.json'" -dialect 'PostgreSQL'
The error I get when running this is:
ERROR 1: SQL Expression Parsing Error: syntax error, unexpected end of string, expecting '.'. Occurred around :
home/user/geojsonImportfile0.json'
Setting aside my lack of SQL knowledge, I am not sure whether this can work at all.
How can I solve this, i.e. make the import with ogr2ogr but keep the timestamp columns?
When you create a table with sequelize.define, the createdAt and updatedAt columns are created automatically as timestamp with time zone NOT NULL.
But you can override the not-null constraint in your Sequelize definition script:
const Mytable = sequelize.define('mytable', {
  id: {type: Sequelize.INTEGER, primaryKey: true},
  // overriding the automatic createdAt column; allowNull defaults to true
  createdAt: {type: Sequelize.DATE, allowNull: true}
});
Then the table is created like:
CREATE TABLE mytables
(
id integer NOT NULL,
"createdAt" timestamp with time zone,
"updatedAt" timestamp with time zone NOT NULL,
CONSTRAINT mytables_pkey PRIMARY KEY (id)
)
http://docs.sequelizejs.com/manual/tutorial/models-definition.html#configuration
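If the table already exists, you can achieve the same effect directly in SQL (a sketch; table and column names taken from the example above):
-- drop the remaining NOT NULL constraint so rows can be inserted without timestamps
ALTER TABLE mytables ALTER COLUMN "updatedAt" DROP NOT NULL;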
@JGH, following your suggestion it makes sense to have a default timestamp. I can already set this up using Sequelize, as discussed here:
var user = sequelize.define('user', {
  createdAt: {
    type: DataTypes.DATE,
    field: 'beginTime',
    defaultValue: sequelize.literal('NOW()')
  }
}, {
  timestamps: true,
});
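For the ogr2ogr import itself, the default has to exist at the database level as well; a sketch in plain SQL (assuming the "DataPoints" table from my command above):
-- let Postgres fill the timestamps for rows that ogr2ogr inserts
ALTER TABLE "DataPoints" ALTER COLUMN "createdAt" SET DEFAULT now();
ALTER TABLE "DataPoints" ALTER COLUMN "updatedAt" SET DEFAULT now();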
I've found a problem with sqlc code generation. When I need an interval (PostgreSQL) field, sqlc generates a struct with an int64 field. This looks broken and causes an error while scanning a row: Errorf("cannot convert %v to Interval", value)
sqlc.yaml:
version: "2"
overrides:
go:
overrides:
- db_type: "interval"
engine: "postgresql"
go_type:
import: "time"
package: "time"
type: "https://pkg.go.dev/time#Duration"
sql:
- queries: "./sql_queries/raffle.query.sql"
schema: "./migrations/001-init.sql"
engine: "postgresql"
gen:
go:
package: "raffle_repo"
out: "../repo/sql/raffle_repo"
sql_package: "pgx/v4"
schema.sql:
create table windowrange
(
    id serial primary key,
    open timestamp not null,
    duration interval not null,
    created_at timestamp default now(),
    updated_at timestamp default now(),
    raffle_id integer not null
        constraint raffle_id
            references raffle
            on delete cascade
);
generated model:
type Windowrange struct {
	ID        int32
	Open      time.Time
	Duration  int64
	CreatedAt sql.NullTime
	UpdatedAt sql.NullTime
	RaffleID  int32
}
I quickly fixed it by changing the field to time.Duration by hand, and the code works, but this file is generated, so editing it manually looks like a bad decision.
When I try to override the type via the sqlc.yaml config, nothing happens; the field is still generated as int64. Where am I wrong, and how can I fix this?
In the supported types, you will see that pg_catalog.interval is also one of the supported values for interval in Postgres.
So, if you just want to use time.Duration instead of int64, you need to change your overrides section to:
overrides:
  go:
    overrides:
      - db_type: "pg_catalog.interval"
        engine: "postgresql"
        go_type:
          import: "time"
          type: "Duration"
Tip: if an override doesn't work for the most obvious name of a data type, try one of its aliases.
My project (NestJS with TypeScript) is using TypeOrm on PostgreSQL database.
My table has a column (in migration file):
new TableColumn({
  name: 'managed_at',
  type: 'timestamp',
  isNullable: true,
}),
Associated field in entity class:
@Column({ type: 'timestamp', nullable: true })
managedAt: Date | null;
I would like the managed_at column to hold a date and time value.
If I save data to the table with:
import { Repository } from 'typeorm';
...
// repo is the Repository of typeorm
repo.update(
  { managedAt: Date.now() }
);
I get the error:
QueryFailedError: date/time field value out of range: "1651495656811"
How do I solve this problem with Date.now() so the column holds the date and time?
import { Repository } from 'typeorm';
...
// repo is the Repository of typeorm
repo.update(
  { managedAt: new Date() }
);
Change Date.now() -> new Date().
You need to save Date-typed data to a column of timestamp type.
Btw, you can add this to your entity class.
It will update the column before the data is updated.
@BeforeUpdate()
updateManagedAt(): void {
  this.managedAt = new Date();
}
The static Date.now() method returns the number of milliseconds elapsed since January 1, 1970 00:00:00 UTC, as per the documentation for Date.now().
Whereas valid input for the timestamp types consists of the concatenation of a date and a time, followed by an optional time zone (if you are using the timestamptz type instead of the timestamp type), followed by an optional AD or BC. (Alternatively, AD/BC can appear before the time zone, but this is not the preferred ordering.)
You can read more about Date/Time types in pgSQL here.
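For illustration, valid timestamp literals in PostgreSQL look like this (standalone examples, unrelated to your schema):
-- date and time
SELECT TIMESTAMP '2022-05-02 12:47:36';
-- date, time and time zone (for timestamptz)
SELECT TIMESTAMPTZ '2022-05-02 12:47:36+02';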
In your case, you can do it like this:
repo.update({ managedAt: (new Date()).toISOString() });
You'd use toISOString for sharing a date with another machine or process.
I'm facing a unique constraint violation when doing an upsert, because the UPDATE query built by Sequelize ignores the partial index defined in the model (unless it simply doesn't matter here). I'm new to Node and Sequelize, so I might be missing something obvious, but I have gone through all the likely places for an answer, including the Sequelize code, and I am not able to find the answer I'm looking for. I really appreciate your help!
My current versions:
"pg": "7.9.0",
"sequelize": "5.21.3"
I have a model with a primary key id and two other unique indexes, one of which is on a nullable field.
// note: Op must be imported, e.g. const { Op } = require('sequelize')
module.exports.Entities = sequelize.define('entities', {
  id: {type: Sequelize.UUID, defaultValue: Sequelize.UUIDV4, allowNull: false, primaryKey: true},
  cId: {type: Sequelize.STRING, allowNull: false},
  pId: {type: Sequelize.UUID, allowNull: false},
  eKey: {type: Sequelize.INTEGER, allowNull: true}
}, {
  indexes: [
    {
      name: 'unique_c_id_p_id',
      fields: ['c_id', 'p_id'],
      unique: true
    },
    {
      name: 'unique_e_key',
      fields: ['e_key'],
      unique: true,
      where: {
        eKey: {
          [Op.not]: null
        }
      }
    }
  ]
})
and the table itself looks like below:
CREATE TABLE public.entities (
id UUID DEFAULT uuid_generate_v4 (),
c_id UUID NOT NULL,
p_id UUID NOT NULL,
e_key INTEGER DEFAULT NULL,
CONSTRAINT ENTITY_SERVICE_PKEY PRIMARY KEY (id),
CONSTRAINT unique_c_id_p_id UNIQUE (c_id, p_id)
);
CREATE UNIQUE INDEX unique_e_key ON public.entities (e_key) WHERE e_key IS NOT NULL;
The upsert method call looks like:
module.exports.upsert = async (Model, values) => Model.upsert(values, {returning: true})
I pass the above Entities model, and the below value as arguments to this function.
{
  "id": "3169d4e2-8e2d-451e-8be0-40c0b28e2aa9",
  "c_id": "00000000-0000-0000-0000-000000000000",
  "p_id": "78bce392-4a15-4a8a-986b-c9398787345f",
  "e_key": null
}
Issue: SequelizeUniqueConstraintError
Sequelize does an insert followed by an update query when we attempt to update an existing record using the upsert method.
The insert query hits a conflict, since the record already exists, and the Sequelize upsert call proceeds to invoke the update query.
However, the query that it builds to UPDATE looks something like below:
"SQL statement UPDATE entities SET id='3169d4e2-8e2d-451e-8be0-40c0b28e2aa9',c_id='00000000-0000-0000-0000-000000000000',p_id='78bce392-4a15-4a8a-986b-c9398787345f',e_key=NULL
WHERE (id = '3169d4e2-8e2d-451e-8be0-40c0b28e2aa9'
OR e_key IS NULL
OR (c_id = '00000000-0000-0000-0000-000000000000' AND p_id = '78bce392-4a15-4a8a-986b-c9398787345f'))
RETURNING id\nPL/pgSQL function pg_temp_5.sequelize_upsert() line 1 at SQL statement"
Now, I do understand why it's throwing the unique constraint violation: in the above query's WHERE clause Sequelize adds OR e_key IS NULL because e_key = null, and that can return more than one record. The SET then tries to write the same values to all the records that were returned, thereby violating the primary key constraint, unique constraints, etc.
What I would like to understand is that:
Why does Sequelize not exclude the e_key unique constraint based on the partial index defined, given that it picks the WHERE clause attributes from the constraints defined in the model and its indexes?
Is there anything that I could do to get past this issue?
Or, am I missing something obvious that I could fix and try?
Really appreciate you taking your time to read and respond. Thanks!
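For completeness, a raw statement with an explicit conflict target behaves the way I expected; a sketch of the SQL I would fall back to (using the sample values above):
INSERT INTO public.entities (id, c_id, p_id, e_key)
VALUES ('3169d4e2-8e2d-451e-8be0-40c0b28e2aa9',
        '00000000-0000-0000-0000-000000000000',
        '78bce392-4a15-4a8a-986b-c9398787345f',
        NULL)
ON CONFLICT (c_id, p_id) DO UPDATE  -- conflict target is the composite unique constraint
SET e_key = EXCLUDED.e_key
RETURNING id;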
I had a table.
I added a new column.
Even though I had set a default value in the Sequelize model, the new column still ended up empty for existing rows.
So I get the error Unhandled rejection SequelizeDatabaseError: column "col_name" contains null values.
How do you populate a new column with default values upon creation so the not-null constraint is not broken?
You can update the values first:
update t
set col_name = ?
where col_name is null;
Then add the not null constraint.
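In Postgres, that second step would be (a sketch with the placeholder names used above):
alter table t alter column col_name set not null;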
Even though I had set default value in sequelize model
I suspect there is a discrepancy between the migration and the model. To add the column with a default value, use defaultValue in your migration.
The following is a working example:
module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.addColumn('tests', 'new_column', {
      defaultValue: 'test',
      type: Sequelize.STRING
    })
  }
}
(Screenshots in the original answer show the table before and after running sequelize db:migrate: new_column is added and existing rows are populated with the default value 'test'.)
The documentation for the options object in addColumn is hard to find; it's listed for a different method.
I created a table with the following syntax:
create table poll (
  poll_id string primary key,
  poll_type_id integer,
  poll_rating array(object as (rating_id integer, fk_user_id string, israted_image1 integer, israted_image2 integer, updatedDate timestamp, createdDate timestamp)),
  poll_question string,
  poll_image1 string,
  poll_image2 string
)
And I inserted a record without the "poll_rating" field, which is an array of object fields.
Now when I try to update poll_rating with the following command:
update poll set poll_rating = [{"rating_id":1,"fk_user_id":-1,"israted_image1":1,"israted_image2":0,"createddate":1400067339.0496}] where poll_id = "f748771d7c2e4616b1865f37b7913707";
I'm getting an error message like this:
"SQLParseException[line 1:31: no viable alternative at input '[']; nested: ParsingException[line 1:31: no viable alternative at input '[']; nested: NoViableAltException;"
Can anyone tell me why I get this error when I try to update the array of objects field?
Defining arrays and objects directly in an SQL statement is currently not supported by our SQL parser; please use parameter substitution with placeholders instead, as described here:
https://crate.io/docs/current/sql/rest.html
An example using curl:
curl -sSXPOST '127.0.0.1:4200/_sql?pretty' -d@- <<- EOF
{"stmt": "update poll set poll_rating = ? where poll_id = ?",
"args": [ [{"rating_id":1,"fk_user_id":-1,"israted_image1":1,"israted_image2":0,"createddate":1400067339.0496}], "f748771d7c2e4616b1865f37b7913707" ]
}
EOF