I'm having issues with a Sails application running with a PostgreSQL database. I needed to insert an 11-digit integer, but I can't find a simple way to tweak my models for this.
Edit 1
Here is an example of a model:
/**
 * Phone.js
 *
 * @docs :: http://sailsjs.org/#!documentation/models
 */
module.exports = {
  attributes: {
    number: {
      type: 'integer',
      required: true,
      minLength: 9
    }
  }
};
Is there a way (using the ORM) to change that integer into a BIGINT in Postgres so I don't get ERROR: integer out of range when inserting?
According to this, you should just be able to define "bigint" as the type:
https://github.com/balderdashy/sails-postgresql/blob/master/lib/utils.js#L241
It also supports float, real, smallint, etc.
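For instance, a minimal sketch of the Phone model from the question; whether 'bigint' is accepted depends on your sails-postgresql version:
// Phone.js -- sketch; assumes the adapter passes 'bigint' through to PostgreSQL
module.exports = {
  attributes: {
    number: {
      type: 'bigint',
      required: true
    }
  }
};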
Defining the type as bigint didn't work in my case; I was getting a validation error while creating an entry in the model.
However, I gave the following properties:
type: 'string',
numeric: true,
minLength: 10
It was good enough but not exactly what I wanted.
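In context, that workaround might look like this as a full attribute definition (a sketch; numeric and minLength are standard Waterline validations):
// Phone.js -- workaround sketch: store the number as a validated numeric string
module.exports = {
  attributes: {
    number: {
      type: 'string',
      numeric: true,   // Waterline validation: digits only
      minLength: 10,
      required: true
    }
  }
};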
The reason your value is returned as a string is that PostgreSQL's maximum value for BIGINT (2^63 - 1 = 9223372036854775807) is considerably bigger than JavaScript's Number.MAX_SAFE_INTEGER (2^53 - 1 = 9007199254740991), so if the Sails / Waterline ORM were to cast your returned value as an integer all the time, there would be a possibility of breaking things.
So, it's safer to return a string every time.
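A quick Node.js check illustrates the problem (the values here are just the two limits mentioned above):
// PostgreSQL's BIGINT max does not fit exactly in a JavaScript number
var big = '9223372036854775807';              // 2^63 - 1, as Postgres returns it

console.log(Number.MAX_SAFE_INTEGER);         // 9007199254740991 (2^53 - 1)
console.log(Number(big));                     // 9223372036854775808 -- rounded up to 2^63
console.log(Number(big) === Number(big) + 1); // true -- arithmetic silently breaks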
I was able to get it to work by setting the attributes with the following:
type: 'ref',
columnType: 'int8',
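Put together, the Phone model from the question would look something like this (a sketch; type: 'ref' with columnType assumes Sails v1 / Waterline 0.13+):
// Phone.js -- 'ref' skips Waterline's type coercion, and columnType
// passes 'int8' (PostgreSQL's internal name for BIGINT) straight through
module.exports = {
  attributes: {
    number: {
      type: 'ref',
      columnType: 'int8',
      required: true
    }
  }
};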
I'm trying to use jOOQ's metadata API, and most columns behave the way I'd expect, but enum columns seem to be missing type and nullability information somehow.
For example, if I have a schema defined as:
CREATE TYPE public.my_enum AS ENUM (
    'foo',
    'bar',
    'baz'
);

CREATE TABLE public.my_table (
    id bigint NOT NULL,
    created_at timestamp with time zone DEFAULT now() NOT NULL,
    name text,
    my_enum_column public.my_enum NOT NULL
);
The following test passes:
// this is Kotlin, but hopefully pretty easy to decipher
test("something fishy going on here") {
    val jooq = DSL.using(myDataSource, SQLDialect.POSTGRES)
    val myTable = jooq.meta().tables.find { it.name == "my_table" }!!

    // This looks right...
    val createdAt = myTable.field("created_at")!!
    createdAt.dataType.nullability() shouldBe Nullability.NOT_NULL
    createdAt.dataType.typeName shouldBe "timestamp with time zone"

    // ...but none of this seems right
    val myEnumField = myTable.field("my_enum_column")!!
    myEnumField.dataType.typeName shouldBe "other"
    myEnumField.dataType.nullability() shouldBe Nullability.DEFAULT
    myEnumField.dataType.castTypeName shouldBe "other"
    myEnumField.type shouldBe Any::class.java
}
It's telling me that enum columns have Nullability.DEFAULT regardless of whether they are null or not null. For other types, Field.dataType.nullability will vary depending on whether the column is null or not null, as expected.
For any enum column, the type is Object (Any in Kotlin), and the dataType.typeName is "other". For non-enum columns, dataType.typeName gives me the correct SQL for the type.
I'm also using the jOOQ code generator, and it generates the correct types for enum columns. That is, it creates an enum class and uses that as the type for the corresponding fields, which are marked as not-nullable. The generated code for this field looks something like (reformatted to avoid long lines):
public final TableField<MyTableRecord, MyEnum> MY_ENUM_COLUMN =
    createField(
        DSL.name("my_enum_column"),
        SQLDataType.VARCHAR
            .nullable(false)
            .asEnumDataType(com.example.schema.enums.MyEnum.class),
        this,
        ""
    );
So it appears that jOOQ's code generator has the type information, but how can I access the type information via the metadata API?
I'm using postgres:11-alpine and org.jooq:jooq:3.14.11.
Update 1
I tried testing this with org.jooq:jooq:3.16.10 and org.jooq:jooq:3.17.4. Both seem to fix the nullability issue, but the dataType is still "other", and the type is still Object. So it appears the nullability issue was a bug in jOOQ. I'll file an issue about the type and dataType.
Update 2
This is looking like it may be a bug, so I've filed an issue.
I managed to create a model/schema and insert geo-points into PostGIS using Sequelize. Since I have a lot of points (>100K), I am drawn back to the old way I used to import them, using ogr2ogr (GDAL), which was much faster (almost instant instead of >20 minutes). As I would like to continue to work with Sequelize after the initial import, I still want Sequelize to create the model/schema up front, but then do the import with ogr2ogr, and then continue CRUD with Sequelize.
Here I found this fragment: “[…] One way to get around this is to create your table structures before hand and use OGR2OGR to just load the data.” This gave me the idea that it could work for PostgreSQL/PostGIS as well.
Sequelize creates timestamp columns for createdAt and updatedAt, which I like. But when I use ogr2ogr I get “null value in column "createdAt" violates not-null constraint” in the log.
Based on this slightly similar issue I tried to add a createdAt column by adding an -sql option:
ogr2ogr -f PostgreSQL PG:"host='0.0.0.0' user='root' port='5432' dbname='db' password='pw'" /home/user/geojsonImportfile.json -nln "DataPoints" -a_srs EPSG:4326 -sql "SELECT url, customfield, wkb_geometry, CURRENT_TIMESTAMP as createdAt FROM '/home/usr/geojsonImportfile.json'" -dialect 'PostgreSQL'
The error I get when running this is:
ERROR 1: SQL Expression Parsing Error: syntax error, unexpected end of string, expecting '.'. Occurred around :
home/user/geojsonImportfile0.json'
Besides my lack of SQL knowledge, I am not sure whether this can work at all.
How can I solve this, i.e. do the import with ogr2ogr but keep the timestamp columns?
When you create a table with sequelize.define, the createdAt and updatedAt columns are created automatically as timestamp with time zone NOT NULL.
But you can relax the not-null constraint in your Sequelize definition script:
const Mytable = sequelize.define('mytable', {
  id: { type: Sequelize.INTEGER, primaryKey: true },
  createdAt: { type: Sequelize.DATE, allowNull: true }
});
The table is then created like:
CREATE TABLE mytables
(
  id integer NOT NULL,
  "createdAt" timestamp with time zone,
  "updatedAt" timestamp with time zone NOT NULL,
  CONSTRAINT mytables_pkey PRIMARY KEY (id)
)
http://docs.sequelizejs.com/manual/tutorial/models-definition.html#configuration
@JGH, following your suggestion, it makes sense to have a default timestamp. I can already set this up using Sequelize, as discussed here:
var user = sequelize.define('user', {
  createdAt: {
    type: DataTypes.DATE,
    field: 'beginTime',
    defaultValue: sequelize.literal('NOW()')
  }
}, {
  timestamps: true,
});
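The same idea extends to updatedAt, so that rows inserted outside Sequelize (e.g. by ogr2ogr) satisfy both constraints. A sketch, with the column names left at their defaults:
// Sketch: give both timestamp columns a database-side default
var user = sequelize.define('user', {
  createdAt: {
    type: DataTypes.DATE,
    defaultValue: sequelize.literal('NOW()')
  },
  updatedAt: {
    type: DataTypes.DATE,
    defaultValue: sequelize.literal('NOW()')
  }
}, {
  timestamps: true,
});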
I have the following (Grails) domain object:
class Country {
    Integer id
    char country_abbr
    String country_name

    static mapping = {
        version false
        id name: 'id'
        table 'country'
        id generator: 'identity', column: 'id'
    }

    static constraints = {
    }
}
The 'country_abbr' field within the 'country' table has type character(2). However, whenever I set the domain object's data type (for 'country_abbr') to String, initialization fails with the following exception:
org.hibernate.HibernateException: Wrong column type in mydb.country for column country_abbr. Found: bpchar, expected: varchar(255)
On the other hand, leaving this type as a Java char would only retrieve the first character. Any ideas how I may map to this type? Also, what is bpchar exactly?
Thanks
Just to mark this question as answered: the solution is to change the country_abbr mapping:
country_abbr columnDefinition: 'char'
(For the record, bpchar is PostgreSQL's internal name for the blank-padded character(n) type.)
Reference here.
I had an existing PostgreSQL database with a table created like this:
CREATE TABLE product (id SERIAL PRIMARY KEY, name VARCHAR(100) DEFAULT NULL)
This table is described in a YML Doctrine2 file within a Symfony2 project:
Acme\DemoBundle\Entity\Product:
  type: entity
  table: product
  fields:
    id:
      id: true
      type: integer
      nullable: false
      generator:
        strategy: SEQUENCE
    name:
      type: string
      length: 100
      nullable: true
When I run the Doctrine Migrations diff task for the first time, I should get a versioning file with no data in the up and down methods. But what I get instead is this:
// ...
class Version20120807125808 extends AbstractMigration
{
    public function up(Schema $schema)
    {
        // this up() migration is autogenerated, please modify it to your needs
        $this->abortIf($this->connection->getDatabasePlatform()->getName() != "postgresql");

        $this->addSql("ALTER TABLE product ALTER id DROP DEFAULT");
    }

    public function down(Schema $schema)
    {
        // this down() migration is autogenerated, please modify it to your needs
        $this->abortIf($this->connection->getDatabasePlatform()->getName() != "postgresql");

        $this->addSql("CREATE SEQUENCE product_id_seq");
        $this->addSql("SELECT setval('product_id_seq', (SELECT MAX(id) FROM product))");
        $this->addSql("ALTER TABLE product ALTER id SET DEFAULT nextval('product_id_seq')");
    }
}
Why are differences detected? How can I avoid this? I tried several sequence strategies with no success.
A little update on this question.
Using Doctrine 2.4, the solution is to use the IDENTITY generator strategy:
Acme\DemoBundle\Entity\Product:
  type: entity
  table: product
  id:
    type: integer
    generator:
      strategy: IDENTITY
  fields:
    name:
      type: string
      length: 100
      nullable: true
To avoid DROP DEFAULT on fields that have a default value in the database, the default option on the field is the way to go. Of course this can be done with lifecycle callbacks, but it's necessary to keep the default value in the database if this database is used by other apps.
For a "DEFAULT NOW()" like default value, the solution is the following one:
Acme\DemoBundle\Entity\Product:
  type: entity
  table: product
  id:
    type: integer
    generator:
      strategy: IDENTITY
  fields:
    creation_date:
      type: datetime
      nullable: false
      options:
        default: CURRENT_TIMESTAMP
Doctrine 2.0 does not support the SQL DEFAULT keyword, and will always try to drop a postgres default value.
I have found no solution to this problem; I just let Doctrine handle the sequences itself.
This is an open bug, registered here:
http://www.doctrine-project.org/jira/browse/DBAL-903
I am trying to create a Super Column Family that will replicate a structure like this:
{ 'hd':
      '2008/12/12 10:03': { metric1: 'blah', metric2: 'blah' }
      '2008/12/2 9:03': { metric1: 'blah', metric2: 'blah' }
  'cpu':
      '2008/12/12 10:03': { metric1: 'blah', metric2: 'blah' }
      '2008/12/2 9:03': { metric1: 'blah', metric2: 'blah' }
}
My current try looks like this:
create column family Timestep
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'DateType'
  and default_validation_class = 'DoubleType'
  and key_validation_class = 'AsciiType'
  and column_metadata = [
    {column_name : metric1, validation_class : DoubleType},
    {column_name : metric2, validation_class : DoubleType}
  ];
But if I try to run the above in the cassandra-cli, I get:
java.lang.RuntimeException: org.apache.cassandra.db.marshal.MarshalException: unable to coerce 'open' to a formatted date (long)
Maybe I am not understanding what a super column family is properly, but any help would be awesome.
Thanks.
It is very strongly recommended that you not use supercolumns, especially in new designs. They have never been problem-free, and now they are deprecated and much more capably replaced by composite keys.
Your data could be nicely represented like this in CQL 3, for example:
CREATE TABLE Timestep (
    hardware ascii,
    when timestamp,
    metric1 double,
    metric2 double,
    PRIMARY KEY (hardware, when)
);
Or, depending on exactly what you expect to have, it may make more sense to use:
CREATE TABLE Timestep (
    hardware ascii,
    metricname ascii,
    when timestamp,
    value double,
    PRIMARY KEY (hardware, metricname, when)
) WITH COMPACT STORAGE;
See this article for more information on how these translate to storage engine wide rows in Cassandra.
May I know which API you are using?
If it is Hector, then I might be able to help you. But I personally recommend that you not use super columns, because getting the sub-column list from a super column is a headache, and there are lots of performance-related issues.
Moreover, super columns are getting deprecated, so you're better off going with composite keys.