LoopBack number precision

I'm using PostgreSQL with LoopBack. I have a field in a model called amount that is typed as a Number. Now, midway through the project, a new requirement came in to store the amount value with precision. But I can't store an amount such as 1.50, because it is converted and stored as 1.5, which is how JavaScript number values behave.
Another approach for holding precision would be converting the property from Number to String, but that would be a major code change throughout all the modules.
There are also precision and scale properties for the PostgreSQL connector. Is there any way in the model declaration (the .json file) itself to hold precision with the Number type? I just want to be sure there is no other approach before going through with this change. Any help will be highly appreciated.

You can add these fields to the model's .json file. I have not tested the code, but I think this will work:
"amount": {
"type": "Number",
"required": true,
"postgresql": {
"dataType": "numeric",
"dataPrecision": 20,
"dataScale": 0
}
}

Related

Setting Database Type in Anylogic

I have a product in my model that goes through a series of processing steps. I'm assigning the processing times for each product type through an Excel database that is loaded into the model, with each processing time in a separate column (e.g. Product 1: 2.3, 4.8, 9, meaning it takes 2.3 time units at process 1, 4.8 time units at process 2, and so on).
Currently, I am storing all the processing times in a List(Double) within my product (e.g. v_ProcessTime = [2.3, 4.8, 9]). However, I get an error when some columns contain only integers instead of double values (the column's value type is then recognised as integer, and AnyLogic raises an error that it can't write an integer to a double list). The only workaround currently is to manually change the column type in the database to Double.
Is it possible to use Java code to change the value type of the database columns, or is there any other way to bypass this issue?
Unfortunately, how the database recognises the column type is out of your control. If you can't change the source itself so that each column contains at least one value that is not an integer, then your only choice is to change the database column type manually.
Nevertheless, to use the values in a list, you can just cast an integer to a double like this:
(double)intVariable
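To make that concrete, here is a minimal plain-Java sketch (outside AnyLogic, with made-up variable names) of putting integer-typed column values into a List(Double) via that cast:

import java.util.ArrayList;
import java.util.List;

public class ProcessTimes {
    public static void main(String[] args) {
        List<Double> vProcessTime = new ArrayList<>();

        // Values as they might come out of the database columns:
        int p1 = 9;        // a whole-number column read as an integer
        double p2 = 4.8;   // a decimal column read as a double

        // An int cannot go into a List<Double> directly; casting it to
        // double first lets it autobox to Double on insertion.
        vProcessTime.add((double) p1);
        vProcessTime.add(p2);

        System.out.println(vProcessTime); // prints [9.0, 4.8]
    }
}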

How to transform all timestamp fields according to avro scheme when using Kafka Connect?

In our database we have over 20 fields that we need to transform from long to timestamp. Why is there no generic way to transform all of these values?
I know I can define:
"transforms":"tsFormat",
"transforms.tsFormat.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.tsFormat.target.type": "string",
"transforms.tsFormat.field": "ts_col1",
"transforms.tsFormat.field": "ts_col2",
but this is not a solution for us. Whenever we add a new timestamp column to the database, we would also need to update the connector.
Is there some generic solution to transform all fields according to the Avro schema?
We are using Debezium, which for every timestamp field creates something like this:
{
  "name": "PLATNOST_DO",
  "type": {
    "type": "long",
    "connect.version": 1,
    "connect.name": "io.debezium.time.Timestamp"
  }
},
So how can I find all fields whose type has connect.name = 'io.debezium.time.Timestamp' and transform them to timestamps?
You'd need to write your own transform to be able to dynamically iterate over the record schema, check the types, and do the conversion.
That is why they are called simple message transforms.
Alternatively, take a closer look at the Debezium properties to see if there is a setting you are missing that alters how timestamps get produced.
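(One such property worth checking is Debezium's time.precision.mode, which controls how temporal columns are emitted.) For reference, below is a rough, untested sketch of what such a custom transform could look like. It assumes the record value is a flat Struct (e.g. after Debezium's ExtractNewRecordState unwrapping), and the class and package names are made up. It walks the value schema and rewrites every field whose schema is named io.debezium.time.Timestamp (an int64 of epoch milliseconds) into the Connect Timestamp logical type:

package com.example.kafka.transforms; // hypothetical package

import java.util.Date;
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.data.Timestamp;
import org.apache.kafka.connect.transforms.Transformation;

public class DebeziumTimestampConverter<R extends ConnectRecord<R>> implements Transformation<R> {

    private static final String DEBEZIUM_TS = "io.debezium.time.Timestamp";

    @Override
    public R apply(R record) {
        // Pass through tombstones and schemaless records untouched.
        if (record.valueSchema() == null || !(record.value() instanceof Struct)) {
            return record;
        }
        Struct value = (Struct) record.value();
        Schema schema = record.valueSchema();

        // Rebuild the value schema, swapping Debezium timestamp fields for
        // the Connect Timestamp logical type.
        SchemaBuilder builder = SchemaBuilder.struct().name(schema.name());
        if (schema.isOptional()) {
            builder.optional();
        }
        for (Field field : schema.fields()) {
            if (DEBEZIUM_TS.equals(field.schema().name())) {
                builder.field(field.name(),
                        field.schema().isOptional()
                                ? Timestamp.builder().optional().build()
                                : Timestamp.SCHEMA);
            } else {
                builder.field(field.name(), field.schema());
            }
        }
        Schema newSchema = builder.build();

        // Copy the values, converting epoch-millis longs to java.util.Date.
        Struct newValue = new Struct(newSchema);
        for (Field field : schema.fields()) {
            Object fieldValue = value.get(field);
            if (DEBEZIUM_TS.equals(field.schema().name()) && fieldValue != null) {
                newValue.put(field.name(), new Date((Long) fieldValue));
            } else {
                newValue.put(field.name(), fieldValue);
            }
        }

        return record.newRecord(record.topic(), record.kafkaPartition(),
                record.keySchema(), record.key(), newSchema, newValue, record.timestamp());
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef(); // no configuration options in this sketch
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // nothing to configure
    }

    @Override
    public void close() {
        // nothing to clean up
    }
}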

Meltano: tap-postgres to target-postgres: data types such as uuid are converted to varchar

I'm working on a Meltano project where I have to extract data from one Postgres database and load it into the final warehouse (also a Postgres database) using key-based incremental replication.
After Meltano's loading process, all tables with columns of type uuid from tap-postgres end up with varchar columns in the target.
Could someone help me solve this issue?
Singer taps rarely capture "rich" string types such as uuid, and they output simple {"type": "string"} types without any additional metadata. Furthermore, Singer usually only supports JSONSchema Draft 4, so {"type": "string", "format": "uuid"} would not even be supported by most taps and targets.
Since Singer is an ELT analytics framework, and in that context a uuid column type is not different in any meaningful way from a varchar, this is hardly an issue for most people.
If you need to replicate a database with that level of detail you may be better served by a different solution for database backup/replication.

Postgresql jsonb vs datetime

I need to store two dates, valid_from and valid_to.
Is it better to use two datetime fields, like valid_from:datetime and valid_to:datetime?
Or would it be better to store the data in a jsonb field, validity: {"from": "2001-01-01", "to": "2001-02-02"}?
There are many more reads than writes to the database.
DB: PostgreSQL 9.4
You can use the daterange type.
For example:
'[2001-01-01, 2001-02-02]'::daterange means from 2001-01-01 to 2001-02-02, bounds inclusive.
'(2001-01-01, 2001-02-05)'::daterange means from 2001-01-01 to 2001-02-05, bounds exclusive.
Also:
Special values like infinity can be used.
lower(anyrange) => lower bound of the range
and many other things like the overlap operator, see the docs ;-)
Range Types
Use two timestamp columns (there is no datetime type in Postgres).
They can efficiently be indexed and they protect you from invalid timestamp values - nothing prevents you from storing "2019-02-31 28:99:00" in a JSON value.
If you very often need to use those two values to check whether another timestamp value lies in between, you could also consider a range type that stores both values in a single column.
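As a small illustration of the two-column approach, here is a hedged JDBC sketch (the table, columns, and connection details are made up) that checks whether a given timestamp lies between valid_from and valid_to:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.LocalDateTime;

public class ValidityCheck {
    public static void main(String[] args) throws Exception {
        // Assumed table layout:
        //   CREATE TABLE offers (id bigint, valid_from timestamp, valid_to timestamp);
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret")) {

            String sql = "SELECT id FROM offers WHERE ? BETWEEN valid_from AND valid_to";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                // The timestamp we want to test against the validity window.
                ps.setTimestamp(1, Timestamp.valueOf(LocalDateTime.of(2001, 1, 15, 0, 0)));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println("valid offer: " + rs.getLong("id"));
                    }
                }
            }
        }
    }
}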

How do I determine if a Field in Salesforce.com stores integers?

I'm writing an integration between Salesforce.com and another service, and I've hit a problem with integer fields. In Salesforce.com I've defined a field of type "Number" with "Decimal Places" set to "0". In the other service, it is stored definitively as an integer. These two fields are supposed to store the same integral numeric values.
The problem arises once I store a value in the Salesforce.com variant of this field. Salesforce.com will return that same value from its Query() and QueryAll() operations with an amount of precision incorrectly appended.
As an example, if I insert the value "827" for this field in Salesforce.com, when I extract that number from Salesforce.com later, it will say the value is "827.0".
I don't want to hard-code my integration to remove these decimal values from specific fields. That is not maintainable. I want it to be smart enough to remove the decimal values from all integer fields before the rest of the integration code runs. Using the Salesforce.com SOAP API, how would I accomplish this?
I assume this will have something to do with DescribeSObject()'s "Field" property, where I can scan the metadata, but I don't see a way to extract the number of decimal places from the DescribeSObjectResult.
Ah ha! The number of decimal places is in a property called Scale on the Field object. You know you have an integer field if it's equal to 0.
Technically, sObject fields aren't integers, even if the "Decimal Places" property is set to 0. They are always Decimals with varying scale properties. This is important to remember in Apex because the methods available for Decimals aren't the same as those for Integers, and there are other potential type conversion issues (not always, but in some contexts).
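A minimal sketch of that describe-based check, using the SOAP partner API's Java (WSC) bindings and assuming you already have an authenticated PartnerConnection (the class and variable names here are illustrative):

import com.sforce.soap.partner.DescribeSObjectResult;
import com.sforce.soap.partner.Field;
import com.sforce.soap.partner.FieldType;
import com.sforce.soap.partner.PartnerConnection;

public class IntegerFieldScanner {

    // Treat a field as integral when Salesforce reports it as numeric
    // with no decimal places.
    static boolean isIntegerField(Field f) {
        return (f.getType() == FieldType._double || f.getType() == FieldType._int)
                && f.getScale() == 0;
    }

    static void printIntegerFields(PartnerConnection connection, String sObjectName) throws Exception {
        DescribeSObjectResult describe = connection.describeSObject(sObjectName);
        for (Field f : describe.getFields()) {
            if (isIntegerField(f)) {
                // Values of these fields can have the trailing ".0" stripped
                // (or be parsed as integers) before the rest of the integration runs.
                System.out.println(f.getName() + " has scale 0 -> treat as integer");
            }
        }
    }
}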