Can I force an Eloquent model's created_at to use unix_timestamp() for DB server time to combat time drift?

I like storing my timestamps as epoch values, so in my model I use:
protected $dateFormat = 'U';
and in my migrations I use:
$table->unsignedInteger('created_at');
$table->unsignedInteger('updated_at');
On a single server running both PHP and the DB, this gives the result I want.
But this uses the local clock, so in a scaled installation with multiple 'front end' app servers all connecting to a single 'back end' database server, any drift in a front-end server's clock will produce rows where the (auto-incrementing) id shows one order of insertion but created_at shows a different order. That is not what I want.
With MySQL as the DB, I'm trying to find a way to have created_at use MySQL's unix_timestamp() function so that the clock on the database server is used.
Is this even possible?
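One idea I'm considering (just a sketch; 'orders' is a placeholder table name) is a BEFORE INSERT trigger that overwrites whatever Eloquent sends with the database server's clock:
-- Sketch only; 'orders' is a hypothetical table name.
CREATE TRIGGER orders_created_at_server_time
BEFORE INSERT ON orders
FOR EACH ROW
SET NEW.created_at = UNIX_TIMESTAMP();
But I don't know whether that plays nicely with Eloquent's own timestamp handling, which is partly why I'm asking.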

What could cause Firebird to silently turn calculated fields into "normal" fields?

I'm using Firebird 2.5.8 to store information for a piece of software I designed.
A customer contacted me today to report multiple errors that I couldn't understand, so I used the "IBExpert" tool to inspect their database.
To my surprise, all the calculated fields had been transformed into "standard" fields. This is clearly visible in the "DDL" tab of the database tool, which displays table definitions as SQL code.
For instance, the following table definition:
CREATE TABLE TVERSIONS (
...
PARENTPATH COMPUTED BY (((SELECT TFILES.FILEPATH FROM TFILES WHERE ID = TVERSIONS.FILEID))),
....
ISCOMPLETE COMPUTED BY ((((SELECT TBACKUPVERSIONS.ISCOMPLETE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION)))),
CDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION))),
DDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.DVERSION))),
...
);
has been "changed" in the client database into this:
CREATE TABLE TVERSIONS (
...
PARENTPATH VARCHAR(512) CHARACTER SET UTF8 COLLATE UNICODE,
...
ISCOMPLETE SMALLINT,
CDATE TIMESTAMP,
DDATE TIMESTAMP,
...
);
How can such a thing be possible?
I've been using Firebird for more than 10 years, and I had never seen such behavior until now. Could this be corruption of the RDB$FIELDS.RDB$COMPUTED_SOURCE fields?
What would you advise?
To summarize the discussion on firebird-support (and comments above):
The likely cause of this happening is that the database was backed up and restored using gbak, and the restore did not complete successfully. If this happens, gbak will have ended in an error, and the database is in single shutdown state (which means only SYSDBA or the database owner is allowed to create one connection). If the database is not currently in single shutdown mode, someone used gfix to bring the database online again in normal state.
When a database is restored using gbak, calculated fields are initially created as normal fields (their values are not part of the backup). Once the data has been restored successfully, those fields are altered back into calculated fields. If anything goes wrong before or during that redefinition, the restore fails, the database is left in single shutdown state, and the calculated fields remain "normal" fields.
I recommend doing a structural comparison of the database to check if calculated fields are the only problem, or if other things (e.g. constraints) are missing. A simple way to do this is to export the DDL of the database and a "known-good" database, for example using ISQL (command line option -extract), and comparing them with a diff tool.
Then either fix the existing database by executing the necessary DDL to restore calculated fields (and other things), or create a new empty database, and move the data from the old to the new (using a datapump tool).
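For the calculated fields themselves, the repair could look roughly like this (a sketch using the ISCOMPLETE definition quoted in the question; since a COMPUTED BY column stores no data it can be dropped and re-added, provided nothing else references the column):
-- Sketch only; take the expression from a known-good copy of the DDL.
ALTER TABLE TVERSIONS DROP ISCOMPLETE;
ALTER TABLE TVERSIONS ADD ISCOMPLETE COMPUTED BY
    ((SELECT TBACKUPVERSIONS.ISCOMPLETE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION));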
Also check if any data is missing. By default, gbak restores the data in a single transaction, so in that case either all data is present or all data is missing. However, gbak also has a "transaction-per-table" mode (-ONE_AT_A_TIME or -O), which could mean some tables have data, and others have no data.

Batch insert in PostgreSQL extremely slow (F#)

The code is in F#, but it's generic enough that it'll make sense to anyone not familiar with the language.
I have the following schema:
CREATE TABLE IF NOT EXISTS trades_buffer (
instrument varchar NOT NULL,
ts timestamp without time zone NOT NULL,
price decimal NOT NULL,
volume decimal NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_instrument ON trades_buffer(instrument);
CREATE INDEX IF NOT EXISTS idx_ts ON trades_buffer(ts);
My batches are made of 500 to 3000 records at a time. To get an idea of the performance, I'm developing on a 2019 MBP (i7 CPU), running PostgreSQL in a Docker container.
Currently it takes between 20 and 80 seconds to insert a batch; the time is not really linear in the batch size, but it does scale with it.
I'm using this lib https://github.com/Zaid-Ajaj/Npgsql.FSharp as a wrapper around Npgsql.
This is my insertion code:
let insertTradesAsync (trades: TradeData list) : Async<Result<int, Exception>> =
    async {
        try
            // one parameter set per trade
            let data =
                trades
                |> List.map (fun t ->
                    [
                        "@instrument", Sql.text t.Instrument.Ticker
                        "@ts", Sql.timestamp t.Timestamp.DateTime
                        "@price", Sql.decimal t.Price
                        "@volume", Sql.decimal (t.Quantity * t.Price)
                    ])
            let! result =
                connectionString
                |> Sql.connect
                |> Sql.executeTransactionAsync
                    [ "INSERT INTO trades_buffer (instrument, ts, price, volume) VALUES (@instrument, @ts, @price, @volume)", data ]
                |> Async.AwaitTask
            return Ok (List.sum result)
        with ex ->
            return Error ex
    }
I checked that the connection step is extremely fast (<1ms).
pgAdmin seems to show that PostgreSQL is mostly idle.
I ran profiling on the code, and none of it seems to take any time.
It's as if the time were spent in the driver, between my code and the database itself.
Since I'm a newbie with PostgreSQL, I could also be doing something horribly wrong :D
Edit:
I have tried a few things:
use the TimescaleDB extension, made for time series
move the data from a Docker volume to a local folder
run the code on a ridiculously large PostgreSQL AWS instance
and the results are the same.
What I know now:
no high CPU usage
no high RAM usage
no hotspots in the profile of my code
pgAdmin shows the db is mostly idle
having an index, or not, has no impact
local or remote database gives the same results
So the issue is either:
how I interact with the DB
the driver I'm using
Update 2:
The non-async version of the connector performs significantly better.
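For reference, my understanding (which may be wrong) is that executeTransactionAsync runs the parameterized INSERT once per parameter set inside a single transaction, so the server effectively processes one statement per record. Roughly:
-- What the batch effectively becomes inside the transaction (one execution per record):
--   INSERT INTO trades_buffer (instrument, ts, price, volume) VALUES (@instrument, @ts, @price, @volume);
--   ... repeated once per trade ...
-- A single multi-row statement (or COPY) carries the same data in far fewer statements; sample values are made up:
INSERT INTO trades_buffer (instrument, ts, price, volume)
VALUES
    ('ABC', '2022-01-03 09:30:00', 100.50, 12.0),
    ('ABC', '2022-01-03 09:30:01', 100.55, 3.5);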

How to make a Cassandra connection using the CQL that was used to create a table?

I am new to Tableau and went through the site before posting this question, but didn't find an answer matching my question.
I have a connection established successfully to Cassandra using the "DataStax Cassandra ODBC driver 64-bit Windows". Everything is fine; I filled in all details like keyspace name and table name as per the documentation available on the DataStax site.
But when I drag the available table to the canvas it keeps loading for minutes. What the database guy has told me about the data: there are millions of rows for a single day, we have 6 months of data, and the data gets updated every 10 minutes; it's for a reputed wind energy company.
My client has given me the CQL used for creating the table:
create table abc_data_test.machine_data (
    machine_id text,
    tag text,
    timestamp timestamp,
    value double,
    PRIMARY KEY ((machine_id, tag), timestamp)
)
WITH CLUSTERING ORDER BY (timestamp DESC)
AND compression = { 'sstable_compression' : 'LZ4Compressor' };
Where do I put this code?
I tried to insert it on the connection page and it gives an error. I am getting a Custom SQL error (I placed the code in "New Custom SQL").
The time is still running; it shows as:
processing request: connecting to datasource, Elapsed time 87:09
The error from New Custom SQL is:
An error occurred while communicating with the datasource. [DataStax][CassandraODBC] (10) Error while executing a query in Cassandra:33562624: line 1.11 no viable alternative at input '1' (SELECT [TOP]1...)
I'm using Windows 10 64-bit, the DataStax ODBC driver 64-bit version 2.4.1, and DSE 4.8 or later.
You cannot pass DDL into the Custom SQL box. If the Cassandra connection supports the Initial SQL option, you could pass it there; your Custom SQL would then be some sort of SELECT statement. Otherwise, create the table in Cassandra first and then connect to that table from Tableau.
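Roughly, the split looks like this (a sketch; the table and columns are taken from the question, the filter values are made up): the CREATE TABLE statement is run once in Cassandra (via cqlsh, or Initial SQL if supported), and the Custom SQL box only gets a query such as:
SELECT machine_id, tag, timestamp, value
FROM abc_data_test.machine_data
WHERE machine_id = 'M001' AND tag = 'wind_speed';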

How to persist only time using EclipseLink?

I have a timestamp attribute in my Oracle 11g database table. I generated JPA entities from the table, and the timestamp attribute was created as a Date field. I only want to store and retrieve time values in the database. How do I do this? I am using EclipseLink 2.4.
You can use the @Temporal(TemporalType.TIME) annotation:
@Temporal(TemporalType.TIME)
private Date value;
Then only the time portion should be stored to the database.
If you're using the JPA Criteria API to create queries, you should make sure that the date portion is zeroed on existing data. That is, on an Oracle database, if you query the data like select to_char(timecolumn, 'dd.mm.yyyy hh24:mi:ss') from sometable, the result should look like "01.01.1970 XX:XX:XX". Normally, if you use JPA to store time values with TemporalType.TIME, it will take care of that for you. If the date portion isn't zeroed, you may have problems comparing the time field.
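As an illustration only (not part of the original answer; sometable and timecolumn as above, and this assumes a DATE column): existing rows could be normalised so the date portion becomes 01.01.1970 while the time of day is preserved:
-- Sketch only; keeps the time-of-day fraction and resets the date to the epoch date.
UPDATE sometable
SET timecolumn = TO_DATE('01.01.1970', 'dd.mm.yyyy') + (timecolumn - TRUNC(timecolumn));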

SELECT old data state with DB2

Is it possible to access an old data state in a DB2 database?
Oracle has the clause select ... as of timestamp to do this. Does DB2 have something similar?
Yes, you can select a set of rows that were / will be valid in a past / future time. This is called Time Travel in DB2, but you have to configure / create the table with the extra columns in order to activate this feature. This is new in DB2 10, but I think it is not available in all editions.
For more information, take a look at this: http://www.ibm.com/developerworks/data/library/techarticle/dm-1204db2temporaldata/
Remember, there are two concepts: system time and business time (also called application time); a table that uses both is called bi-temporal.
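For example (a sketch with a hypothetical table and timestamp), the DB2 10 analogue of Oracle's AS OF TIMESTAMP against a system-period temporal table looks like:
-- Sketch only; 'policies' and the timestamp are placeholders.
SELECT *
FROM policies
FOR SYSTEM_TIME AS OF TIMESTAMP '2012-06-01 12:00:00'
WHERE policy_id = 1234;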