What happens to the CKRecords of a CloudKit zone when the zone is deleted - cloudkit

I cannot find this in the documentation, but what happens if a CloudKit zone is deleted? What happens to the CKRecords associated with that zone? Are they deleted as well? Is there a constraint that a zone can only be deleted when it has no records in it?

When a zone is deleted, the records in that zone are deleted with it. There is no constraint on deleting a zone: you can delete a zone with no records, or with many.

Related

Azure Data Factory fails with UPSERT for every table with a TIMESTAMP column

My Azure Data Factory throws the error "Cannot update a timestamp column" for every table with a TIMESTAMP column.
ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A database operation failed. Please search error to get more details.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=Cannot update a timestamp column.,Source=.Net SqlClient Data Provider,SqlErrorNumber=272,Class=16,ErrorCode=-2146232060,State=1,Errors=[{Class=16,Number=272,State=1,Message=Cannot update a timestamp column.,},],'
I do not want to update the column itself, but even when I delete it from the column mapping, the pipeline still fails.
I get that TIMESTAMP is not a simple datetime and is updated automatically whenever another column in that row is updated.
The timestamp data type is just an incrementing number and does not preserve a date or a time.
But how do I solve this problem?
I tried to reproduce the issue, and on my ADF, if I remove the timestamp column from the mapping, the pipeline runs with no errors.
But since this doesn't work for you, here are two workaround options:
Option 1 - on the source, use a query and leave the timestamp column out of that query.
Option 2 - I tried to reproduce your error and found that it only happens on upsert. If I use insert, it runs with no error (though it ignores the inserted value for the timestamp column and increments the timestamp automatically). So you can try to insert into a staging table and then update only the columns you want in SQL.
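To make Option 2 concrete, here is a minimal T-SQL sketch. The names (dbo.Orders, dbo.Orders_Staging, OrderId, Amount) are hypothetical; the point is that the TIMESTAMP/rowversion column is simply never named in the update list:
MERGE dbo.Orders AS tgt
USING dbo.Orders_Staging AS src
    ON tgt.OrderId = src.OrderId
WHEN MATCHED THEN
    UPDATE SET tgt.Amount = src.Amount  -- the rowversion column is deliberately never listed here
WHEN NOT MATCHED THEN
    INSERT (OrderId, Amount)
    VALUES (src.OrderId, src.Amount);
The copy activity inserts into dbo.Orders_Staging, and this MERGE then applies the changes to the real table without ever touching the rowversion column.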

Attach Partition takes more time even after adding check constraint

So basically we have a very large table in a Postgres 11 DB which has accumulated hundreds of millions of rows since it was created. Now we are trying to convert it into a range-partitioned table based on the created_at column (timestamp - not nullable).
As suggested in the Postgres partitioning documentation, I tried adding a check constraint on the table before actually running the attach partition. Ideally, the subsequent attach partition should then take very little time, since the validation scan should be skipped thanks to the matching constraint, but I see it is still taking a long time. My partition range and the constraint look something like this:
alter table xyz_2020 add constraint temp_check check (created_at >= '2020-01-01 00:00:00' and created_at < '2021-01-01 00:00:00');
ALTER TABLE xyz ATTACH PARTITION xyz_2020 FOR VALUES FROM ('2020-01-01 00:00:00') TO ('2021-01-01 00:00:00');
Here xyz_2020 is my existing big table, which was renamed from xyz, and xyz is a new master table created like the old one. So I want to understand the possible reasons why attach partition might still be taking so long.
Edit: We are creating a new partitioned table and trying to attach the old table as one of its partitions.
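One way to check whether the validation scan is actually being skipped is to raise the client message level before attaching; PostgreSQL emits a DEBUG1 message when it accepts the existing constraint. A sketch, reusing the poster's table names (the exact message text may vary by version):
SET client_min_messages = debug1;
ALTER TABLE xyz ATTACH PARTITION xyz_2020
    FOR VALUES FROM ('2020-01-01 00:00:00') TO ('2021-01-01 00:00:00');
-- DEBUG:  partition constraint for table "xyz_2020" is implied by existing constraints
ALTER TABLE xyz_2020 DROP CONSTRAINT temp_check;  -- the docs suggest dropping the now-redundant constraint after attaching
If the DEBUG message does not appear, the check constraint does not exactly imply the partition bounds, and the whole table is being scanned during ATTACH PARTITION.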

Created date, last modified date fields in postgress

In PostgreSQL, is there a way to add columns that will automatically record the creation date and latest updated date of a row?
For the table creation date, look into event triggers (a sketch follows this list).
For the row insertion time, use a DEFAULT value on a timestamptz column (works only if you don't explicitly supply a value).
For the last modification time, use a trigger FOR EACH ROW before UPDATE/DELETE.
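As a sketch of the event-trigger idea (all names here - ddl_audit, log_table_creation, track_create_table - are hypothetical), something along these lines records when tables are created:
CREATE TABLE ddl_audit (
    object_identity text,
    command_tag     text,
    created_at      timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION log_table_creation() RETURNS event_trigger AS $$
DECLARE
    r record;
BEGIN
    -- pg_event_trigger_ddl_commands() lists the DDL commands of the current event
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        INSERT INTO ddl_audit (object_identity, command_tag)
        VALUES (r.object_identity, r.command_tag);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_create_table
    ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE')
    EXECUTE FUNCTION log_table_creation();
(EXECUTE FUNCTION requires PostgreSQL 11+; older versions spell it EXECUTE PROCEDURE.)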
The idea - a robust way of adding created and modified fields through DB triggers:
Update modified_by and modified_on (or modified_at) on every DB transaction.
Copy created_on and created_by (or created_at) from the modified details whenever you insert a row into a table.
For the trigger function, check this repo: https://github.com/charan4ks/created_fields.git
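A minimal sketch of that idea in plain SQL (the table name accounts is hypothetical; created_at is filled by a DEFAULT, updated_at by a BEFORE UPDATE trigger):
CREATE TABLE accounts (
    id          serial PRIMARY KEY,
    name        text NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now(),
    updated_at  timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();  -- overwrite whatever value the client sent
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_set_updated_at
    BEFORE UPDATE ON accounts
    FOR EACH ROW
    EXECUTE FUNCTION set_updated_at();
After that, UPDATE accounts SET name = 'x' WHERE id = 1; bumps updated_at automatically, while created_at keeps the insertion time.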

Postgresql internal record id / creation date

I'm trying to determine whether or not PostgreSQL keeps internal (but accessible via a query) sequential record IDs and/or record creation dates.
In the past I have created a serial id field and a record creation date field, but I have been asked to see if Postgres already does that. I have not found any indication that it does, but I might be overlooking something.
I'm currently using Postgresql 9.5, but I would be interested in knowing if that data is kept in any version.
Any help is appreciated.
Thanks.
No is the short answer.
There is no automatic timestamp for rows in PostgreSQL.
You could create the table with a timestamp column that has a default.
create table foo (
    foo_id serial not null unique,
    created_timestamp timestamp not null default current_timestamp
) without oids;
So
insert into foo values (1);
gives us a row with foo_id = 1 and created_timestamp set to the time of insertion.
You could also have a modified_timestamp column, which you could maintain with a BEFORE UPDATE trigger.
Hope this helps

How to convert a postgres database from CST to GMT?

I set up a Postgres DB on a server in the Central Time Zone, so all the timestamp columns are in Central Time.
Is there a way in Postgres to change all the timestamp columns in a database from CST to GMT? Is it best practice to configure databases to use GMT?
Well, best practice is to avoid the TIMESTAMP type, which does not know anything about time zones, and to always use TIMESTAMPTZ (short for TIMESTAMP WITH TIME ZONE) to store your timestamps.
TIMESTAMPTZ stores the timestamp normalized to UTC: on input, the value is interpreted according to the time zone in effect when it is written (like CST, PST, or GMT+6), and on output it is converted back to the current zone. This allows you to always manipulate and display these columns correctly, no matter what the current server or client timezone setting is.
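A quick illustration (a sketch; the literal below is arbitrary) of how one stored instant is rendered according to the session's timezone setting:
SET timezone TO 'UTC';
SELECT TIMESTAMPTZ '2024-01-01 12:00:00+00';   -- 2024-01-01 12:00:00+00
SET timezone TO 'America/Chicago';
SELECT TIMESTAMPTZ '2024-01-01 12:00:00+00';   -- 2024-01-01 06:00:00-06
The stored value is identical in both cases; only the display changes.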
You should be able to convert your existing TIMESTAMP columns into TIMESTAMPTZ using something like:
ALTER TABLE mytable ALTER COLUMN old_tstamp TYPE TIMESTAMPTZ;
Before doing this, you should experiment on a small dataset, or maybe a small test table, to see whether the conversion from TIMESTAMP to TIMESTAMPTZ really works for you, i.e. whether the time zone information of your data is interpreted correctly.
If the conversion does not work correctly, you can temporarily set the timezone for the current session (you need it only for conversion purposes) using a statement like this (run it before ALTER TABLE ... ALTER COLUMN ... TYPE):
SET timezone TO 'CST6CDT';
or
SET timezone TO 'UTC+6';
This will affect subsequent operations and conversions from TIMESTAMP to TIMESTAMPTZ - just make sure to get it right.
After you have converted all timestamps to TIMESTAMPTZ, the server or client timezone setting (which defaults to the operating system's timezone) is only relevant for display purposes, as data manipulation will always be correct.
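Putting it together, a minimal sketch of the conversion (the table events and column occurred_at are hypothetical; 'America/Chicago' stands in for CST/CDT):
BEGIN;
SET LOCAL timezone TO 'America/Chicago';  -- existing naive values are interpreted as Central Time
ALTER TABLE events ALTER COLUMN occurred_at TYPE TIMESTAMPTZ;
COMMIT;
Or, equivalently and independently of the session setting, the conversion rule can be spelled out with a USING clause:
ALTER TABLE events
    ALTER COLUMN occurred_at TYPE TIMESTAMPTZ
    USING occurred_at AT TIME ZONE 'America/Chicago';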