Preventing deletion of noncurrent objects - google-cloud-storage

I'm storing backups in Cloud Storage. A desirable property of such a backup is that the device being backed up cannot erase the backups, to protect against ransomware or similar threats. At the same time, it is desirable to allow the backup client to delete objects so that old files can be pruned. (Because the backups are encrypted, it isn't possible to use lifecycle management to do this.)
The solution that immediately comes to mind is to enable object versioning and use lifecycle rules to retain object versions (deleted files) for a certain amount of time. However, I cannot see a way to allow the backup client to delete the current version, but not historical versions. I thought it might be possible to do this with an IAM condition, but the conditional logic doesn't seem flexible enough to parse out the object version. Is there another way I've missed?
The only other solution that comes to mind is to create a second bucket, inaccessible to the backup client, and use a Cloud Function to replicate the first bucket. The downside of that approach is the duplicate storage cost.

To answer this:
However, I cannot see a way to allow the backup client to delete the current version, but not historical versions
When you delete a live object, object versioning retains it as a noncurrent version. To delete a noncurrent object version, you have to specify the object name along with its generation number.
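For illustration, a minimal sketch with the google-cloud-storage Python client (bucket and object names are placeholders):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-backup-bucket")  # placeholder name

# List all versions of an object; noncurrent ones have time_deleted set.
for blob in client.list_blobs(bucket, prefix="backups/db.bak", versions=True):
    print(blob.name, blob.generation, blob.time_deleted)

# Deleting the live version only makes it noncurrent (versioning retains it).
bucket.blob("backups/db.bak").delete()

# Removing a noncurrent version requires its generation number explicitly.
bucket.delete_blob("backups/db.bak", generation=1234567890123456)
```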
Just to add, you may want to consider using a transfer job to replicate your data to a separate bucket.
Either way, both approaches (object versioning or replicating buckets) will incur additional storage costs.
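If you go the replication route, a sketch of creating such a job with the google-cloud-storage-transfer Python client (project and bucket names are placeholders, and the schedule is an assumption to adapt):

```python
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

# Nightly job copying the backup bucket into a bucket the client can't touch.
job = client.create_transfer_job(
    storage_transfer.CreateTransferJobRequest(
        transfer_job={
            "project_id": "my-project",  # placeholder
            "status": storage_transfer.TransferJob.Status.ENABLED,
            "schedule": {
                "schedule_start_date": {"year": 2023, "month": 1, "day": 1},
            },
            "transfer_spec": {
                "gcs_data_source": {"bucket_name": "backup-bucket"},
                "gcs_data_sink": {"bucket_name": "replica-bucket"},
            },
        }
    )
)
print(job.name)
```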

Related

Handling database changes with Azure deployment slot

I'm currently configuring an environment for a web site. As I'm using Azure, I'd like to use deployment slots in order to ensure that users won't get any downtime. While I understand the goal of deployment slots, I have difficulty understanding how they could be usable in my case.
Basically, the site uses a database that will evolve as time goes on. In other words, most of my releases will alter the schema of the database, and I can't ensure that it will always be backward compatible (indeed, I might delete columns or something).
Therefore, there are two solutions in my mind: either both deployment slots (DS) share the same database, or they use different ones.
However, if they share the same database, after the deployment is executed in the staging slot, the production one may start failing (because it references a deleted column, for example). So I can't use deployment slots that way.
Using two separate databases seems to be a viable option... if they are synchronized. Indeed, if the database of the staging slot is only there to validate a deployment, I can't swap the site to this slot while the production slot is updated, as the data in this database won't be up to date. Therefore I need to ensure the data are in sync, BUT that when staging gets updated, this sync is somehow paused, as the schemas won't match anymore...
All the articles I've read talk about deployment slots without really mentioning the database issue, so I don't really understand how this is supposed to work. Can someone enlighten me a bit, please?

In ATG, what is default caching for order repository?

In ATG, what is the default caching mode for the order repository?
As per the documentation it is simple (if my understanding is correct).
Is it better to keep the order repository as simple, given that lots of modifications will happen to an order object? Or is it better to make it distributed?
It is always better to keep the cache mode simple. Distributed adds a lot of communication between instances, and this may add overhead for not a lot of value.
It can also be set to locked cache mode so that the order is locked using the lock server mechanism in ATG. This prevents different server instances (or even the same instance) from making changes to the order simultaneously. You can find more information about this here - http://docs.oracle.com/cd/E41069_01/Platform.11-0/ATGRepositoryGuide/html/s1005lockedcaching01.html
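For reference, the cache mode is declared per item descriptor in the repository definition XML; a sketch of what switching the order descriptor to locked caching might look like (which file you layer this into depends on your module configuration):

```xml
<!-- orderrepository.xml, layered into your module's config path -->
<gsa-template>
  <item-descriptor name="order" cache-mode="locked"/>
</gsa-template>
```

Note that locked caching only takes effect with a lock manager running (a ServerLockManager, with each instance pointing a ClientLockManager at it), as described in the linked guide.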

Avoiding stale metadata in Perforce Server

My question might be simple, and the solution as well. However, I want to know: supposing a user syncs a branch and later manually deletes the physical files from his local machine, the metadata about these files will still exist on the server...
In the long run I'm afraid this could slow down the server.
I haven't found much about this issue, which is why I'm asking here: how do companies usually manage their Perforce metadata? A trigger that verifies the existing metadata? A program that runs sync #none from time to time for client directories that no longer exist?
As I said, there might be many simple ways to solve this, but I'm looking for the best one.
Any help is appreciated.
In practice I don't think you'll have too much to worry about.
That being said, if you want to keep the workspace metadata size to a minimum, there are two things you'll need to do:
- Write the sync #none script you referenced above, and also make sure to delete any workspaces that are no longer in use (a sketch of both follows below).
- Create a checkpoint, and recreate the metadata from that checkpoint. Recreating the metadata should remove any data from deleted clients; my understanding is that the Perforce metadata won't shrink unless it's recreated from a checkpoint.
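A rough sketch of the cleanup side in Python, shelling out to the p4 command line (the staleness threshold and the decision to force-delete are assumptions to adapt):

```python
import subprocess
from datetime import datetime, timedelta

CUTOFF = datetime.now() - timedelta(days=180)  # assumed staleness threshold

def p4(*args):
    """Run a p4 command and return its stdout."""
    return subprocess.run(
        ["p4", *args], capture_output=True, text=True, check=True
    ).stdout

# 'p4 clients -t' lists workspaces with a timestamp on each line, e.g.
# "Client NAME 2020/01/01 00:00:00 root /path 'description'"
for line in p4("clients", "-t").splitlines():
    parts = line.split()
    name, stamp = parts[1], datetime.strptime(parts[2], "%Y/%m/%d")
    if stamp < CUTOFF:
        p4("-c", name, "sync", "//...#none")  # clear the workspace's have-list
        p4("client", "-d", "-f", name)        # then delete the spec (-f needs admin)
```

The checkpoint half is an offline operation: p4d -jc writes a checkpoint, and p4d -jr restores it into a fresh database directory.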

Backing out a Core Data migration?

I have an app with a very large Core Data database. I have versioned it many times over the past year.
The last time I versioned the database I made one simple change to an entity: I added a new optional attribute. For some reason it would not migrate using lightweight migration. I found out much later that this was due to a bug in Apple's lightweight migration code, triggered by the 'renaming identifiers' I had needed back in another versioning.
Anyway, I digress...
Because of the bug that kept me from using lightweight migration, I created a mapping file to help with the migration, not understanding that this was a much heavier process that would force my users to wait while the app loaded the entire database into memory during the migration. It turns out that this is not really an option at all with very large databases, and many of my users were unable to migrate the database at all due to memory problems, etc.
So now I want to re-release my app and clear up this problem. The trouble is, some of my users have a database that is somehow marked as being 'in the middle of migrating'. Even with my new code, which gets rid of the mapping file and supports lightweight migration, users in this state, 'in the middle of a migration', don't seem to get reset.
What are my options for backing out a migration?
- I can detect that I am in this state because there is a '.myDB.sqlite.migrationdestination_41b5a6b5c6e848c462a8480cd24caef3' file in the Documents directory. Deleting this file does not clear up the migration. My guess is that the database is somehow flagged as being in this state, or is already partially migrated.
- I can detect this state and then delete the database altogether. But this forces my users to re-download their data.
Any thoughts?
Thanks for your help.
The only thing that occurs to me would be to crack open the SQL store of an affected file and look for flags or something else that might signal the DB being in a transitory state. You might be able to write directly to the file and alter something.
That's a really ugly problem.

Best strategy for syncing data in iPhone app

I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc...), and I'm wondering what the best way is to implement data syncing. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access, and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you share some of your strategies/experience?
For me, I'm thinking of something like this:
1) Store Last Modified Date in iPhone
2) Upon launching, send a message like getNewData.php?lastModifiedDate=...
3) The server will process the request and send back only the data modified since that date (a rough sketch of this step follows the list).
4) This data is formatted as so:
<+><data id="..."></data></+> // add this to SQLite/CoreData
<-><data id="..."></data></-> // remove this
<%><data id="..."><attribute>newValue</attribute></data></%> // new modified value
I don't want to make <+>, <->, <%>... for each attribute as well, because it would be too complicated, so probably when I receive a <%> field, I would just remove the data with the specified id and then add it again (assuming id here is not some automatically incremented field).
5) Once everything is downloaded and updated, I will update the Last Modified Date field.
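To make step 3 concrete, here is a rough sketch of the server-side selection, assuming an items table with a modified_at column and a soft-delete deleted flag (both assumptions; the real getNewData.php would do the equivalent in PHP):

```python
import sqlite3

def get_new_data(conn: sqlite3.Connection, last_modified: str) -> dict:
    """Build the delta for everything changed since the client's last sync."""
    changed = conn.execute(
        "SELECT id, payload FROM items WHERE modified_at > ? AND deleted = 0",
        (last_modified,),
    ).fetchall()
    # Deletions need a tombstone (here a soft-delete flag); a plain DELETE
    # leaves the server no record to tell the client what to remove.
    removed = conn.execute(
        "SELECT id FROM items WHERE modified_at > ? AND deleted = 1",
        (last_modified,),
    ).fetchall()
    return {
        "add_or_update": changed,               # covers both <+> and <%>
        "remove": [row[0] for row in removed],  # covers <->
    }
```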
The main problem with this strategy is: if the network goes down while I am updating something => the Last Modified Date is not yet updated => next time I relaunch the app, I will have to go through the same thing again. Not to mention potentially inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until the new data is available. Should I use a Last-Modified-Date for each data field and update the data gradually?
I would start by making the update routine atomic, since you'll have enough on your hands figuring out how to get the client-server communication working properly.
After that is a good time to consider tweaking it to be incremental, but only after you do some testing to figure out if it's really necessary. If you're tuning your update protocol to be as low bandwidth as possible, you might discover that even a "big" update is downloaded fast enough.
Another way to look at it is to ask yourself, how often is there going to be network trouble when an average user is doing a sync? You probably don't want to tune for unlikely scenarios.
If you are trying to optimize (minimize) the data transfer you may want to consider a different format than XML, since XML is fairly verbose. Or at least you may want to trade XML readability for space by making each element name and attribute as small as possible and eliminating all unnecessary whitespace.
Your basic scheme is good. The thing you need to do is to somehow make your updates idempotent so that you can restart a partially-completed transfer without risk. This is a better way to go than to try to implement some sort of true atomic commit (though you could do that too, using, eg, the SQLite database).
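A minimal sketch of that combination with sqlite3, under the delta shape sketched earlier: every operation is an upsert or a delete (safe to replay), and the Last Modified Date advances in the same transaction, so a dropped connection just means replaying the same delta on the next launch. The sync_state table is an assumption.

```python
import sqlite3

def apply_delta(db_path: str, delta: dict, server_timestamp: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            for item_id, payload in delta["add_or_update"]:
                # INSERT OR REPLACE is idempotent: replaying it is harmless.
                conn.execute(
                    "INSERT OR REPLACE INTO items (id, payload) VALUES (?, ?)",
                    (item_id, payload),
                )
            for item_id in delta["remove"]:
                # Deleting an already-deleted row is a no-op, also idempotent.
                conn.execute("DELETE FROM items WHERE id = ?", (item_id,))
            # Advance the sync marker only within the same transaction
            # (assumes a one-row sync_state table).
            conn.execute(
                "UPDATE sync_state SET last_modified = ?", (server_timestamp,)
            )
    finally:
        conn.close()
```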
In our experience fairly large updates (10s of KB) can be downloaded quite rapidly, if the server is fast enough. No great need to break updates up into tiny bits. But certainly it won't hurt to try to minimize the amount of data transferred by keeping more granular info on "last update".
(And definitely you should use JSON rather than XML as your transmitted data representation.)
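For example, the delta format from step 4 might collapse into something like this in JSON (field names are placeholders):

```json
{
  "timestamp": "2011-06-01T12:00:00Z",
  "add_or_update": [{"id": 17, "attribute": "newValue"}],
  "remove": [42]
}
```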
Wonder if you have considered using a sync framework to manage the synchronization. If that interests you, take a look at the open source project OpenMobster's Sync service. You can do the following sync operations:
two-way
one-way client
one-way device
bootup
Besides that, all modifications are automatically tracked and synced with the cloud. You can have your app offline when the network connection is down. It will track any changes and automatically synchronize them with the cloud in the background when the connection returns. It also provides iCloud-like synchronization across multiple devices.
Also, modifications in the cloud are synced using push notifications, so the data is always current even though it is stored locally.
In your case,
Criteria are speed (less network data exchange), robustness (data recovery in case update fails), offline access
Speed: Only the changes are sent across the network in both directions
Robustness: It stores data in a transactional store like SQLite, and any failed updates are communicated in the SyncML payload. Only the successful operations are processed, while the failed operations are retried during the next sync.
Here is a link to the open source project: http://openmobster.googlecode.com
Here is a link to iPhone App Sync: http://code.google.com/p/openmobster/wiki/iPhoneSyncApp