Let's say I have a database with a v1 schema, and an application that is tightly coupled to that v1 schema, i.e. an SQLException is thrown if the records in the database don't match the entity classes.
How should I deploy a change that alters the database schema and rolls out the new application without a race condition, i.e. without a window in which a user queries the app for a field that no longer exists?
This problem actually isn't specific to Kubernetes; it happens in any system with more than one server -- Kubernetes just makes it more front-and-center because of how automatic the rollover is. The words "tightly coupled" in your question are a dead giveaway of the real problem here.
That said, the "answer" actually will depend on which of the following mental models is the better fit for your team:
do not make two consecutive schemas contradictory
use a "maintenance" page that keeps traffic off of the pods until they are fully rolled out
just accept the SQLExceptions and add better retry logic to the consumers
We use the first one, because the kubernetes rollout is baked into our engineering culture and we know that pod-old and pod-new will be running simultaneously and thus schema changes need to be incremental and backward compatible for at minimum one generation of pods.
However, sometimes we just accept that the engineering effort to do that is more cost than the 500s that a specific breaking change will incur, so we cheat and scale the replicas low, then roll it out and warn our monitoring team that there will be exceptions but they'll blow over. We can do that partially because the client has retry logic built into it.
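To make the first option concrete, here is a minimal sketch of the usual expand/contract sequence, assuming PostgreSQL with psycopg2; the table and column names are hypothetical:

```python
# Minimal expand/contract sketch (assumed PostgreSQL + psycopg2).
# The table/column names (users, full_name, display_name) are hypothetical.
import psycopg2

def migrate_expand(conn):
    """Phase 1: add the new column without touching the old one.
    Old-generation pods keep reading the old column; new pods can use either."""
    with conn, conn.cursor() as cur:
        cur.execute("ALTER TABLE users ADD COLUMN IF NOT EXISTS display_name text")
        cur.execute("UPDATE users SET display_name = full_name WHERE display_name IS NULL")

def migrate_contract(conn):
    """Phase 2: run only after every pod of the previous generation is gone."""
    with conn, conn.cursor() as cur:
        cur.execute("ALTER TABLE users DROP COLUMN IF EXISTS full_name")

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=app")  # hypothetical connection string
    migrate_expand(conn)
    # ... roll out the new application generation, wait for it to finish ...
    # migrate_contract(conn)
```

The point is that each deployed generation only ever sees a schema it understands; the destructive half of the change waits until no pod from the previous generation can still be serving traffic.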
I'm planning on using rxdb + hasura/postgresql in the backend. I'm reading this rxdb page for example, which off the bat requires sync-able entities to have a deleted flag.
Q1 (main question)
Is there ANY point at which I can finally hard-delete these entities? What conditions would have to be met - e.g. could I simply use "older than X months" and then force my app to only ever display data less than X months old?
Is such a hard-delete, if possible, best carried out directly in the central db, since it will be the source of truth? Would there be any repercussions client-side that I'm not foreseeing/understanding?
I foresee the number of deleted rows growing rapidly in my app and I don't want to have to store all this extra data forever.
Q2 (bonus / just curious)
What is the (algorithmic) basis for needing a 'deleted' flag? Is it that it's just faster to check a flag rather than to check for the omission of an object from, say, a very large list? I apologize if it's kind of a stupid question :(
Ultimately it comes down to a decision that's informed by your particular business/product with regards to how long you want to keep deleted entities in your system. For some applications it's important to always keep a history of deleted things or even individual revisions to records stored as a kind of ledger or history. You'll have to make a judgement call as to how long you want to keep your deleted entities.
I'd recommend that you also add a deleted_at column if you haven't already and then you could easily leverage something like Hasura's new Scheduled Triggers functionality to run a recurring job that fully deletes records older than whatever your threshold is.
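For example, the cron trigger could simply call a small cleanup webhook. A rough sketch, assuming PostgreSQL with psycopg2 and Flask, and a hypothetical todos table with deleted/deleted_at columns and a six-month retention window:

```python
# Hypothetical cleanup endpoint for a Hasura cron/scheduled trigger to call.
# Table name (todos), columns (deleted, deleted_at) and the retention window
# are assumptions; adjust to your own schema.
from flask import Flask, jsonify
import psycopg2

app = Flask(__name__)
RETENTION = "6 months"  # hypothetical threshold

@app.route("/cleanup", methods=["POST"])
def cleanup():
    conn = psycopg2.connect("dbname=app")  # hypothetical connection string
    with conn, conn.cursor() as cur:
        cur.execute(
            "DELETE FROM todos "
            "WHERE deleted = true AND deleted_at < now() - %s::interval",
            (RETENTION,),
        )
        purged = cur.rowcount
    conn.close()
    return jsonify({"purged": purged})
```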
You could also leverage Hasura's permissions system to ensure that rows that have been deleted aren't returned to the client. There is documentation with examples of ways to work with soft deletes in Hasura.
For your second question: when a client syncs, it only pulls records that have changed since its last sync, and a hard-deleted row would simply be absent from that delta, so the client would never find out it was removed. Checking a deleted flag on each changed record is much faster than trying to diff the entire dataset looking for things that are now missing.
Does anyone know the best practice for overwriting records in Google Cloud DNS using the API? https://cloud.google.com/dns/api/v1/changes/create does not help!
I could delete and then re-create the record, but that is not nice ;) and could cause an outage.
Regards
The Cloud DNS API uses Changes objects to perform the update actions; you can create Changes but you don't ever delete them. In the Cloud DNS API, you never operate directly on the resource record sets. Instead, you create a Changes object with your desired additions and deletions and if that is created successfully, it applies those updates to the specified resource record sets in your managed DNS zone.
It's an unusual mental model, sort of like editing a file by specifying a diff to be applied, or appending to the commit history of a Git repository to change the contents of a file. Still, you can certainly achieve what you want to do using this API, and it is applied atomically at the authoritative servers (although the DNS system as a whole does not really do anything atomically, due to caching, so if you know you will be making changes, reduce your TTLs before you make them). The atomicity here is more about the updates themselves: if you have multiple applications making changes to your managed zones and there are conflicting changes to a particular record set, the create operation will fail and you will have to retry the change with modified deletions (rather than having changes be silently overwritten).
Anyhow, what you want to do is to create a Changes object with deletions that specifies the current resource record set, and additions that specifies your desired replacement. This can be rather verbose, especially if you have a domain name with a lot of records of the same type. For example, if you have four A records for mydomain.example (1.1.1.1, 2.2.2.2, 3.3.3.3, and 4.4.4.4) and want to change the 3.3.3.3 address to 5.5.5.5, you need to list all four original A records in deletions and then the new four (1.1.1.1, 2.2.2.2, 4.4.4.4, and 5.5.5.5) in additions.
The Cloud DNS documentation provides example code boilerplate that you can adapt to do what you want: https://cloud.google.com/dns/api/v1/changes/create#examples; you just need to set the deletions and additions for the Changes object you are creating.
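To illustrate the A-record example above in code, here is a sketch using the google-api-python-client discovery client; the project and zone names are placeholders, and authentication is assumed to come from application default credentials:

```python
# Sketch: swap 3.3.3.3 for 5.5.5.5 by creating a Changes object.
# "my-project" and "my-zone" are placeholders.
from googleapiclient import discovery

service = discovery.build("dns", "v1")

change_body = {
    # The deletions entry must match the existing record set exactly
    # (name, type, ttl and the full list of rrdatas).
    "deletions": [{
        "name": "mydomain.example.",
        "type": "A",
        "ttl": 300,
        "rrdatas": ["1.1.1.1", "2.2.2.2", "3.3.3.3", "4.4.4.4"],
    }],
    "additions": [{
        "name": "mydomain.example.",
        "type": "A",
        "ttl": 300,
        "rrdatas": ["1.1.1.1", "2.2.2.2", "4.4.4.4", "5.5.5.5"],
    }],
}

response = service.changes().create(
    project="my-project", managedZone="my-zone", body=change_body
).execute()
print(response["status"])  # "pending" until the change has been applied
```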
I have never used the API for this purpose, but if you use the command line, i.e. gcloud, to update DNS records, it wraps the change in a single transaction: deleting the old record and adding the updated record are executed together. Since the transaction is applied atomically, it shouldn't cause any outage.
Personally, I have never witnessed an outage while using gcloud to update DNS settings for my domain.
When registering libraries in Dataphor what is the difference between registering with and without reconciliation?
In my experience from learning and using this DBMS we've always registered without reconciliation. What are some example cases where we may choose one option over the other?
Registering with reconciliation means that the Data Definition Language (DDL) statements will be run against the target device(s). This is desired behavior when starting from a blank or non-existing database, where you want Dataphor to create the needed structures. Otherwise, the preferred methodology is to register without reconciliation so that any existing database is ignored, and use the DeviceReconciliationScript() operator to reconcile the changes.
We have a number of legacy systems that we're unable to make changes to - however, we want to start taking data changes from these systems and applying them automatically to other systems.
We're thinking of some form of service bus (no specific tech picked yet) sitting in the middle, and a set of bus adapters (one per legacy application) to translate between database specific concepts and general update messages.
One area I've been looking at is using Change Data Capture (CDC) to monitor update activity in the legacy databases, and use that information to construct appropriate messages. However, I have a concern - how best could I, as a consumer of CDC information, distinguish changes applied by the application vs changes applied by the bus adapter on receipt of messages - because otherwise, the first update that gets distributed by the bus will get re-distributed by every receiver when they apply that change to their own system.
If I was implementing "poor man's" CDC - i.e. triggers - then those triggers execute within the context/transaction/connection of the original DML statements, so I could either design them to ignore one particular user (the user applying incoming updates from the bus), or set and detect a session property to similarly ignore certain updates.
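For the trigger-based approach, something like the following sketch is what I have in mind on the adapter side (assuming SQL Server 2016+ and pyodbc; the key name, table and columns are hypothetical), with the triggers reading SESSION_CONTEXT(N'change_source') and skipping their publish logic when it equals 'bus':

```python
# Sketch: mark bus-applied writes via session context so triggers can ignore them.
# Assumes SQL Server 2016+ and pyodbc; dbo.Employee and the key name are hypothetical.
import pyodbc

def apply_bus_update(conn_str, employee_id, new_name):
    conn = pyodbc.connect(conn_str, autocommit=False)
    cur = conn.cursor()
    # Tag this connection so triggers can tell the write came from the bus adapter.
    cur.execute("EXEC sp_set_session_context @key = N'change_source', @value = N'bus'")
    cur.execute("UPDATE dbo.Employee SET Name = ? WHERE EmployeeId = ?",
                new_name, employee_id)
    conn.commit()
    conn.close()
```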
Any ideas?
If I understand your question correctly, you're trying to define a message routing structure that works with a design you've already selected (using an enterprise service bus) and a message implementation that you can use to flow data off your legacy systems, forwarding changes one way to your newer systems.
The difficulty is that you're trying to apply changes in such a way that they don't themselves generate a CDC message from the clients receiving the data bundle from your legacy systems. In fact, all you're concerned about is having your newer systems consume the data without propagating messages back to your bus, creating unnecessary crosstalk that could grow exponentially and overload your infrastructure.
The secret is how MSSQL's CDC features reconcile changes as they propagate through the network. Specifically, note this caveat:
All the changes are logged in terms of LSN or Log Sequence Number. SQL distinctly identifies each operation of DML via a Log Sequence Number. Any committed modifications on any tables are recorded in the transaction log of the database with a specific LSN provided by SQL Server. The __$operation column values are: 1 = delete, 2 = insert, 3 = update (values before update), 4 = update (values after update).
cdc.fn_cdc_get_net_changes_dbo_Employee gives us all the records net changed falling between the LSNs we provide in the function. We have three records returned by the net_change function; there was a delete, an insert, and two updates, but on the same record. In case of the updated record, it simply shows the net changed value after both the updates are complete.
For getting all the changes, execute cdc.fn_cdc_get_all_changes_dbo_Employee; there are options either to pass 'ALL' or 'ALL UPDATE OLD'. The 'ALL' option provides all the changes, but for updates, it provides the after-update values. Hence we find two records for updates. We have one record showing the first update when Jason was updated to Nichole, and one record when Nichole was updated to EMMA.
While this documentation is somewhat terse and difficult to understand, it appears that changes are logged and reconciled in LSN order. Competing changes should be discarded by this system, allowing your consistency model to work effectively.
Note also:
CDC is by default disabled and must be enabled at the database level followed by enabling on the table.
Option B then becomes obvious: institute CDC on your legacy systems, then use your service bus to translate these changes into updates that aren't bound to CDC (using, for example, raw transactional update statements). This should allow for the one-way flow of data that you seek from the design of your system.
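To sketch what the legacy-side half of that could look like (assuming SQL Server CDC enabled on a dbo.Employee table as in the quoted docs, pyodbc, and a placeholder publish() callable standing in for your bus client):

```python
# Sketch: poll CDC for new changes on dbo.Employee and hand them to the bus.
# The capture instance name (dbo_Employee) and publish() are assumptions.
import pyodbc

def poll_changes(conn_str, from_lsn, publish):
    # For the very first poll, from_lsn can be seeded with
    # sys.fn_cdc_get_min_lsn('dbo_Employee').
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    cur.execute("SELECT sys.fn_cdc_get_max_lsn()")
    to_lsn = cur.fetchone()[0]
    cur.execute(
        "SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Employee(?, ?, N'all')",
        from_lsn, to_lsn,
    )
    for row in cur.fetchall():
        publish(row)  # translate __$operation plus the captured columns into a bus message
    conn.close()
    # Persist to_lsn; start the next poll just past it (sys.fn_cdc_increment_lsn).
    return to_lsn
```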
For additional methods of reconciling changes, consider the concepts raised by this Wikipedia article on "eventual consistency". Best of luck with your internal database messaging system.
Windows Workflow Foundation has a problem with slow persistence of WF instances.
I'm planning a project whose business layer will be based on WF-exposed WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish.
I was led to believe that, given WF's slowness when doing persistence, my scenario would be unworkable for performance reasons.
I have the following questions:
Is this true? Will my performance be poor under that load (given WF persistence speed limitations)?
How can I solve the problem?
We currently have two possible solutions:
1. Each new business process request (e.g. "give me a new driver's license") will be a new WF instance, and the number of persistence operations will be limited by forwarding all status request operations to saved state values in a separate database.
2. Have only a small number of workflow instances up at any given time, without any persistence whatsoever (only in case of system crashes etc.), by breaking each workflow step into a separate workflow, and having that workflow handle every business process request instance in the system that is currently at that step (e.g. I'm submitting my driver's license request form, which is step one; we have 100 cases of that, and my step-one workflow will handle every case simultaneously).
I'm very interested in a solution to that problem. If you want to discuss it, please feel free to mail me at nstjelja#gmail.com
The number of hydrated, executing workflows will be determined by environmental factors: memory, server throughput, etc. Persistence issues really only come into play if you are loading and unloading workflows all the time, i.e. in (near) real time; in that case workflow may not be the best solution.
In my current project we also use WF with persistence. We don't have quite the same volume (perhaps ~2000 instances/month), and they are usually not as long to complete (they are normally done within 5 minutes, in some cases a few days). We did decide to split up the main workflow in two parts, where the normal waiting state would be. I can't say that I have noticed any performance difference in the system due to this, but it did simplify it, since our system sometimes had problems matching incoming signals to the correct workflow instance (that was an issue in our code; not in WF).
I think that if I were to start a new project based on WF I would rather go for smaller workflows that are invoked in sequence, than to have big workflows handling the full process.
To be honest I am still investigating the performance characteristics of workflow foundation.
However if it helps, I have heard the WF team have made many performance improvements with the new release of WF 4.
Here are a couple of links that might help (if you haven't seen them already):
A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 (discusses performance improvements)
Performance Characteristics of Windows Workflow Foundation (applies to WF 3.0)
WF 3.5 had a performance problem. WF4 does not - 20,000 WF instances per month is nothing. If you were talking per minute I'd be worried.