How can I modify attributes in Record Types after the container is deployed to production? - CloudKit

I need to remove and add attributes on one of my record types, but the container is already deployed to production. Is there any chance of doing this?
As you can see, when I hover my mouse over the X button, it is not possible to remove attributes.

You should be able to just make modifications to the development container and then deploy that again to production. See the documentation: https://developer.apple.com/library/ios/documentation/General/Conceptual/iCloudDesignGuide/DesigningforCloudKit/DesigningforCloudKit.html
It says: Prior to deploying your app, you migrate your schema and data to the production environment using CloudKit Dashboard. When running against the production environment, the server prevents your app from changing the schema programmatically. You can still make changes with CloudKit Dashboard but attempts to add fields to a record in the production environment result in errors.
Then see the section 'Future Proofing Your Records'.

Temporarily disabling default services in Service Fabric using PowerShell

The concrete question
For those who just want the direct questions:
Is there a way to temporarily disable the default services on a Service Fabric application type so that a new application can be installed (using PowerShell) without automatically installing any default services?
A solution proposed here is to remove the default services from the manifest and restore them later. I am able to write a PowerShell script that adjusts the application manifest accordingly, but how do I update the application type using PowerShell, assuming I have already altered the manifest?
Any solution that solves the contextual problem without requiring manual config meddling is acceptable - my proposed solution is probably not the only possible solution. We do explicitly want to avoid manual meddling.
When allowing meddling, we are already able to just comment out the default services when we need to. We're specifically looking for a solution that requires no meddling as this reduces bugs and debugging issues.
The context
I'm running into an issue with using the application manifest's default services during local development.
I am aware of the general "don't use default services" advice, and it is being followed. During CI build, the default services are removed and will not be relied upon for any of our clusters in Azure. The only exception here is local developer machines, which use default services to keep the developer F5 experience nicer by enabling all services when starting a debug session.
We have written specialized scripts that provision a new tenant (SF application) with their own set of services (SF service). Not every tenant should get every service, we want to opt-in to the services, which is what the script already does (based on a mapping that we manage elsewhere, which is not part of the current question as the provisioning script exists and works).
However, when default services are enabled, every tenant already gets every service and the actual opt-in provisioning is useless. This is the issue we're trying to fix.
This same script works in our production cluster, since there are no default services configured there. The question focuses solely on the local development environment.
Essentially, we're dealing with two scenarios during local development:
When debugging, we want the default services to be on because it allows us to run all of our services by pressing F5 (and not requiring any further action)
When testing our provisioning script, we don't want default services, because they get in the way of our selective provisioning behavior
I'm aware that commenting the default services out of the manifest solves the issue, but this requires developers constantly toggling the content of the manifest and reinstalling the application type, which we'd like to avoid.
Ideally, we want to have the default services in the manifest (as is currently the case) but then have the provisioning script "disable" the default services for its own runtime (and restore the default services before exiting), as this gets us the desired behavior in both cases.
What is the solution that requires the least manual developer meddling to get the desired behavior in both scenarios?
I'm currently trying to implement it so that the provisioning script:
Copies the application manifest to a backup location
Removes the default services from the real manifest
Updates the application type using the new manifest (i.e. without default services)
Runs the provisioning logic
Restores the real manifest using the backup manifest from step 1
Updates the application type using the restored manifest (i.e. with default services)
It is specifically steps 3 and 6 that I do not know how to implement.
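For illustration, here is roughly what I imagine steps 3 and 6 would look like in PowerShell. This is an untested sketch with placeholder names and versions; whether this cmdlet sequence is the right way to "update" a type in place is exactly what I cannot confirm:
# Sketch of steps 3 and 6: swap the registered application type for one built
# from a modified manifest. Type name, version, and paths are placeholders.
Connect-ServiceFabricCluster localhost:19000
# A registered type version cannot be replaced while instances of it are running,
# so remove any running application instances of that type first.
Get-ServiceFabricApplication |
    Where-Object { $_.ApplicationTypeName -eq "MyAppType" } |
    ForEach-Object { Remove-ServiceFabricApplication -ApplicationName $_.ApplicationName -Force }
Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" -Force
# Upload and register the package that contains the edited manifest.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\pkg\MyApp" `
    -ImageStoreConnectionString "fabric:ImageStore" -ApplicationPackagePathInImageStore "MyApp"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyApp"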
Consider having two sfproj projects in the solution. One with default services, one without.
Also look into using a start-service.ps1 script instead of default services. This way the two projects can use the same application manifest.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-debugging-your-application#running-a-script-as-part-of-debugging
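A minimal sketch of what such a start-service.ps1 might look like, assuming a stateless service and placeholder names (adapt the list of New-ServiceFabricService calls to the services in your solution):
# start-service.ps1 (sketch): create explicitly what default services would have created.
Connect-ServiceFabricCluster localhost:19000
$appName = "fabric:/MyApp"
# One New-ServiceFabricService call per service you want running while debugging.
New-ServiceFabricService -ApplicationName $appName `
    -ServiceName "$appName/MyStatelessService" `
    -ServiceTypeName "MyStatelessServiceType" `
    -Stateless -PartitionSchemeSingleton -InstanceCount 1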

Is there any way to move individual entity from one server to another in Master data services?

I have a master data model with some entities, and it is deployed on a production server.
Now I have created two more new entities on the development server and want to move only these two entities.
If anyone has any idea, please share it with me.
Thanks!
You have two options.
Web app (easiest): On your dev server, go to System Administration. Click on Deployment and create a package. You then deploy this package by going to the production server and following the same steps, but choosing 'Deploy' instead of 'Create' under the 'Deployment' button.
The alternative is to use the MDSModelDeploy.exe. You can find it on the server by going to the appropriate folder. Generally it's in this path: C:\Program Files\Microsoft SQL Server\130\Master Data Services\Configuration.
I recommend you use this method, as you have more control. You can choose to deploy with data or without, or clone your model. You can read more here: https://learn.microsoft.com/en-us/sql/master-data-services/deploy-a-model-deployment-package-by-using-mdsmodeldeploy
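As a rough illustration (service, model, version, and path names are placeholders; note this moves the whole model rather than individual entities):
# On the development server: create a package that includes master data.
& "C:\Program Files\Microsoft SQL Server\130\Master Data Services\Configuration\MDSModelDeploy.exe" `
    createpackage -service MDS1 -model "MyModel" -version "VERSION_1" `
    -package "C:\temp\MyModel.pkg" -includedata
# On the production server: deploy the package as an update to the existing model.
& "C:\Program Files\Microsoft SQL Server\130\Master Data Services\Configuration\MDSModelDeploy.exe" `
    deployupdate -service MDS1 -package "C:\temp\MyModel.pkg" -version "VERSION_1"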
I can also recommend you consider the ModelPackageEditor when your model starts getting big. Then you have control over what you need to deploy, as in entities, views, business rules etc.
You need to have a deployment strategy in place, because if your development and production environments are not exactly the same, you will run into deployment errors. This normally happens when, for example, business rules were created on the environment to which you are deploying but do not exist on your dev environment. MDS uses copious amounts of IDs, and if the models are not in sync, you run into problems.

Best Strategy for deploying test and dev app versions to Google Cloud Platform

Google Cloud Platform
I've got an Angular 2 app and a Node.js middleware (LoopBack) running as services in App Engine in a project.
For the database, we have a Compute Engine running PostgreSQL in that same project.
What we want
The testing has gone well, and we now want to have a test version (for ongoing upgrade testing/demo/etc) and a release deployment that is more stable for our initial internal clients.
We are going to use a different database in psql for the release version, but could use the same server for our test and deployed apps.
Should we...?
create another GCP project, with another gcloud setup on my local box, to deploy to that new project for our release deployment,
or is it better to deploy multiple versions of the services to the single project with different prefixes - and if so, how do I do that?
Cost is a big concern for our little nonprofit. :)
My recommendation is the following:
Create two projects, one for each database instance. You can mess around all you want in the test project, and don't have to worry about messing up your prod deployment. You would need to store your database credentials securely somewhere. A possible solution is to use Google Cloud Project Metadata, so your code can stay the same between projects.
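For example (db_password is a hypothetical key; this is just a sketch of the metadata approach, not a full secrets-management solution):
# Store a per-project value in the project's common metadata.
gcloud compute project-info add-metadata --metadata db_password=changeme
# From inside an instance in that project, read it back from the metadata server.
Invoke-RestMethod -Headers @{ "Metadata-Flavor" = "Google" } `
    -Uri "http://metadata.google.internal/computeMetadata/v1/project/attributes/db_password"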
When you are ready to deploy to production, I would recommend deploying a new version of your App Engine app in the production project, but not promoting it to the default.
gcloud app deploy --no-promote
This means customers will still go to the old version, but the new version will be deployed so you can make sure everything is working. After that, you can slowly (or quickly) move traffic over to the new version.
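With the gcloud CLI, that flow might look like this (version IDs and the split ratios are placeholders):
# Deploy the new version without routing any traffic to it.
gcloud app deploy --version v2 --no-promote
# Send 10% of traffic to the new version, keeping 90% on the old one.
gcloud app services set-traffic default --splits v1=0.9,v2=0.1
# Once you're confident, move all traffic over.
gcloud app services set-traffic default --splits v2=1.0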
At about 8:45 into this video, traffic splitting is demoed:
https://vimeo.com/180426390
Also, I would recommend aggressively shutting down unused App Engine flexible environment deployments to save costs.

Redeploy Service Fabric application without version change

I've read about partial upgrades, but they always require changing some part of the application package. I'd like to know if there's a way to redeploy a package without a version change - in a way similar to what VS does when deploying to the dev cluster.
On your local dev cluster, VS simply deletes the application before it starts the re-deployment. You could do the same in your production cluster; however, this results in downtime, since the application is not accessible during that time.
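In PowerShell, that delete-and-redeploy cycle might look roughly like this (cluster endpoint, names, versions, and paths are placeholders):
# Remove the running application and its registered type...
Connect-ServiceFabricCluster localhost:19000
Remove-ServiceFabricApplication -ApplicationName "fabric:/MyApp" -Force
Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" -Force
# ...then upload, register, and create the same version again.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\pkg\MyApp" `
    -ImageStoreConnectionString "fabric:ImageStore" -ApplicationPackagePathInImageStore "MyApp"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyApp"
New-ServiceFabricApplication -ApplicationName "fabric:/MyApp" `
    -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.0"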
What's the reason why you wouldn't want to use the regular monitored upgrade? It has many advantages, like automatic rollbacks and so on.

AWS OpsWorks - updating the app, will it drainstop Apache?

I have an autoscaled environment in production, which is currently chaotic when we update a build on it, so we thought we had better move to AWS OpsWorks to make the process easier for us.
We can't afford downtime - not now, not ever; even a second's worth of loss while updating a build and perhaps restarting Apache costs a fortune.
We also can't possibly afford to just let our machines be terminated by the autoscaling policy when a new update comes in on a new AMI-based EC2 machine. When autoscaling terminates a machine, under any circumstances, it doesn't care about the running requests on that machine; it just shuts it down. What it should do instead is a graceful shutdown - something like a drainstop on Apache - so it can at least finish the work in hand first.
Now that OpsWorks is here and we are planning to use it to update our builds more automatically, will a new update push run the recipes again? In fact, the following paragraph, which I just read, worries me more. Does it mean that it won't update the build automatically on new instances?
After you have modified the app settings, you must deploy the app. When you first deploy an app, the Deploy recipes download the code and related files to the app server instances, which then run the local copy. If you modify the app in the repository, you must ensure that the updated code and related files are installed on your app server instances. AWS OpsWorks automatically deploys the current app version to new instances when they are started. For existing instances, however, the situation is different:
You must manually deploy the updated app to online instances.
You do not have to deploy the updated app to offline instance store-backed instances, including load-based and time-based instances; AWS OpsWorks automatically deploys the latest app version when they are restarted.
You must restart offline EBS-backed 24/7 instances and manually deploy the app; AWS OpsWorks does not run the Deploy recipes on these instances when they are restarted.
You cannot restart offline EBS-backed load-based and time-based instances, so the simplest approach is to delete the offline instances and add new instances to replace them. Because they are now new instances, AWS OpsWorks will automatically deploy the current app version when they are started.
First of all, let me state that I started looking into OpsWorks only about two weeks ago, so I'm far from being a pro. But here's my understanding of how it works:
We need to differentiate between instances that are instance store-backed and instances that are EBS-backed:
The instance store disappears together with the instance once it's shut down. Therefore, bringing it up again starts from zero: it has to download the latest app again and will deploy that.
For EBS-backed instances, the deployed code remains intact, persisting beyond the lifetime of the instance to which it is attached. Therefore, bringing an EBS-backed instance back to life will not update your app automatically; the old version remains deployed.
So your first decision needs to be which instance type to use. It is generally a good idea to have the same version of your app on all instances, so I would suggest going with EBS-backed instances, which will not automatically deploy new versions when booting up. In this case, deploying a new version means bringing up brand-new instances that run the new code automatically (as they are new) and then destroying the old instances. You will have a very short window during which old and new code run side by side.
However, if you prefer to always have the very latest version deployed and can afford risking discrepancies between individual instances for an extended period of time (e.g. having different app versions deployed depending on when an instance was originally started), then instance store-backed might be your choice. Every time a new instance spins up, the latest and greatest code is deployed. If you want to update existing ones, just bring up new instances instead and kill the existing ones.
Both strategies should give you the desired effect of zero downtime; the difference is in when and how the latest code is deployed. Combine this with HAProxy to have better control over which servers are used; you can gradually move traffic from old instances to new instances, for example.
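For completeness, the "manually deploy the updated app to online instances" step from the quoted docs can be scripted. With the AWS CLI it might look roughly like this (the stack and app IDs are placeholders, and the shorthand --command syntax is assumed):
# Run the Deploy recipes for the app on the stack's online instances.
aws opsworks create-deployment `
    --stack-id 11111111-2222-3333-4444-555555555555 `
    --app-id aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee `
    --command Name=deploy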