Upgrade path for Service Fabric from 5.7.198 to 6.0 - azure-service-fabric

Recently we started getting a message in the Azure portal that the Service Fabric version on the cluster we use (5.7.198) will become unsupported, which I interpret as meaning we need to upgrade to 6.0.
Has anyone done such an upgrade on a prod system with real customers and data that should be kept safe?
Is there an upgrade path we should follow (i.e. going through intermediate versions)?
Any issues that I should expect?
Thanks!

Related

Azure Data Factory self-hosted Integration Runtime auto update issues

I have a problem with the self-hosted Integration Runtime in Azure Data Factory V2.
I have a few VMs running 4.X.X IR software. Some of them had auto update enabled in DFv2.
There was an update from 4.X.X to 5.X. Since then, the IR shows as unavailable in DFv2.
It looks like the IR services running on the VMs are pointing to the wrong executable path - still using the 4.0 folder. I can fix it manually with sc config or by reinstalling the IR, but after a reboot it breaks again.
Is that a bug? Can I somehow fix it without removing the VMs?
Update:
What I did: in Data Factory V2 I went to Integration Runtimes, picked my self-hosted IR, went to Auto update and enabled it. The virtual machine hosting this IR was running older IR software (4.X.X), and there was an update to 5.X.X. Everything was working fine until I rebooted the VM. After that, the Data Factory V2 Integration Runtimes page showed an error saying that my self-hosted IR is unavailable.
I logged into the hosting VM and it turned out that the IR software cannot start its service, dmgsvc.exe. If you go to services.msc and check the Integration Runtime service pointing to dmgsvc.exe, the path is incorrect: it uses the 4.0 folder instead of 5.0. The IR software cannot start up because of that, and the error is "Error 2: The system cannot find the file specified." So what did I do? I fixed the path manually and it worked, but after the first reboot of the VM it was pointing to the 4.0 folder again. I reinstalled the software and the effect was the same.
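For reference, here is a minimal sketch (not from the original post) of the manual fix described above: repointing the IR Windows service at the 5.0 folder with sc config. The service name and executable path are assumptions - check services.msc for the actual values on your VM - and the script needs to run from an elevated prompt.

    import subprocess

    SERVICE_NAME = "DIAHostService"  # assumed IR service name; verify in services.msc
    NEW_BIN_PATH = r"C:\Program Files\Microsoft Integration Runtime\5.0\Shared\dmgsvc.exe"  # assumed 5.0 path

    # Show the current configuration; BINARY_PATH_NAME is the value that was wrong.
    print(subprocess.run(["sc", "qc", SERVICE_NAME], capture_output=True, text=True).stdout)

    # Repoint the service at the 5.0 executable (sc.exe requires a space after "binPath=",
    # which passing the value as a separate argument provides), then start it again.
    subprocess.run(["sc", "config", SERVICE_NAME, "binPath=", NEW_BIN_PATH], check=True)
    subprocess.run(["sc", "start", SERVICE_NAME], check=True)

As the update above notes, the path reverted after a reboot, so this is only a stopgap until the auto-update/.NET issue itself is resolved.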
For the upgrade to version 5.x of the Azure Data Factory self-hosted integration runtime, we require .NET Framework Runtime 4.7.2 or later. On the download page, you'll find download links for the latest 4.x version and the latest two 5.x versions.
If automatic update is on and you've already upgraded your .NET Framework Runtime to 4.7.2 or later, the self-hosted integration runtime will be automatically upgraded to the latest 5.x version.
If automatic update is on and you haven't upgraded your .NET Framework Runtime to 4.7.2 or later, the self-hosted integration runtime won't be automatically upgraded to the latest 5.x version. The self-hosted integration runtime will stay in the current 4.x version. You can see a warning for a .NET Framework Runtime upgrade in the portal and the self-hosted integration runtime client.
Refer: Troubleshoot self-hosted integration runtime
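The 4.7.2 requirement is easy to verify on the VM before relying on auto update. Here is a small sketch (assuming a Windows host with Python available on it) that reads the documented "Release" registry value, where 461808 is the minimum value corresponding to .NET Framework 4.7.2:

    import winreg

    NET_472_RELEASE = 461808  # minimum "Release" value for .NET Framework 4.7.2

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full") as key:
        release, _ = winreg.QueryValueEx(key, "Release")

    if release >= NET_472_RELEASE:
        print("OK: .NET Framework 4.7.2 or later is installed (Release=%d)." % release)
    else:
        print("Upgrade .NET Framework first (Release=%d); the IR will stay on 4.x." % release)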

Upgrading grafana v4 to v5 - or should skip and go straight to v7?

Playing catch-up with Grafana versions (Kubernetes v1.16.15 clusters).
What's currently running in PRODUCTION is very out of date (v4).
I'm only upgrading now, and refactoring all my configs for the "new" provisioning.
Should I just upgrade to v5 and release in PROD, and then incrementally upgrade again to v6?
Or should I skip v5 and go straight to v7?
According to the official docs, some functions/solutions were deprecated from one version to another. You should take a look before upgrading to make sure that you are ready for it.
Other than that, there are two ways to get your Grafana to v7:
Step-by-step upgrade:
Back up the database
Back up the Grafana configuration file (i.e. grafana.ini)
Perform the update/upgrade depending on the chosen method.
If that doesn't work, you can also do a fresh install on top of the existing one, or remove the current version, install the latest one and then check the grafana.ini file.
If you choose the first option, please note that it is considered safer to upgrade one major version at a time. Also, a database/configuration backup is always recommended (a minimal backup sketch follows below).
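To make the backup steps concrete, here is a minimal sketch assuming a default package install of Grafana: the embedded SQLite database at /var/lib/grafana/grafana.db and the config at /etc/grafana/grafana.ini. Those paths and the backup directory are assumptions; on Kubernetes you would back up the persistent volume or the external database instead, and Grafana should be stopped before copying the SQLite file.

    import shutil
    import time
    from pathlib import Path

    # Assumed default locations for a package-based install; adjust for your setup.
    SOURCES = [Path("/var/lib/grafana/grafana.db"),  # dashboards, users, datasources
               Path("/etc/grafana/grafana.ini")]     # server configuration
    BACKUP_DIR = Path("/var/backups/grafana")        # assumed destination
    stamp = time.strftime("%Y%m%d-%H%M%S")

    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    for src in SOURCES:
        dst = BACKUP_DIR / ("%s.%s" % (src.name, stamp))
        shutil.copy2(src, dst)  # copy the file with metadata preserved
        print("backed up %s -> %s" % (src, dst))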
EDIT:
If you are using the Grafana Community Kubernetes Helm Charts, note that upgrading an existing release to a new major version requires Helm v3 (>= 3.1.0) starting from Grafana v6.0.0.
Thanks @Wytrzymały Wiktor
I'm taking the "safe route" and upgrading v4->v5 first. The new functions and configuration change impacts are too great (as I said, the system is very out of date!).
I'm refactoring all my Helm charts, re-importing the old dashboards into the v5 DB, and backing up everything as you advise.
v5 will be released to PROD users, and then I'll start looking at the v6 upgrade soon after.

How to migrate gae app to python 2.7.11

I'm getting emails from Google saying that I should migrate my GAE apps from Python 2.7 to Python 2.7.11, but I can't find any example of how to do it. Do you know how it can be done?
I believe you're receiving communications about Python SSL Version 2.7 Shutdown and the necessary migration to SSL version 2.7.11. You'll find more information in the link above but basically the migration involves:
Updating to the latest Cloud SDK version via gcloud components update.
Updating the app.yaml for all versions of your application (see the sketch after these steps).
Deploying your updated application.
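For the app.yaml step, as far as I understand the announcement, the relevant change is pinning the bundled ssl library to version latest (which maps to 2.7.11). Below is a small sketch that checks an app.yaml for that pin; the file path is an assumption and PyYAML must be installed.

    import yaml  # pip install pyyaml

    with open("app.yaml") as f:  # assumed path to the app's configuration
        config = yaml.safe_load(f)

    ssl_lib = next((lib for lib in config.get("libraries", [])
                    if lib.get("name") == "ssl"), None)

    if ssl_lib and str(ssl_lib.get("version")) in ("latest", "2.7.11"):
        print("ssl library already pinned to %s" % ssl_lib["version"])
    else:
        print("app.yaml still needs the ssl library pinned, e.g.:\n"
              "libraries:\n- name: ssl\n  version: latest")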

What's upgraded exactly when you upgrade a Service Fabric cluster?

As explained in the article Controlling the fabric version that runs on your Cluster, you can choose which version of Service Fabric you want Azure to create for you.
The Service Fabric NuGet packages seem to have the same version numbers as the clusters, but older versions of the packages work just fine with newer versions of the cluster.
Now, the release notes for version 5.4.145 list a number of improvements and mention that some older versions won't be supported anymore.
What I'm failing to understand is:
Will I get the list of improvements just by upgrading my cluster, or do I also have to upgrade my NuGet packages?
Similarly, does it mean I have to upgrade my NuGet packages soon, or am I otherwise at risk of running deprecated code?
It would also be nice to get some clarification on what exactly is upgraded when I upgrade a cluster, what's upgraded when I upgrade my packages, and how the two upgrades relate to each other.
There's a difference between the Runtime and the SDK. When the cluster is upgraded, it gets a new runtime. Any improvements in that runtime will be available to existing services running in the cluster.
Upgrading the SDK (or the NuGet packages) makes new functionality available to the applications (services/actors) built on top of the cluster runtime.
I'd recommend updating the NuGet packages soon after upgrading the cluster to keep them in sync.
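As a practical aid for keeping the two in sync, here is a small sketch (the project path is hypothetical, and it assumes an SDK-style .csproj with PackageReference entries) that lists the Microsoft.ServiceFabric.* package versions a project references, so they can be compared with the cluster's runtime version shown in the portal or Service Fabric Explorer:

    import xml.etree.ElementTree as ET

    CSPROJ = "MyService/MyService.csproj"  # hypothetical project file path

    root = ET.parse(CSPROJ).getroot()
    for ref in root.iter():
        if not ref.tag.endswith("PackageReference"):
            continue
        name = ref.attrib.get("Include", "")
        if name.startswith("Microsoft.ServiceFabric"):
            # The version may be an attribute or a child element, depending on the project style.
            version = ref.attrib.get("Version") or ref.findtext("Version") or "?"
            print("%s: %s" % (name, version))

Older projects that use packages.config instead would need the equivalent lookup there.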

Move projects from an older version of TeamCity server to a newer version

I have two TeamCity servers running different software versions: one is running "TeamCity Enterprise 9.1.7" whereas the other is running "TeamCity Professional 7.0.2". What is the best way to perform the migration? I want to transfer the projects that exist on the 7.0.2 server to the 9.1.7 server.
I would be very grateful if you could provide me with the steps to undertake.
There are a lot of TC versions between 7.0.2 and 9.1.7, more than 4 years of updates: https://confluence.jetbrains.com/display/TW/Previous+Releases+Downloads
First of all, you should make a backup using the maintainDB tool (a backup sketch is at the end of this answer).
UPDATE: based on vlad-p53's comment, you can migrate directly from 7.0.2 to 9.1.7, so just follow the tutorial A Step by Step Guide to Migrating a TeamCity Instance from One Server to Another.
Otherwise, migrate from one major version to the next and test the results:
7.0.2 to 8.0, and test the results
8.0 to 9.0, and test the results
9.0 to 9.1.7
If a migration to a major version does not work, you can try an earlier version and repeat the process.
Each release has release notes that explain whether there are any migration issues; I recommend reading them.
For each migration you can follow the steps of this tutorial: A Step by Step Guide to Migrating a TeamCity Instance from One Server to Another.
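Here is the backup step sketched from Python; the TeamCity install directory, the backup file name, and the exact maintainDB invocation are assumptions, so check the maintainDB documentation for your TeamCity version before running it on the 7.0.2 server.

    import subprocess
    from pathlib import Path

    TEAMCITY_HOME = Path(r"C:\TeamCity")                    # assumed install directory
    MAINTAIN_DB = TEAMCITY_HOME / "bin" / "maintainDB.cmd"  # maintainDB.sh on Linux
    BACKUP_FILE = r"C:\backups\teamcity_7.0.2_backup.zip"   # assumed target file

    # "backup -F <file>" asks maintainDB to write a backup archive to the given file
    # (assumed invocation; verify the flags against the docs for your version).
    subprocess.run([str(MAINTAIN_DB), "backup", "-F", BACKUP_FILE], check=True)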