How to auto-patch Azure IaaS VMs - PowerShell

We have a public website hosted on two Azure IaaS VMs which sit behind a network load balancer. What are the available solutions to auto-patch and reboot them without impacting site availability?
I am looking for a solution like this:
Take the IaaS VM out of the NLB so that traffic stops reaching it (for example, by applying a network security group to block the traffic)
Run the monthly patches/updates on the IaaS VM
Restart the IaaS VM
Re-enable the IaaS VM in the NLB to allow traffic
Move on to the next server
Are there any solutions available for this in Azure?
or
Do we need to prepare our own PowerShell scripts to do this? If it is a PowerShell script, how do we make it run once a month?
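For reference, a rough sketch of the loop described above, using the Az PowerShell modules, might look like this (resource group, load balancer, backend pool, VM and NIC names are placeholders, and the patch step itself is left as a comment since it depends on the update tooling):

# Sketch only: drain each VM from the load balancer, patch, reboot, then re-add it.
$resourceGroup = 'MyResourceGroup'   # placeholder
$lb            = Get-AzLoadBalancer -ResourceGroupName $resourceGroup -Name 'MyLoadBalancer'
$backendPool   = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name 'MyBackendPool'

# VM name => NIC name mapping (placeholders)
$vms = @{ 'WebVm01' = 'WebVm01-nic'; 'WebVm02' = 'WebVm02-nic' }

foreach ($vmName in $vms.Keys) {
    $nic = Get-AzNetworkInterface -ResourceGroupName $resourceGroup -Name $vms[$vmName]

    # 1. Drain: remove the NIC from the load balancer backend pool
    $nic.IpConfigurations[0].LoadBalancerBackendAddressPools.Clear()
    $nic | Set-AzNetworkInterface

    # 2. Patch: run the update step here, e.g. via Invoke-AzVMRunCommand
    #    or by letting the guest install already-approved updates.

    # 3. Reboot the VM and wait for the restart operation to finish
    Restart-AzVM -ResourceGroupName $resourceGroup -Name $vmName

    # 4. Re-enable: add the NIC back to the backend pool
    $nic.IpConfigurations[0].LoadBalancerBackendAddressPools.Add($backendPool)
    $nic | Set-AzNetworkInterface

    # 5. Move on to the next server
}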

Are there any solutions available for this in Azure?
I suggest using the Update Management solution in the Operations Management Suite; with it you can configure an automated patching schedule for your Azure IaaS VMs.
There you can define a one-time, weekly, or monthly schedule. Being able to add different VMs to different schedules ensures that the services running on your Azure IaaS VMs remain available during an automated patching run.
For more information, please refer to this link.
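If you do go the script route instead, one way to make it run once a month is to publish the script as an Azure Automation runbook and attach a monthly schedule to it. A minimal sketch, assuming the Az.Automation module and placeholder account, runbook and schedule names:

# Sketch only: run an existing Automation runbook once a month.
$params = @{
    ResourceGroupName     = 'MyResourceGroup'       # placeholder
    AutomationAccountName = 'MyAutomationAccount'   # placeholder
}

# A schedule that fires on the 1st of every month at 02:00 (starting tomorrow at the earliest)
$schedule = New-AzAutomationSchedule @params -Name 'MonthlyPatching' `
    -StartTime (Get-Date '02:00').AddDays(1) -MonthInterval 1 -DaysOfMonth One

# Attach the schedule to the runbook that holds the patching script
Register-AzAutomationScheduledRunbook @params -RunbookName 'Patch-WebVms' `
    -ScheduleName $schedule.Name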

Related

How to monitor Azure VMs with open-source software (Nagios/Sensu)

I am looking for a tool to monitor CPU and memory resources on Azure VMs, and I need alerting as well. We are not looking for paid solutions, but we are happy to run a monitoring server. In our cloud we will be launching and removing services, so I wonder which tool is best for monitoring hosts and services that are added/removed on a daily basis?
I am considering Ganglia, Nagios, Icinga and Sensu. Any other non-paid option is welcome too, as long as it can monitor the described scenario.

Azure Data Factory Integration Runtime with 2 nodes in different locations

I am trying to set up Data Factory between an on-premises SQL Server on a corporate network and a SQL Server hosted in an Azure VM (not a managed instance).
Is this possible? I set up both nodes on the integration runtime and opened firewall port 8060 both on premises and in the Azure VM.
My goal is to copy data as needed from the on-premises SQL Server to the SQL Server hosted in the Azure VM.
I am getting this error:
This node has some connectivity issue with the dispatcher node. Please check the connectivity between the nodes within your network.
The Integration Runtime (Self-hosted) node is trying to sync the credentials across nodes. It may take several minutes.
If this warning appears for over 10 minutes, please check the connectivity with Dispatcher node.
In my case port 8060 is being blocked by the corporate firewall, not the server firewall. Since I can't change the corporate firewall, I'll use a different workaround. However, I can't find a good demo of this working across two separate networks; all the demos use Azure VMs, so I was hoping to test out a real-life example.
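For anyone hitting the same warning, a quick way to confirm whether port 8060 is actually reachable between the two IR nodes is to run a simple TCP test from each node against the other (the host name below is a placeholder):

# Run on each integration runtime node; replace the name with the other node's host name or IP.
Test-NetConnection -ComputerName 'other-ir-node' -Port 8060
# If TcpTestSucceeded is False even though the local Windows Firewall rule is open,
# something in between (corporate firewall or NSG) is blocking the port.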

Running multiple build agents and deployment agents that service different organisations on one server

Is it possible to run multiple Azure Self-hosted build/deploy agents and multiple deployment agents on one server? Also, can these agents service more than one organisation or even multiple Azure AD Tenants?
I do realise the consequences of the server straining under I/O bottlenecks and the like; these agents will probably never have to handle more than three projects being built and/or deployed at a time, but the sources can come from different projects in different organisations or possibly different tenants.
I have deployed my deployment agents to the servers and they work fine with a Microsoft-hosted build agent (my question is about ONE of these servers, though it would eventually apply to all of them), but I am hesitant to start deploying the build agents to the same servers.
This approach is very doable and actually quite cost-effective, provided you do not have continuous deployments or your virtual machine has the I/O capacity to handle the planned traffic.
It helps to understand the basics of an agent: when you configure a Windows agent, it creates a Windows service, which in turn runs a separate process that performs the work for that agent.
Since these are independent processes, they are not impacted by the operations of the other agents. As long as they are not trying to access the same files/resources, this is a perfectly sound approach and well worth trying.
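As a rough illustration, registering two agents on the same server against different organisations is mostly a matter of extracting the agent package into separate folders and running the configuration script in each. The organisation URLs, pool and agent names and PAT variables below are placeholders:

# Sketch: two self-hosted build agents on one machine, each registered to a different organisation.
# Extract the agent zip into C:\agents\org1-agent and C:\agents\org2-agent first.
Set-Location C:\agents\org1-agent
.\config.cmd --unattended --url https://dev.azure.com/Organisation1 --auth pat --token $pat1 `
    --pool Default --agent BuildAgent-Org1 --runAsService

Set-Location C:\agents\org2-agent
.\config.cmd --unattended --url https://dev.azure.com/Organisation2 --auth pat --token $pat2 `
    --pool Default --agent BuildAgent-Org2 --runAsService

Each configuration creates its own Windows service, which is exactly the independent-process behaviour described above.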

Running maintenance on self-hosted Azure DevOps agents

I have several self-hosted Azure DevOps agents (each installed on a dedicated on-prem server) and I need to perform recurring maintenance on them (patching, etc.). Is there a good way to define those maintenance windows within Azure DevOps so that server admins can do their job without worrying about interrupting an ongoing build/release task?
There seems to be a setting related to configuring recurring maintenance (Organization Settings -> Agent Pools -> <Pool Name> -> Settings [tab]), but it appears to apply to the whole pool, and it's hard to tell which of the agents will be taken offline in which time slot.
Unfortunately, I couldn't find any documentation about it, and I'm not sure whether Azure DevOps also does something of its own on the agent machines (e.g. running cleanup, updating agents and so on).
Currently, the process requires a person with admin permissions in Azure DevOps to disable an agent so that a server admin can perform the regular maintenance, and then to re-enable it when the server admin is done. It would be great if the server admin did not have to involve an Azure DevOps admin every time for such routine work.
Since you have your own Azure Pipelines agents, maintenance is easier and you have full control over whether or not to use automatic maintenance. With Microsoft-hosted agents you could not do this, because those agents are maintained exclusively by Microsoft.
The best way to do this is to run more than one agent per machine and organise the agents into pools. If you have multiple pools, you can configure a different maintenance window schedule for each pool, so each runs at a different time and has enough time to download and configure itself.
For example, I usually configure the maintenance window for early Sunday morning, once a month on a certain date. For the pools I have, I stagger the maintenance windows at 40-minute intervals, which gives each agent enough time to download the update, update itself and restart.
Please consult this documentation for a more detailed explanation and use cases:
For Azure DevOps Server:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops-2019
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops-2019
For Azure DevOps Services (cloud-hosted, formerly Visual Studio Team Services):
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops
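If the goal is to let the server admins take an agent out of rotation themselves, outside of the pool-wide maintenance window, one option is a small script against the Azure DevOps REST API that disables the agent before maintenance and re-enables it afterwards. A sketch, assuming a PAT with Agent Pools (read, manage) scope and placeholder organisation, pool and agent IDs:

# Sketch: disable a single agent via the Azure DevOps REST API.
# Re-enable it afterwards by sending the same request with enabled = $true.
$org     = 'MyOrganisation'            # placeholder
$poolId  = 1                           # placeholder
$agentId = 5                           # placeholder
$pat     = '<personal-access-token>'   # placeholder

$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$body    = @{ id = $agentId; enabled = $false } | ConvertTo-Json
$uri     = "https://dev.azure.com/$org/_apis/distributedtask/pools/$poolId/agents/$($agentId)?api-version=6.0"

Invoke-RestMethod -Method Patch -Uri $uri -Headers $headers -ContentType 'application/json' -Body $body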

Azure Service Fabric-based Services: Prerequisite is always a prepared cluster?

If I've understood the docs properly, Azure Service Fabric-based apps/microservices cannot be installed together with their Service Fabric runtime environment in one "packaged installer" step. For example, if I want to deploy a set of microservices on premises at a company that is running a typical Windows Server 2012 or VMware IT center, am I out of luck? I'd have to require the company to first commit to (and carry out) an installation of an Azure Service Fabric cluster on several machines.
If this is the case, then Azure Service Fabric is only an option for pure cloud operations, where the Service Fabric cluster can be created on demand by the provider, or for companies that have already committed to Azure Service Fabric. This means that a provider of classical "installer-based" software cannot evolve towards the Azure Service Fabric advantages, since the datacenter policies of potential customers are unknown.
What have I missed?
Yes, you always need a cluster to run Service Fabric applications and microservices. However, it is no longer limited to a pure cloud environment: as of September last year, the on-premises version, Azure Service Fabric for Windows Server, went GA (https://azure.microsoft.com/en-us/blog/azure-service-fabric-for-windows-server-now-ga/), and that lets you run your own cluster on your own machines (physical or virtual, it doesn't matter) or in another data center (or even at another cloud provider).
Of course, as you say, this requires your customer company to either have their own cluster or for you to set one up for them (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server). They will also need the competence to manage that cluster over time. It could be argued, though, that this shouldn't be much more difficult than managing a VMware farm or setting up and managing, say, a set of Docker container hosts.
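For reference, standing up such a standalone cluster mostly comes down to downloading the standalone package onto one of the target machines, editing one of the sample ClusterConfig JSON files so its node list points at your servers, and running the script that ships in the package (the file names below are the package defaults; the config path is a placeholder):

# From the extracted Service Fabric standalone package, on one of the target machines:
# 1. Edit a sample config (e.g. ClusterConfig.Unsecure.MultiMachine.json) with your node names/IPs.
# 2. Create the cluster from that config:
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA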
For the traditional "shrink-wrapped DVD installer" type of software vendor this might not be as easy as just supplying an .exe and some system requirements, I agree with you on that. If the customer can't or doesn't want to run their own cluster, and cloud is not an option, then it definitely adds complexity to selling and delivering your solution.
The fact that you can run your own cluster on any Windows Server environment means that there is no real lock-in to Azure as a cloud platform, which I think is a big pro for Service Fabric as a framework. Once you have a cluster to receive your applications, you can focus on developing them, which cannot be said of most other cloud-based PaaS frameworks/services.