Changing an SF cluster's default RDP ports - azure-service-fabric

I've created a SF cluster from the Azure portal and by default it uses incrementing ports starting at 3389 for RDP access to the VMs. How can I change this to another range?
Additionally, or even alternatively, is there a way to specify the range when I create a cluster?
I realize this may not be so much of an SF question as a Load Balancer or Scale Set question, but I ask in the context of SF because that is how I created this setup. In other words, I did not create the load balancer or scale set myself.

You can change this with an ARM template (Azure Resource Manager).
Since you will run into situations from time to time where you want to change parts of your infrastructure, I'd recommend creating the whole cluster from an ARM template instead of through the portal. By doing so you could also create the cluster in an existing VNET, use internal load balancers, etc.
To create the cluster from an ARM template, you can either start with the Azure Quickstart template or click "Export template" in the Azure portal right before you would actually create the cluster.
To change the inbound NAT rules for RDP, edit the inboundNatPools section of the load balancer resource in the template.
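As a rough sketch (the pool and variable names below follow the quickstart template and may differ in yours), raising frontendPortRangeStart/frontendPortRangeEnd moves the external RDP ports to the 4000-4500 range while the VMs themselves still listen on 3389:

"inboundNatPools": [
  {
    "name": "LoadBalancerBEAddressNatPool",
    "properties": {
      "backendPort": 3389,
      "frontendIPConfiguration": {
        "id": "[variables('lbIPConfig0')]"
      },
      "frontendPortRangeStart": 4000,
      "frontendPortRangeEnd": 4500,
      "protocol": "tcp"
    }
  }
]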
If you want to change your existing cluster, you can either export your existing resource group as a template, or create a template that contains just the load balancer resource and re-deploy only that part.
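For example, with a recent Azure CLI (the resource group name is a placeholder) you could export the deployed resources and then redeploy the adjusted template:

az group export --name my-sf-rg > template.json
az deployment group create --resource-group my-sf-rg --template-file template.json

Note that exported templates usually need some cleanup (secrets and certain properties are not exported) before they can be redeployed.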
Working with ARM templates takes some getting used to, but it has many advantages: it lets you change settings that cannot be configured through the portal, easily re-create the cluster for different environments, and so on.

Related

Config lives with the application

I don't think this is possible, but I'll ask anyway.
I have a network deployment that creates a VNet, subnets, and NSGs. I then have a separate deployment that creates an application, and the app needs to update an NSG so that its traffic is allowed.
But if I re-run the VNet deployment, the application-specific changes are removed because they aren't defined in the VNet deployment's main.bicep.
I know I could write some code that takes the NSG values and re-applies them after the VNet deployment, but this would create downtime.
I'm pretty sure there isn't a way to persist the changes, but I thought I'd ask how others handle this.

Why do I see files from another GCP Kubernetes cluster in a different cluster?

I am trying to learn Google Cloud and very new to it.
I created 1 project in GCP: project1.
And I created a Kubernetes cluster in project1 with the name p1-cluster1. On this cluster, I clicked the Connect button and a terminal opened on the GCP page. I created a new directory named development under /home/me and developed a personal project under /home/me/development, so I now have a bunch of code there.
I then decided to develop a second personal project. For this purpose, I created a new project named project2 in GCP and then a new Kubernetes cluster named p2-cluster2. When I connect to this cluster, a terminal window opens and I automatically end up in my home folder /home/me. I expected to see an empty folder, but instead I see the development folder that I created in project1 on p1-cluster1.
Why do I see contents/files of another project (and another cluster) in a different project (and cluster)? Am I doing something wrong? If this is normal, what is the benefit of creating different projects?
I can create a new folder with the name development_2. But what if I accidentally ruin things (e.g. the operating system) while working under the development folder? Do I also ruin things for the project under the development_2 folder?
And also, if I simultaneously run 2 tasks in 2 different projects, will they compete with each other for system resources (e.g. memory and CPU)?
This is extremely confusing and I could not clear it up by reading the documentation. I would really appreciate the help of more experienced people in this specific domain.
I suppose you use Cloud Shell to "connect" to the clusters. Cloud Shell is just a convenient alternative to your own machine's shell: you get a small VM, tied to your Google account rather than to any project or cluster, that you can use for development instead of your local machine.
That is why you see the same files and the same directory in both cases. Your home directory is not on either GKE cluster; it lives on that "local" machine, which from GKE's perspective is just a client.
In order to actually execute something on a cluster you have to use kubectl, which has a concept of kube contexts. If you have multiple clusters, you need multiple kube contexts.
So if you click "Connect" from p1-cluster1, your kube context will be set to that cluster.
See these articles for more detail:
https://cloud.google.com/kubernetes-engine/docs/quickstart#launch
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
EDIT:
So when you do
gcloud container clusters get-credentials p1-cluster1
on your local machine, it sets the kube context so that
kubectl get pods
runs against that cluster. Only that command is executed on the cluster.
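If you work with several clusters, you can list the stored contexts and switch between them explicitly. For example (the context name shown is just the typical gke_PROJECT_ZONE_CLUSTER pattern; yours will differ):

kubectl config get-contexts
kubectl config use-context gke_project1_europe-west1-b_p1-cluster1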

Using CloudFormation to build an environment on a new account

I'm trying to write some CloudFormation templates to set up a new account with all the resources needed for running our site. In this case we'll be setting up a UAT/test environment.
I have setup:
VPC
Security groups
ElastiCache
ALB
RDS
Auto scaling group
What I'm struggling with is that when I bring up my auto scaling group with our silver AMI, it fails health checks and the auto scaling group gets rolled back.
I have our code in a git repo, which is to be deployed via CodeDeploy, but it seems I can't add a CodeDeploy deployment without an auto scaling group, and I can't set up the auto scaling group without CodeDeploy.
Should I modify our silver AMI to fake the health checks so the auto scaling group can be created? Or should I create the auto scaling group without health checks until a later step?
How can I programmatically setup CodeDeploy with Cloudformation so it pulls the latest code from our git repo?
Create the CodeDeploy application, deployment group, etc. via CloudFormation when you create the rest of the infrastructure.
One of the parameters to your template would be the app package already sitting in an S3 CodeDeploy bucket, or the GitHub commit ID of a working release of your app.
In addition to the other methods available to you in CodeDeploy, you can use AWS CloudFormation templates to create applications, create deployment groups and specify a target revision, create deployment configurations, and create Amazon EC2 instances.
See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-cloudformation-templates.html
With this approach you can launch a working version of your app as you create your infrastructure. Use normal health checks so you can be sure your app is properly configured.
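As a minimal sketch (resource and parameter names such as CodeDeployServiceRole, AppAutoScalingGroup, DeployBucket and DeployPackageKey are placeholders, not from the original answer), the application and deployment group can reference the auto scaling group created in the same template, and the Deployment property triggers an initial deployment of the S3 package when the stack is created:

"CodeDeployApplication": {
  "Type": "AWS::CodeDeploy::Application"
},
"CodeDeployDeploymentGroup": {
  "Type": "AWS::CodeDeploy::DeploymentGroup",
  "Properties": {
    "ApplicationName": { "Ref": "CodeDeployApplication" },
    "ServiceRoleArn": { "Fn::GetAtt": ["CodeDeployServiceRole", "Arn"] },
    "AutoScalingGroups": [ { "Ref": "AppAutoScalingGroup" } ],
    "Deployment": {
      "Revision": {
        "RevisionType": "S3",
        "S3Location": {
          "Bucket": { "Ref": "DeployBucket" },
          "Key": { "Ref": "DeployPackageKey" },
          "BundleType": "Zip"
        }
      }
    }
  }
}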

Correct way to define k8s-user-startup-script

This is like a follow-up question of: Recommended way to persistently change kube-env variables
I was playing around with the possibility to define a k8s-user-startup-script for GKE instances (I want to install additional software to each node).
Adding k8s-user-startup-script to an Instance Group Template's "Custom Metadata" works, but it is overwritten by gcloud container clusters upgrade, which creates a new instance template without "inheriting" the additional k8s-user-startup-script metadata from the current one.
I've also tried adding a k8s-user-startup-script to the project metadata (I thought that would be inherited by all instances in my project, as described here), but it is not taken into account.
What is the correct way to define a k8s-user-startup-script that persists cluster upgrades?
Or, more general, what is the desired way to customize the GKE nodes?
Google Container Engine doesn't support custom startup scripts for nodes.
As I mentioned in Recommended way to persistently change kube-env variables you can use a DaemonSet to customize your nodes. A DaemonSet running in privileged mode can do pretty much anything that you could do with a startup script, with the caveat that it is done slightly later in the node bring-up lifecycle. Since a DaemonSet will run on all nodes in your cluster, it will be automatically applied to any new nodes that join (via cluster resize) and because it is a Kubernetes API object, it will be persisted across OS upgrades.
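As a rough sketch of that pattern (the manifest below is my illustration, not from the original answer; the image and the install command are placeholders, and what you can actually install depends on the node OS), a privileged DaemonSet can mount the node's root filesystem and run a one-off setup command on every node:

{
  "apiVersion": "apps/v1",
  "kind": "DaemonSet",
  "metadata": { "name": "node-setup", "namespace": "kube-system" },
  "spec": {
    "selector": { "matchLabels": { "app": "node-setup" } },
    "template": {
      "metadata": { "labels": { "app": "node-setup" } },
      "spec": {
        "containers": [
          {
            "name": "node-setup",
            "image": "debian:stable-slim",
            "securityContext": { "privileged": true },
            "volumeMounts": [ { "name": "host-root", "mountPath": "/host" } ],
            "command": ["/bin/sh", "-c", "chroot /host /bin/sh -c 'echo install-your-software-here'; sleep infinity"]
          }
        ],
        "volumes": [ { "name": "host-root", "hostPath": { "path": "/" } } ]
      }
    }
  }
}

Applying it with kubectl apply -f node-setup.json targets the current cluster, and any node added later by a resize or upgrade will run it automatically; the trailing sleep keeps the pod alive so the DaemonSet does not re-run the setup command in a crash loop.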

Azure Service Fabric join Virtual network

Is it possible to join an Azure Service Fabric cluster to a virtual network in Azure?
Not simply joining an existing virtual network, but also joining the VMs the cluster runs on to the domain, so that the cluster can use a domain user account to access network shares etc.
Yes. The default SF template creates a VNET for the cluster, so I assume you mean join to an existing VNET.
If you take a look at the sample template for SF, you can see the subnet0Ref variable, which is used to set the network profile of the NICs that are part of the newly created scale set. You can modify the template to look up your pre-existing subnet using the resourceId template expression function (see the documentation). Then you can drop all the other resources from the template that you don't need created, like the VNET itself.
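For example (the resource group, VNET, and subnet names here are placeholders I made up), the subnet id inside the scale set NIC's IP configuration could point at the pre-existing subnet instead of the subnet0Ref variable:

"subnet": {
  "id": "[resourceId('existing-network-rg', 'Microsoft.Network/virtualNetworks/subnets', 'existing-vnet', 'existing-subnet')]"
}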
Here is a document, written by our support engineer, highlighting how to use an existing virtual network / subnet:
https://blogs.msdn.microsoft.com/kwill/2016/10/05/azure-service-fabric-common-networking-scenarios/
We are merging this content into our official documentation. Sorry about the delay.