Azure Service Fabric: joining a Virtual Network

Is it possible to join an Azure Service Fabric cluster to a Virtual Network in Azure?
Not simply joining an existing Virtual Network, but also joining the VMs the cluster runs on into the domain, so that the cluster can use a domain user account to access network shares, etc.

Yes. The default SF template creates a VNET for the cluster, so I assume you mean joining an existing VNET.
If you take a look at the sample template for SF, you can see the subnet0Ref variable, which is used to set the network profile of the NICs that are part of the newly created scale set. You can modify the template to look up your pre-existing subnet using the resourceId template expression function (documentation). Then you can drop all the other resources from the template that you don't need created, like the VNET itself.
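As a minimal sketch of that change (the parameter names for the existing VNET are hypothetical), the subnet0Ref variable would be redefined against the pre-existing subnet, and the NIC configuration in the scale set keeps pointing at it:

    // Hypothetical parameters: existingVnetResourceGroup, existingVnetName, existingSubnetName.
    "variables": {
        "subnet0Ref": "[resourceId(parameters('existingVnetResourceGroup'), 'Microsoft.Network/virtualNetworks/subnets', parameters('existingVnetName'), parameters('existingSubnetName'))]"
    },

    // Elsewhere in the template, inside the scale set's network profile, nothing changes:
    "ipConfigurations": [
        {
            "name": "NicIpConfig",
            "properties": {
                "subnet": { "id": "[variables('subnet0Ref')]" }
            }
        }
    ]

With the variable pointing at the existing subnet, the VNET resource itself (and any dependsOn references to it) can be removed from the template.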

Here is a document, written by one of our support engineers, highlighting how to use an existing Virtual Network / subnet:
https://blogs.msdn.microsoft.com/kwill/2016/10/05/azure-service-fabric-common-networking-scenarios/
We are merging this content into our official documentation; sorry about the delay.

Related

Calling AWS services (S3, DynamoDB, Kinesis) from an ECS Fargate task created inside a VPC

I have an ECS Fargate cluster created inside a VPC.
If I want to access the above-mentioned AWS services from a Fargate task, what needs to be done?
I see the following options in the various pieces of documentation I have read:
Create a private link to each AWS service
Create a NAT gateway
I am not sure which option is correct and recommended.
To be clear, an ECS cluster is an abstracted entity and does not dictate where you connect the workloads you are running within it. If we stick to the Fargate launch type, this means that tasks could be launched either on a private subnet or on a public subnet:
If you launch them in a public subnet (and you assign a public IP to the tasks), then those tasks can reach the public endpoints of the services you mentioned and nothing else (from a network routing perspective) is required.
If you launch them in a private subnet, you have the two options you called out in your question.
I don't think there is a golden rule for what's best. The decision is multi-dimensional (cost, ease of setup, features, observability and control, etc.). I'd argue the NAT gateway route is easier to set up regardless of the number of services you need to add, but you lose a bit of visibility and all your traffic goes outside of the VPC (for some customers this is OK, for others it's not). PrivateLink endpoints give you tighter control, but they may be more work to set up (especially if you need to reach many services).
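To make the PrivateLink option concrete, here is a minimal CloudFormation sketch, under the assumption of hypothetical VpcId / PrivateRouteTableId / PrivateSubnetIds parameters (interface endpoints may also want an explicit security group). S3 and DynamoDB support gateway endpoints, while Kinesis requires an interface endpoint:

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description": "Sketch: VPC endpoints so private Fargate tasks can reach S3, DynamoDB and Kinesis",
      "Parameters": {
        "VpcId": { "Type": "AWS::EC2::VPC::Id" },
        "PrivateRouteTableId": { "Type": "String", "Description": "Route table used by the private subnets" },
        "PrivateSubnetIds": { "Type": "List<AWS::EC2::Subnet::Id>" }
      },
      "Resources": {
        "S3Endpoint": {
          "Type": "AWS::EC2::VPCEndpoint",
          "Properties": {
            "VpcId": { "Ref": "VpcId" },
            "ServiceName": { "Fn::Sub": "com.amazonaws.${AWS::Region}.s3" },
            "VpcEndpointType": "Gateway",
            "RouteTableIds": [ { "Ref": "PrivateRouteTableId" } ]
          }
        },
        "DynamoDbEndpoint": {
          "Type": "AWS::EC2::VPCEndpoint",
          "Properties": {
            "VpcId": { "Ref": "VpcId" },
            "ServiceName": { "Fn::Sub": "com.amazonaws.${AWS::Region}.dynamodb" },
            "VpcEndpointType": "Gateway",
            "RouteTableIds": [ { "Ref": "PrivateRouteTableId" } ]
          }
        },
        "KinesisEndpoint": {
          "Type": "AWS::EC2::VPCEndpoint",
          "Properties": {
            "VpcId": { "Ref": "VpcId" },
            "ServiceName": { "Fn::Sub": "com.amazonaws.${AWS::Region}.kinesis-streams" },
            "VpcEndpointType": "Interface",
            "PrivateDnsEnabled": true,
            "SubnetIds": { "Ref": "PrivateSubnetIds" }
          }
        }
      }
    }

The gateway endpoints cost nothing and work purely through route-table entries, which is one reason the cost dimension of the decision can differ per service.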

Can an Azure virtual machine scale set have a hidden public IP?

I have an AKS cluster with 2 node pools (one was added later). My problem is that only the first node pool (the one created with the cluster) has a public IP which can be found in the Azure portal (as a public IP resource). Is it possible to find the IP of the second node pool somewhere in the portal? I know what the IP is because I pinged one of my servers from a pod running on that node pool, but I need the resource (or at least its ID). I also tried searching for it using Azure Resource Explorer, but I couldn't find anything related to it. Is it hidden?
Sorry if the question seems dumb. I hope I was clear enough.
You are probably dealing with an ephemeral outbound IP: whenever there is no public IP attached to a VM, it gets assigned an ephemeral one for outgoing communication, and that address does not show up as a resource you own in the portal. You can read this article to get a better idea of how to control that: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections
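If you want the outbound address to be a public IP resource you own (and can therefore see in the portal), one mechanism covered in that article is an explicit outbound rule on the load balancer. A hedged ARM sketch follows; the variable names are hypothetical, and on AKS you would normally configure this through the cluster's outbound IP settings rather than by editing the load balancer directly:

    // Sketch only: an outboundRules entry under a Microsoft.Network/loadBalancers resource,
    // tying SNAT to a frontend IP configuration backed by a public IP resource you created.
    "outboundRules": [
        {
            "name": "explicitOutbound",
            "properties": {
                "protocol": "All",
                "idleTimeoutInMinutes": 4,
                "frontendIPConfigurations": [
                    { "id": "[variables('outboundFrontendIpConfigId')]" }
                ],
                "backendAddressPool": { "id": "[variables('nodeBackendPoolId')]" }
            }
        }
    ]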

How to access a Kubernetes service of one project from a pod of another project in GCP

I am facing a scenario where I have to access a Kubernetes service of GCP project X from a pod running in another GCP project Y.
I know we can access a service from one namespace in another namespace in the same project by using
servicename.namespacename.svc.cluster.local
How can I do something similar across different GCP projects?
I agree with @cperez08, but let me add my five cents.
I think you can try Set up clusters with Shared VPC:
With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnets, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnets, including secondary ranges, with the service projects. Components running in a service project can use the Shared VPC to communicate with components running in the other service projects.
You can use Shared VPC with both zonal and regional clusters. Clusters that use Shared VPC cannot use legacy networks and must have Alias IPs enabled.
You can configure Shared VPC when you create a new cluster. Google Kubernetes Engine does not support converting existing clusters to the Shared VPC model.
If I understood correctly, projects X and Y are completely different clusters, so I am not sure that is possible. Take a look at https://kubernetes.io/blog/2016/07/cross-cluster-services/ - maybe you can re-architect your services by federating them in case high availability is needed.
On the other hand, you can always access the resources through a public endpoint/domain if they are not connected in some other way.

How to fix VPC security settings

I am fairly new to AWS, so I am sure that I am just missing something, but here is my problem:
I have created a VPC with 3 subnets and one security group linked to all of them. The security group accepts inbound traffic from my machine. Next I created two RDS instances (both PostgreSQL), put them into that VPC, and linked them to the VPC security group. Weirdly, I can only connect to one of them; for the other one I get a generic time-out error.
Any idea on what I am missing? I can share any more details if needed.
EDIT: Both RDS instances are deployed on the same subnets and I am trying to connect from my machine on the internet.
To fix your issue, please verify the following:
Both RDS instances have been deployed into the same subnet.
If not, check that both subnets are public subnets and have a route to your internet gateway.
If one RDS instance (the one not working) is in a private subnet, you should consider using a bastion to access it, because by default a private subnet has no route to your internet gateway.
Still, below is a simple subnet design if you want to build something secure:
Create 2 public subnets if you want to deploy something directly accessible through the internet (a good practice is to deploy only managed instances there, like load balancers).
Create 2 private subnets with a NAT gateway and the correct route configuration to it (see the sketch after this list).
Create a bastion in your public subnets to be able to access your instances in the private ones.
Deploy your RDS instances into the private subnets and create one security group for each (or one for both if they are really linked).
You will find an AWS Quick Start that deploys the whole network stack for you: VPC Architecture - AWS Quick Start.
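As a hedged sketch of the route configuration mentioned in the list above (the MyVpc, MyNatGateway, and PrivateSubnet1 names are hypothetical), in CloudFormation a private subnet's default route points at the NAT gateway:

    {
      "Resources": {
        "PrivateRouteTable": {
          "Type": "AWS::EC2::RouteTable",
          "Properties": { "VpcId": { "Ref": "MyVpc" } }
        },
        "PrivateDefaultRoute": {
          "Type": "AWS::EC2::Route",
          "Properties": {
            "RouteTableId": { "Ref": "PrivateRouteTable" },
            "DestinationCidrBlock": "0.0.0.0/0",
            "NatGatewayId": { "Ref": "MyNatGateway" }
          }
        },
        "PrivateSubnet1Association": {
          "Type": "AWS::EC2::SubnetRouteTableAssociation",
          "Properties": {
            "RouteTableId": { "Ref": "PrivateRouteTable" },
            "SubnetId": { "Ref": "PrivateSubnet1" }
          }
        }
      }
    }

With this in place, instances in the private subnet can initiate outbound connections but remain unreachable from the internet except through the bastion.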

Changing an SF cluster's default RDP ports

I've created an SF cluster from the Azure portal, and by default it uses incrementing ports starting at 3389 for RDP access to the VMs. How can I change this to another range?
Additionally, or even alternatively, is there a way to specify the range when I create a cluster?
I realize this may not be so much an SF question as a Load Balancer or Scale Set question, but I ask in the context of SF because that is how I created this setup. In other words, I did not create the load balancer or scale set myself.
You can change this with an ARM template (Azure Resource Manager).
Since you will run into situations from time to time where you want to change parts of your infrastructure, I'd recommend creating the whole cluster from an ARM template instead of through the portal. By doing so, you could also create the cluster in an existing VNET, use internal load balancers, and so on.
To create the cluster from an ARM template, you can either start with the Azure Quickstart template or by clicking on "Export template" in the Azure Portal right before you would actually create the cluster.
To change the inbound NAT rules for RDP, edit the inboundNatPools section of the template.
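For example, here is a minimal sketch of that section, using the frontend IP configuration variable name from the quickstart template (adjust the names and the port range to your own setup), mapping external ports 4000-4499 to the VMs' RDP port 3389:

    "inboundNatPools": [
        {
            "name": "LoadBalancerBEAddressNatPool",
            "properties": {
                // Frontend IP configuration of the load balancer (variable name as in the quickstart template).
                "frontendIPConfiguration": {
                    "id": "[variables('lbIPConfig0')]"
                },
                "protocol": "tcp",
                // One external port per scale set instance; pick any free range instead of the 3389+ default.
                "frontendPortRangeStart": 4000,
                "frontendPortRangeEnd": 4499,
                // RDP on the VMs themselves stays on 3389.
                "backendPort": 3389
            }
        }
    ]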
If you want to change your existing cluster, you can either export your existing resource group as a template, or you can try to create a template that contains just the load balancer resource and re-deploy just that part.
Working with ARM templates takes some getting used to, but it has many advantages. It allows you to easily change settings that cannot be configured through the portal, to easily re-create the cluster for different environments, and so on.