Add 2 network interfaces to some VMs based on a naming convention (Bicep)

How can I add 2 network interfaces to a few VMs, based on the naming convention, in Bicep?
For example:
if the VM name contains xyz, it should get 2 NICs, and the rest should get 1 NIC.
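One possible approach, sketched below in Bicep under assumed names (location, subnetId, vmNames, and the 'xyz' marker are placeholders, not from the question): compute each VM's NIC count from its name with contains(), flatten the (VM, NIC index) pairs into a single list, and loop over that list.
// Minimal sketch; parameter names are hypothetical, adjust to your environment.
param location string = resourceGroup().location
param subnetId string
param vmNames array = [
  'vm-xyz-01' // contains 'xyz' -> gets 2 NICs
  'vm-abc-01' // otherwise -> gets 1 NIC
]

// One NIC name per (VM, index) pair: two entries for 'xyz' VMs, one for the rest.
var nicNames = flatten([for vmName in vmNames: [for i in range(0, contains(vmName, 'xyz') ? 2 : 1): '${vmName}-nic${i}']])

resource nics 'Microsoft.Network/networkInterfaces@2022-07-01' = [for nicName in nicNames: {
  name: nicName
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'ipconfig1'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            id: subnetId
          }
        }
      }
    ]
  }
}]
Each VM resource would then list '${vmName}-nic0' in its networkInterfaces array and, when contains(vmName, 'xyz') is true, '${vmName}-nic1' as well, with primary set to true on only the first NIC.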

Service XXX was unable to place a task because no container instance met all of its requirements; instance XXX is already using a port required by your task

service crm was unable to place a task because no container instance met all of its requirements. The closest matching container-instance e45856e4821149XXXXXXXXX is already using a port required by your task.
Is there any way to resolve this? Currently I am trying to run 4 task definitions. I have referred to the AWS documents below, but I am not sure which solution would be ideal for this issue. How do I set up dynamic port mapping?
Registered ports: ["22","4000","2376","2375","51678","51679"]
https://aws.amazon.com/premiumsupport/knowledge-center/dynamic-port-mapping-ecs/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-instance-requirement-error/
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-event-messages.html#service-event-messages-1
I tried the AWS docs above for this issue, but I am not sure how to resolve the port conflict.
If you specify host ports in your task definition's port mappings, you occupy those ports on the host. If you do not specify a host port (and only specify the container port), a host port is dynamically allocated for you automatically.
So: don't specify the host port in the task definition.
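For illustration, a minimal container definition with only a container port might look like the fragment below (the family, container name, and image are made-up placeholders; this assumes the EC2 launch type with the default bridge network mode, where omitting hostPort, or setting it to 0, makes ECS pick a free ephemeral host port):
{
  "family": "crm-task",
  "containerDefinitions": [
    {
      "name": "crm-app",
      "image": "example/crm-app:latest",
      "memory": 512,
      "portMappings": [
        { "containerPort": 4000 }
      ]
    }
  ]
}
With this mapping, several copies of the task can be placed on the same container instance, since each task receives a different host port; the target group keeps track of which port each task got.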
The target group associated with your service can then be used to reach the tasks dynamically, e.g. from a load balancer or other resources that support target groups.
Or you can create more instances in your autoscaling group so that your task can be placed on an instance where the port is not in use. You can use capacity providers to automatically create new instances when needed. Though, this is likely far less efficient than dynamic port mapping, depending on the performance characteristics of your workloads.

Map service roles and replicas to servers with Azure DevOps Release

My project is a Windows Service application which can be installed in several roles (the differences are the service name, the exe path, and some settings in app.config). Each role can be scaled horizontally by instance count. And all these {roles x replica counts} should be deployed over a set of servers in specific proportions for effective performance and utilization.
As an example:
ServerA
  ServiceAlfa.1
  ServiceAlfa.2
  ServiceBravo
  ServiceDelta
ServerB
  ServiceBravo
  ServiceCharlie
  ServiceDelta.1
  ServiceDelta.2
  ServiceDelta.3
How can I achieve this with Azure DevOps (Dev17.M153.5) instruments?
I know the brand new YAML pipelines introduce the concept of Environments and VMs. It's just not available in the latest stable version yet. But it looks like a replacement for the Deployment Groups that were used earlier for deployment to multiple machines, which I can use. I have already installed the deployment agents and registered them. But I still cannot figure out how best to configure my complex mapping of instances to servers in a release pipeline.
I can create a one-job stage per role and link each stage with a corresponding variable group, like:
StageAlfa
  ServerA:2
StageBravo
  ServerA:1
  ServerB:1
StageCharlie
  ServerB:1
StageDelta
  ServerA:1
  ServerB:3
So I would have to check and compare the server name in my script.
Or I can do the opposite: create a stage per machine and link it with a variable group describing the count of each role's replicas on that server. Then in every stage I could select the specific machine from the deployment group by tag.
The second approach looks simpler, but both feel so awkward!
P.S. These are Windows Services on machines, not containers in Kubernetes, due to specific Windows software dependencies.
Your approaches are correct. You may consider migrating to Azure DevOps Services or upgrading to Azure DevOps Server 2020, which supports Environments and VMs:
https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2020?view=azure-devops#continuous-deployment-in-yaml
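With Environments available, the mapping could look roughly like this sketch: one deployment job per role, targeting VMs in the environment by tag (the environment name 'Production', the tag 'alfa', the variable group 'AlfaSettings', and the variable 'AlfaReplicas' are all made-up names):
# Sketch only; names below are hypothetical.
jobs:
- deployment: DeployServiceAlfa
  displayName: Deploy ServiceAlfa replicas
  environment:
    name: Production
    resourceType: VirtualMachine
    tags: alfa # runs on every VM in the environment tagged 'alfa'
  variables:
  - group: AlfaSettings # e.g. defines AlfaReplicas
  strategy:
    runOnce:
      deploy:
        steps:
        - powershell: |
            # Install the configured number of service instances on this machine
            Write-Host "Installing $(AlfaReplicas) ServiceAlfa instance(s) on $(Agent.MachineName)"
This keeps the role-to-server mapping in the environment's VM tags rather than in per-stage scripts.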

CloudFormation for multiple parameter files and a single template

I am currently storing all my parameters in Systems Manager Parameter Store and referencing them in a CloudFormation stack.
I am now stuck in a scenario where the parameters vary for the same CloudFormation template.
For instance, server A has the parameters m5.large instance type, subnet 1, host name 1; likewise, server B can have m5.xlarge, subnet 2, host name 2, and so on. These two parameter sets are for the same CFN template.
How can I handle this situation in a CI/CD manner?
My current setup involves SSM Parameter Store -> CloudWatch Events -> CodePipeline -> CloudFormation.
I am assuming you use AWS CodePipeline. Each CodePipeline stage consists of multiple stage actions. An action can be configured to include the CloudFormation template, but a template configuration can be provided as well. If you define the server name as a parameter in the CloudFormation stack, then you can provide a different template configuration per server.
Assuming you define only one server in the CloudFormation stack and use the template twice in your pipeline, you can provide a different configuration to each of the two stage actions. Based on this configuration you can decide which parameter in the Parameter Store you want to retrieve. Of course, this implies that your Parameter Store keys should be parameterized as well: e.g., instead of a parameter instancetype you might have parameters servera/instancetype and serverb/instancetype.
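One way to express that in the template itself is an SSM-backed parameter type, where the value supplied to the stack is the Parameter Store key (the key names below are illustrative, not from the question):
# Sketch: /servera/instancetype and /serverb/instancetype are assumed SSM keys.
Parameters:
  InstanceType:
    Type: 'AWS::SSM::Parameter::Value<String>'
    Default: /servera/instancetype # override with /serverb/instancetype for server B
Resources:
  Server:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-12345678 # placeholder AMI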
However, I think it is best if you just define the parameter values in the template configuration file provided to the action declaration. So, for example, define the parameter instancetype in your CloudFormation template and use two different template configuration files (one for each stack), where the first file might say instancetype: m5.large and the second instancetype: m5.xlarge. This makes your CloudFormation stack history more explicit and easier to read, and it makes using the Parameter Store for non-secrets unnecessary.
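A template configuration file for the CloudFormation action is a small JSON document. For server A it might look like the sketch below (the parameter names are placeholders and must match the template's Parameters section); server B would get its own file with m5.xlarge, subnet 2, and host name 2:
{
  "Parameters": {
    "InstanceType": "m5.large",
    "SubnetId": "subnet-0123456789abcdef0",
    "HostName": "server-a"
  }
}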

How to add a new Node Type to a deployed Service Fabric cluster?

I deployed a Service Fabric cluster running a single application, with 3 node types of 5 machines each, each node type with its own placement constraint.
I need to add 2 more node types (virtual machine scale sets). How can I do that from the Azure portal?
The Add-AzureRmServiceFabricNodeType command can add a new node type to an existing Service Fabric cluster.
Note that the process can take roughly an hour to complete, since it creates one resource at a time starting with the cluster. It will create a new load balancer, public IP address, storage accounts, and virtual machine scale set.
# Placeholder password and names; substitute your own values.
$password = ConvertTo-SecureString -String 'Password$123456' -AsPlainText -Force
Add-AzureRmServiceFabricNodeType `
-ResourceGroupName "resource-group" `
-Name "cluster-name" `
-NodeType "nodetype2" `
-Capacity 2 `
-VmUserName "user" `
-VmPassword $password
Things to consider:
Check your quotas beforehand to ensure you can create the new virtual machine scale set instances, or you will get an error and the whole process will roll back
Node type names have a limit of nine characters when creating a cluster via the portal blade; the same restriction may apply when using the PowerShell command
The command was introduced in v4.2.0 of the AzureRM PowerShell module, so you may need to update your module
You can also add a new node type by creating a new cluster with the Azure portal wizard and updating your DNS records, or by modifying the ARM template, but the PowerShell command is clearly the easiest option.
For those reading this in 2022 or later, there is a newer PowerShell command for this:
Add-AzServiceFabricNodeType
And there is also an Azure CLI command: az sf cluster node-type add
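A sketch of the Az-module equivalent of the earlier example (resource names and credentials are placeholders):
# Placeholder names and credentials; adjust to your cluster.
$password = ConvertTo-SecureString -String 'Password$123456' -AsPlainText -Force
Add-AzServiceFabricNodeType `
    -ResourceGroupName 'resource-group' `
    -Name 'cluster-name' `
    -NodeType 'nodetype3' `
    -Capacity 2 `
    -VmUserName 'user' `
    -VmPassword $password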
Another option is to use New-AzureRmResourceGroupDeployment with an updated ARM template that includes the new node types as well as all the resources they require.
What's nice about using the PowerShell command is that it takes care of the manual work of creating and associating those resources with the new node types.

What is called a Node in a WebSphere Network Deployment?

In an installation of WebSphere Application Server with Network Deployment, a node is:
a physical machine
an instance of an operating system
a logical set of WAS instances that is independent of any physical machine or OS instance
Basically:
A server is a runtime environment, a process of execution.
A node is a grouping of servers that share a common configuration. It usually corresponds to a physical machine.
A cell is a grouping of nodes into a single administrative domain. For WebSphere, this means that if you group several servers within a cell, you can administer them with one WebSphere admin console.
Hope this helps!
@ggasp Here is what I got from IBM's Information Center:
A node is a logical grouping of managed servers.
A node usually corresponds to a logical or physical computer system with a distinct IP host address. Nodes cannot span multiple computers.
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.nd.multiplatform.doc/info/ae/ae/cagt_node.html
Keep in mind that "usually" does not mean "always".
Since WAS 6.0 and up, you usually want to set up more than one node on each physical computer; given the power of typical servers, you use nodes to separate logical business entities.
For example, with 6 nodes, 3 on each of 2 machines, you could pair up the nodes to define 3 different clusters, one for each stage (dev, qa, staging), making each cluster invisible to the others.
A Cell is a virtual unit that is built of a Deployment Manager and one or more Nodes. A Node is another virtual unit that is built of a Node Agent and one or more Server instances.
Here you can find more details including a diagram.