How to Set IP to Static with PowerShell and Azure

I have an Azure DevTest Lab that I am deploying to Azure via PowerShell. I am able to deploy the ARM templates and join the test domain (not Azure AD) with no issues. The next step I would like to take is to set the IP to static. I can think of three possible ways to do this: figure out the IP structure beforehand and deploy with those settings; let DHCP assign the settings and then programmatically change them from dynamic to static using PowerShell DSC; or use some type of preferred lease from the DHCP server. These labs are meant to be stood up and torn down ad hoc. The IPs are internal, not public, and it is possible for me to know them beforehand. Could someone recommend which approach makes the most sense to pursue?

Well, there are several ways of looking at it. First of all, you can define the IP at deployment time by setting the allocation method to static instead of dynamic:
{
    "name": "xxx",
    "type": "Microsoft.Network/networkInterfaces",
    "apiVersion": "2016-10-01",
    "location": "loc",
    "properties": {
        "ipConfigurations": [
            {
                "name": "ipconfig1",
                "properties": {
                    "privateIPAllocationMethod": "Static",
                    "privateIPAddress": "ipgoeshere",
                    "subnet": {
                        "id": "subnetgoeshere"
                    }
                }
            }
        ]
    }
}
But this method is only valid if you know the available IP addresses beforehand; you will have to look those up and pass them to the template.
Another way of doing this is creating the NIC as dynamic, getting its IP address, and setting it to static. All of this can be done with an ARM template. The example is a bit too much to paste here, but you can check it here. Look for the deployments called "[concat(variables('vmNamePrefix'),'setStaticIp')]" and "[concat(variables('vmNamePrefix'),copyIndex(1),'-primaryIp')]", and their corresponding templates: getip and setip.
You can do pretty much the same with PowerShell. I don't have a script handy, but the logic is the same: deploy > get IP > set IP.
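That deploy > get IP > set IP flow could be sketched with the Az PowerShell module roughly as follows. This is an untested sketch, not part of the original answer; the resource group and NIC names are made up, and it assumes the current Az.Network cmdlets:

```powershell
# Sketch only: flip a deployed NIC from Dynamic to Static,
# keeping whatever address DHCP already assigned.
$nic = Get-AzNetworkInterface -ResourceGroupName 'myLabRg' -Name 'myVmNic'  # hypothetical names
$ipConfig = $nic.IpConfigurations[0]
$ipConfig.PrivateIpAllocationMethod = 'Static'   # the existing PrivateIpAddress is kept
Set-AzNetworkInterface -NetworkInterface $nic    # push the change back to Azure
```

The same pattern works after an ARM deployment: query the NIC the template created, then re-save it as static.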

Related

Azure DevOps IP addresses

I have an application running on a Web App that needs to communicate with an Azure DevOps Microsoft-hosted agent. I've set some IP restrictions to deny everything and am now in the process of whitelisting the agent's IPs. The page I read refers to a weekly JSON file that contains everything I need (CIDRs per region). I've parsed the JSON and added the ranges to my allow list; however, the agent's public IP address is not from any range mentioned in the JSON. The way I checked was running a bash task on the agent to curl icanhazip.com. Does anyone know if the list is complete, or should I look somewhere else?
For example, in my case I use this data (since my ADO org is in West Europe):
{
    "name": "AzureDevOps.WestEurope",
    "id": "AzureDevOps.WestEurope",
    "properties": {
        "changeNumber": 1,
        "region": "westeurope",
        "regionId": 18,
        "platform": "Azure",
        "systemService": "AzureDevOps",
        "addressPrefixes": [
            "40.74.28.0/23"
        ],
        "networkFeatures": null
    }
}
but the agent initiates the connection from the IP 20.238.71.171, which is not in any of the CIDRs provided by that JSON file (I checked all other regions with ADO as well).
Any thoughts / help?
You would need to whitelist ALL ranges from, for instance, Azure West Europe. Those are a lot of different IP ranges, as Azure DevOps hosted agents do not have a service tag.
Since this opens up your firewall to literally every VM running in West Europe, it is usually not desired; it is just a bit short of opening up your app to the entire world.
Hence, what people usually do is the following:
As the first task in the build job, fetch the public IP address of the executing build agent, using something like api.ipify.org
Use the Azure CLI to add that IP as a single-IP allow rule on your app
Do your deployment, etc.
Remove the IP rule again
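Those four steps could look roughly like the following pipeline script. This is a sketch, not tested against a live subscription; the resource group, app name, rule name, and priority are all placeholders:

```shell
# 1. Fetch the agent's current public IP (ipify is one such service)
AGENT_IP=$(curl -s https://api.ipify.org)

# 2. Add a single-IP allow rule to the Web App (names are hypothetical)
az webapp config access-restriction add \
  --resource-group myRg --name myWebApp \
  --rule-name agent-tmp --action Allow \
  --ip-address "${AGENT_IP}/32" --priority 300

# 3. ...run the deployment here...

# 4. Remove the rule again so the firewall closes behind the agent
az webapp config access-restriction remove \
  --resource-group myRg --name myWebApp --rule-name agent-tmp
```

Wrapping step 4 in an always-run cleanup task avoids leaving stale allow rules behind when a deployment fails.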
If you mean the MS-hosted agents: you should use the AzureCloud service tag.
The IP address ranges for the hosted agents are listed in the weekly file under AzureCloud.<region>, such as AzureCloud.westus for the West US region.
Docs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#networking
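The range check the asker did by hand is easy to script. For example, using Python's standard ipaddress module with the prefix and agent IP from the question:

```python
import ipaddress

# CIDRs from the weekly JSON for AzureDevOps.WestEurope,
# plus the agent IP observed via icanhazip.com
prefixes = ["40.74.28.0/23"]
agent_ip = ipaddress.ip_address("20.238.71.171")

# True if the agent IP falls inside any listed prefix
in_listed_ranges = any(
    agent_ip in ipaddress.ip_network(p) for p in prefixes
)
print(in_listed_ranges)  # 20.238.71.171 is outside 40.74.28.0/23, hence False
```

Extending `prefixes` with every AzureCloud.westeurope entry from the weekly file turns this into a quick sanity check for a whole allow list.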

Azure REST API: Network Security Group / Network Interface

I am trying to build a proof-of-concept integration with Azure Cloud into another system. I am not an Azure subject matter expert, so I am struggling with the end-to-end integration.
I am having trouble associating a "Network Security Group" with a "Network Interface". I am able to create both, but they do not associate with each other until I go into the Cloud Portal and associate them manually.
I am using the following:
API Documentation:
https://learn.microsoft.com/en-us/rest/api/compute/virtualmachines
API Explorer:
https://resources.azure.com
I am calling the following end-points in order:
publicIPAddresses
https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.Network/publicIPAddresses/{resourceName}?api-version=2018-07-01
networkInterfaces
https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.Network/networkInterfaces/{resourceName}?api-version=2018-07-01
networkSecurityGroups
https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.Network/networkSecurityGroups/{resourceName}?api-version=2018-07-01
virtualMachines : https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/virtualMachines/{resourceName}?$expand=instanceView&api-version=2018-06-01
Everything else works except the NSG associating to the NIC.
Within the "networkSecurityGroups" message, I pass in the following parameter under the properties node.
"networkInterfaces": [{
"id": "/subscriptions/" + subscriptionID + "/resourceGroups/" + resourceGroup + "/providers/Microsoft.Network/networkInterfaces/" + networkInterfaces
}
]
I've tried reversing it by referencing the NSG in the interface REST call, but it still doesn't work. Oddly enough, I use the same syntax to associate the interface with the VM itself, and that works as expected. Variations of the same syntax work for associating the public IP with the interface, disks with the VM, etc.
Any thoughts?
Pretty sure you need to add this under the NIC's properties section:
"networkSecurityGroup": {
    "id": "NSG_Resource_Id"
}
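In other words, the PUT body for the networkInterfaces call carries the NSG reference under properties, alongside the ipConfigurations; the NSG is referenced from the NIC, not the other way around. A small sketch of assembling that payload (the subscription, group, and resource names are placeholders):

```python
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
resource_group = "myRg"
nsg_name = "myNsg"

nsg_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Network/networkSecurityGroups/{nsg_name}"
)

# Body for PUT .../networkInterfaces/{name}?api-version=2018-07-01
nic_body = {
    "location": "eastus",
    "properties": {
        "networkSecurityGroup": {"id": nsg_id},
        "ipConfigurations": [
            {
                "name": "ipconfig1",
                "properties": {
                    "subnet": {"id": "subnet-id-goes-here"},
                    "privateIPAllocationMethod": "Dynamic",
                },
            }
        ],
    },
}
print(nic_body["properties"]["networkSecurityGroup"]["id"])
```

Sending this body to the networkInterfaces endpoint performs the association in a single call, with no portal step needed.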

Extending S/4HANA OData service to SCP

I want to extend a custom OData service created in a S/4HANA system. I added a Cloud Connector to my machine, but I don't know how to go from there. The idea is that I want people to access the service from SCP and that I don't need multiple accounts accessing the service on the S/4 system, but just the one coming from SCP. Any ideas?
OK, I feel silly doing this, but it seems to work. My test is actually inconclusive because I don't have a cloud connector handy, but it works proxying Google.
I'm still thinking about how to make it publicly accessible. There might be people with better answers than this.
Create the cloud connector destination.
Make a new folder in Web IDE.
Create the file neo-app.json with this content:
{
    "routes": [{
        "path": "/google",
        "target": {
            "type": "destination",
            "name": "google"
        },
        "description": "google"
    }],
    "sendWelcomeFileRedirect": false
}
path is the proxy path in your app, so myapp.scp-account/google here. The target name is your destination; I called mine just google, but you'll put your cloud connector destination there.
Deploy.
My test app with a destination google pointing to https://www.google.com came out looking like this. The paths are relative, so it doesn't fully work, but Google does seem to be proxied.
You'll still have to authenticate, etc.

Azure Template Deployment: What does "ContentLink cannot be null" mean?

I'm deploying a resource group to Azure consisting of a VM, a network, an Automation Account with some runbooks, among other things, using a JSON Template.
I'm getting the following errors,
New-AzureRmResourceGroupDeployment : 4:49:23 PM - Resource
Microsoft.Automation/automationAccounts/runbooks
'DeployAutomationName/AzureClassicAutomationTutorial' failed with
message '{ "code": "BadRequest", "message":
"{\"Message\":\"Invalid argument specified. Argument contentLink
cannot be null.\"}"
As well as:
New-AzureRmResourceGroupDeployment : 4:49:23 PM - Resource
Microsoft.Automation/automationAccounts/modules
'DeployAutomationName/Microsoft.WSMan.Management' failed with message
'{ "code": "BadRequest", "message": "{\"Message\":\"The
ContentLink property must be supplied in PUT or re-PUT operations.\"}"
}'
These two errors repeat for all sorts of different "Assets" ( I think that's the term) of my automation account. So for modules, runbooks, certificates, and connections.
What is a contentLink, and how can I make sure it's not null? "ContentLink" appears nowhere in my template, nor can I find any explanation on the internet of what exactly a contentLink is, besides this. Furthermore, I'm assuming that "PUT" or "re-PUT" is part of the REST API that delivers the template, and I have no direct control over this process either. Of what use is an error message that describes problems I have no direct control over?
This problem is symptomatic of many of the difficulties I've had successfully troubleshooting Azure templates: the error messages seem to describe Azure internals that I have no understanding of nor access to. How can I troubleshoot or debug when I don't have access to the code that is actually throwing these exceptions, nor an explanation of what the exceptions mean?
Thanks! Here is my template; I would have copied only the relevant part, but I haven't a clue what is relevant and what's not.
OK, so after poking around a little bit, it looks like you are missing the runbook content (the script itself). Your runbook resource should look like this:
{
    "type": "Microsoft.Automation/automationAccounts/runbooks",
    "name": "[parameters('runbooks_AzureAutomationTutorial_name')]",
    "apiVersion": "2015-10-31",
    "location": "eastus2",
    "properties": {
        "runbookType": "GraphPowerShell",
        "logVerbose": false,
        "logProgress": false,
        "publishContentLink": {
            "uri": "[variables('scriptUri')]",
            "version": "1.0.0.0"
        }
    },
    "resources": [],
    "dependsOn": [
        "[resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccounts_deployautomation_name_1'))]"
    ]
},
and the variable:
"variables": {
"scriptUri": "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-automation-runbook-getvms/Runbooks/Get-AzureVMTutorial.ps1",
},
I can't test the whole template, as I don't have the base64 values, but I believe this should solve your issue. There might be another one after this, though ;) who knows.
Reference data: https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-automation-runbook-getvms/azuredeploy.json
Also, you can just remove the modules from the template, as they are not required (they are all default ones), but for them the idea is the same: you are deploying the module without supplying the module data.
And you might be missing other mandatory properties here and there; it looks like Automation Script doesn't really work well with Azure Automation yet. You might want to resort to PowerShell to provision the Automation Account, as that works perfectly fine.
P.S. I have no idea what the content of the graphical runbook looks like, but I'd guess you can export it, upload it to GitHub, and it would work.

Managing application configuration in a chef environment cookbook

I am new to Chef and have been struggling to find best practices for managing application configuration in an environment cookbook [source #1].
The environment cookbook I'm working on should do the following:
Prepare the node for a custom application deployment by creating directories, users, etc. that are specific for this deployment only.
Add initialization and monitoring scripts specific for the application deployment.
Define the application configuration settings.
This last responsibility has been a particularly tough nut to crack.
An example configuration file of an application deployment might look as follows:
{
    "server": {
        "port": 9090
    },
    "session": {
        "proxy": false,
        "expires": 100
    },
    "redis": [{
        "port": 9031,
        "host": "rds01.prd.example.com"
    }, {
        "port": 9031,
        "host": "rds02.prd.example.com"
    }],
    "ldapConfig": {
        "url": "ldap://example.inc:389",
        "adminDn": "CN=Admin,CN=Users,DC=example,DC=inc",
        "adminUsername": "user",
        "adminPassword": "secret",
        "searchBase": "OU=BigCustomer,OU=customers,DC=example,DC=inc",
        "searchFilter": "(example=*)"
    },
    "log4js": {
        "appenders": [
            {
                "category": "[all]",
                "type": "file",
                "filename": "./logs/myapp.log"
            }
        ],
        "levels": {
            "[all]": "ERROR"
        }
    },
    "otherService": {
        "basePath": "http://api.prd.example.com:1234/otherService",
        "smokeTestVariable": "testVar"
    }
}
Some parts of this deployment configuration file are more stable than others. While this may vary depending on the application and setup, I prefer to keep things like port numbers and usernames the same across environments for simplicity's sake.
Let me classify the configuration settings:
Stable properties
session
server
log4js.appenders
ldapConfig.adminUsername
ldapConfig.searchFilter
otherService.basePath
redis.port
Environment specific properties
log4js.levels
otherService.smokeTestVariable
Partial-environment specific properties
redis.host: rds01.[environment].example.com
otherService.basePath: http://api.[environment].example.com:1234/otherService
Encrypted environment specific properties
ldapConfig.adminPassword
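The partial-environment-specific properties lend themselves to simple interpolation of the environment name rather than storing a full value per environment. A sketch of that derivation (the environment name "stg" is just an example):

```python
env = "stg"  # e.g. "prd", "stg", "dev"; supplied per environment

# Derive the partial-environment-specific properties from the environment name
redis_hosts = [f"rds{i:02d}.{env}.example.com" for i in (1, 2)]
other_service_base = f"http://api.{env}.example.com:1234/otherService"

print(redis_hosts[0])      # rds01.stg.example.com
print(other_service_base)  # http://api.stg.example.com:1234/otherService
```

With this approach only the environment name itself needs to vary, shrinking the set of truly environment-specific settings.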
Questions
How should I create the configuration file? Some options: 1) use a file shipped within the application deployment itself, 2) use a cookbook file template, 3) use a JSON blob as one of the attributes [source #2], 4)... other?
There is a great deal of variability in the configuration file; how best to manage it using Chef? Roles, environments, per-node configuration, data bags, encrypted data bags...? Or should I opt for environment variables instead?
Some key concerns in the approach:
I would prefer there is only 1 way to set the configuration settings.
Changing the configuration file for a developer should be fairly straightforward (they are using Vagrant on their local machines before pushing to test).
The passwords must be secure.
The Chef cookbook is managed within the same git repository as the source code.
Some configuration settings require a great deal of flexibility; for example the log4js setting in my example config might contain many more appenders with dozens of fairly unstructured variables.
Any experiences would be much appreciated!
Sources
http://blog.vialstudios.com/the-environment-cookbook-pattern/
http://lists.opscode.com/sympa/arc/chef/2013-01/msg00392.html
http://jtimberman.housepub.org/blog/2013/01/28/local-templates-for-application-configuration/
http://realityforge.org/code/2012/11/12/reusable-cookbooks-revisited.html
Jamie Winsor gave a talk at ChefConf that goes further in explaining the environment cookbook pattern's rationale and usage:
Chefcon: talking about self-contained releases, using chef
Slides
In my opinion, one of the key concepts this pattern introduces is the idea of using Chef environments to control the settings of each application instance. The environment is updated, using Berkshelf, with the run-time versions of the cookbooks being used by the application.
What is less obvious is that if you decide to reserve a Chef environment for the use of a single application instance, then it becomes safe to use that environment to configure the application's global run-time settings.
An example is given in the berkshelf-api installation instructions. There you will see the production environment (for the application) being edited with various run-time settings:
knife environment edit berkshelf-api-production
In conclusion, Chef gives us lots of options. I would make the following generic recommendations:
Capture defaults in the application cookbook
Create an environment for each application instance (as recommended by pattern)
Set run-time attribute overrides in the environment
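The effect of those environment overrides is essentially a deep merge of the cookbook's default attributes with the environment's override attributes. A plain-Ruby sketch of that precedence, using values from the example config above (this illustrates the merge semantics only, not Chef's actual implementation):

```ruby
# Cookbook defaults (the stable properties)
defaults = {
  'server' => { 'port' => 9090 },
  'log4js' => { 'levels' => { '[all]' => 'ERROR' } }
}

# Per-environment overrides (the environment-specific properties)
env_overrides = {
  'log4js' => { 'levels' => { '[all]' => 'DEBUG' } }
}

# Recursive merge: override values win, nested hashes are merged
def deep_merge(base, override)
  base.merge(override) do |_key, old_val, new_val|
    old_val.is_a?(Hash) && new_val.is_a?(Hash) ? deep_merge(old_val, new_val) : new_val
  end
end

effective = deep_merge(defaults, env_overrides)
puts effective['log4js']['levels']['[all]']  # => DEBUG (overridden)
puts effective['server']['port']             # => 9090 (default kept)
```

This is why capturing sensible defaults in the application cookbook keeps each environment definition small: the environment only needs to state what differs.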
Notes:
See also the berksflow tool. Designed to make the environment cookbook pattern easier to implement.
I have made no mention of using roles. These can also be used to override attributes at run-time, but it might be simpler to capture everything in a dedicated Chef environment. Roles seem better suited to capturing information peculiar to a component of an application.