dataDisk.image parameter in Azure Resource Template - powershell

Is there any Azure Resource Template documentation? I am trying to recreate a VM using a Resource Template, and the only thing I am missing is creating a data disk from an image the same way the OS disk is created. I edited the JSON template:
"dataDisks": [
{
"lun": 0,
"name": "[concat(parameters('virtualMachines_testVM_name'),'-disk-1')]",
"createOption": "FromImage",
"vhd": {
"uri": "[concat('https', '://', parameters('storageAccounts_rmtemplatetest6221copy_name'), '.blob.core.windows.net', concat('/vhds/', parameters('virtualMachines_testVM_name'),'-disk-1-201649102835.vhd'))]"
},
"caching": "ReadWrite"
}
]
But I get the following error in Azure when deploying the template:
Required parameter 'dataDisk.image' is missing
So far the only way I have managed to recreate the data disk was to delete the above code from the JSON template and then use PowerShell after the machine is created without the data disk, but I would like to automate the deployment with the Resource Template only.

In the Azure quick start templates you can find JSON template for creating VM using custom images, including Data disks:
https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-user-image-data-disks
Just one very important note - the target storage account should be the same account where your VHDs reside.
There is no standalone documentation for the JSON schema. The best source is to check out the schema itself:
https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json
http://schema.management.azure.com/schemas/2015-08-01/Microsoft.Compute.json#/resourceDefinitions/virtualMachine
UPDATE
When you create a VM based on a custom image, including data disks, you must create the entire VM in the same storage account where your custom data disks reside. There is no option, as of today (2016-05-10), to instruct ARM to copy VHDs across storage accounts.
All of the above applies if you want to create a VM from a custom image with data disks.
If you just want to create the VM with new, empty data disks, then you can use the following quick start template:
https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-multiple-data-disk
where you only define the desired size of the data disks and where they should be stored.
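For reference, a minimal sketch of what such an empty data disk definition looks like in the template. The disk name, the size of 100 GB, and the VHD URI below are illustrative (reusing the parameter names from the question), not taken from the quick start template itself:
"dataDisks": [
  {
    "name": "[concat(parameters('virtualMachines_testVM_name'),'-datadisk-1')]",
    "lun": 0,
    "createOption": "Empty",
    "diskSizeGB": 100,
    "caching": "ReadWrite",
    "vhd": {
      "uri": "[concat('https://', parameters('storageAccounts_rmtemplatetest6221copy_name'), '.blob.core.windows.net/vhds/', parameters('virtualMachines_testVM_name'), '-datadisk-1.vhd')]"
    }
  }
]
With createOption set to Empty, ARM provisions a blank VHD of diskSizeGB gigabytes at the given URI instead of copying an image.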

The problem you are having is that the template is configured to make a copy of an image (createOption is FromImage), but no image is specified.
You need to either set createOption to FromImage and specify an image:
"dataDisks": [
{
"name": "[concat(variables('vmName'),'-dataDisk')]",
"lun": 0,
"createOption": "FromImage",
"image": {
"uri": "[variables('dataDiskUrl')]"
},
"vhd": {
"uri": "[variables('dataDiskVhdName')]"
}
}
],
or, if you just want to use an existing disk, you can use attach (you can also use empty in this configuration, and it will create an empty disk):
"dataDisks": [
{
"name": "[concat(variables('vmName'),'-dataDisk')]",
"lun": 0,
"createOption": "attach",
"vhd": {
"uri": "[variables('dataDiskVhdName')]"
}
}
],

Related

How to log with Serilog to a remote server?

I'm writing a .NET Core 6 Web API and decided to use Serilog for logging.
This is how I configured it in appsettings.json:
"Serilog": {
"Using": [ "Serilog.Sinks.File" ],
"MinimumLevel": {
"Default": "Information"
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "../logs/webapi-.log",
"rollingInterval": "Day",
"outputTemplate": "[{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} {CorrelationId} {Level:u3}] {Username} {Message:lj}{NewLine}{Exception}"
}
}
]
}
This is working fine; it's logging inside a logs folder in the root.
Now I've deployed my API to a Staging K8s cluster and don't want my logs to be stored on the pod but rather on the Staging server. Is it possible? I can't find many useful posts about it, so I assume there is a better way to achieve it.
Based on Panagiotis' 2nd suggestion, I spent about a week trying to set up Elasticsearch with Fluentd and Kibana, with no success.
It turned out that the simplest and easiest solution was his 1st one: all I needed was a PersistentVolume and a PersistentVolumeClaim. This post helped me with the setup: How to store my pod logs in a persistent storage?
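For anyone looking for a starting point, here is a rough sketch of the kind of PersistentVolumeClaim that approach relies on. The claim name and size are placeholders, and the manifest is written in JSON to match the rest of this post (YAML works just as well); the cluster's default storage class is assumed:
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "webapi-logs-pvc"
  },
  "spec": {
    "accessModes": [ "ReadWriteOnce" ],
    "resources": {
      "requests": {
        "storage": "1Gi"
      }
    }
  }
}
The claim is then referenced from the Deployment's volumes section and mounted at the directory Serilog writes to (the logs folder from the configuration above), so the log files survive pod restarts.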

AWS CF Template for provisioning new EC2 instance with Existing Targetgroup

I'm trying to provision a new EC2 instance and attach it to an existing target group which I created earlier. I do have the targetGroupArn and the new instance details. In all the documentation, I could only find sample templates that create a new instance and a new target group and reference the instance ID in the newly created target group.
Is there any option to create only the new instance and attach it to the existing target group?
That way I can specify the targetGroupArn to which the new instances will be attached.
Below is a sample template of what I'm looking for:
{
  "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
  "Properties": {
    "TargetGroupArn": "arn:aws:elasticloadbalancing:ap-south-1:4444444444:targetgroup/MyTG/56abcccccccc56",
    "Targets": [
      {
        "Id": {
          "Ref": "ec2Instance1"
        }
      }
    ]
  }
}
Is there any option to create only the new instance and attach it to the existing target group?
Yes, you can do that through a custom resource in the form of a Lambda function. The function would use an AWS SDK, such as boto3, to attach the instance to the existing target group.
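As a rough illustration of that idea (the logical resource names and the backing Lambda function here are placeholders you would define yourself), the custom resource in the template could pass the existing target group ARN and the new instance ID to the function:
"AttachInstanceToTargetGroup": {
  "Type": "Custom::TargetGroupAttachment",
  "Properties": {
    "ServiceToken": { "Fn::GetAtt": [ "AttachInstanceFunction", "Arn" ] },
    "TargetGroupArn": "arn:aws:elasticloadbalancing:ap-south-1:4444444444:targetgroup/MyTG/56abcccccccc56",
    "InstanceId": { "Ref": "ec2Instance1" }
  }
}
On create, the Lambda function would read TargetGroupArn and InstanceId from the event and call the Elastic Load Balancing v2 RegisterTargets API (register_targets in boto3); on delete, it would call DeregisterTargets, and in both cases it would signal success back to CloudFormation.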

Is there a way to read the parameter store secure variables inside the ECS JSON environment sections based on FARGATE

I am setting up ECS services to launch my application, which talks to an RDS database server. I need to pass the database access properties such as username, password, database name, etc. to the application code running in the Fargate instances. To pass them, I have created these parameters in Parameter Store, but I need to find a way to get them from Parameter Store and pass them to the ECS task definition's environment variable properties.
In the ECS task definition, I have tried to modify the JSON file's environment property with parameters such as "name" and "valueFrom", but it seems that "valueFrom" is not accepted in the JSON file; it pops up an error saying "Cannot read property 'replace' of undefined".
"environment": [
{
"name": "POSTGRES_DB",
"valueFrom": "PROD_POSTGRES_DB"
}
],
I expect the POSTGRES_DB variable to read its value from the PROD_POSTGRES_DB parameter defined in the AWS Parameter Store.
When you use SSM Parameter Store in an ECS task definition for valueFrom environment variables, they go into a separate secrets section under containerDefinitions. So it will look like below:
"containerDefinitions": [
{
"secrets": [
{
"name": "POSTGRES_DB",
"valueFrom": "PROD_POSTGRES_DB"
}
],
"environment": [
{
"valueFrom": "myKey",
"name": "myValue"
}
],
}
]
For the normal plain-value environment variables, you use the usual environment JSON array with name and value.
Note -
When you use SSM Parameter Store, you have to make sure the task execution role has the necessary SSM permissions attached to it. Reference - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
Also, provide the full SSM parameter ARN if your ECS region is different from the SSM region.
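For reference, a minimal sketch of the kind of policy statement the task execution role needs might look like the following. The region, account ID, and parameter name in the ARN are placeholders; if the parameter is a SecureString encrypted with a customer managed KMS key, you would also need kms:Decrypt on that key:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:GetParameters",
      "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/PROD_POSTGRES_DB"
    }
  ]
}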

Extending S/4HANA OData service to SCP

I want to extend a custom OData service created in an S/4HANA system. I added a Cloud Connector to my machine, but I don't know where to go from there. The idea is that I want people to access the service from SCP and that I don't need multiple accounts accessing the service on the S/4 system, but just the one coming from SCP. Any ideas?
OK, I feel silly doing this, but it seems to work. My test is actually inconclusive because I don't have a Cloud Connector handy, but it works proxying Google.
I'm still thinking about how to make it publicly accessible. There might be people with better answers than this.
Create the Cloud Connector destination.
Make a new folder in Web IDE.
Create a file named neo-app.json with the following content:
{
  "routes": [{
    "path": "/google",
    "target": {
      "type": "destination",
      "name": "google"
    },
    "description": "google"
  }],
  "sendWelcomeFileRedirect": false
}
path is the proxy path in your app, so myapp.scp-account/google here. The target name is your destination; I called mine just google, but you'll put your Cloud Connector destination there.
Deploy.
My test app with the destination google pointing to https://www.google.com came out looking like this. Paths are relative, so it doesn't work, but Google seems proxied.
You'll still have to authenticate etc.
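For the actual S/4HANA case, the route would point at your Cloud Connector destination rather than Google. A rough sketch, assuming a destination named s4_onprem and the standard OData entry path (both are placeholders you would adjust to your own landscape):
{
  "routes": [{
    "path": "/s4odata",
    "target": {
      "type": "destination",
      "name": "s4_onprem",
      "entryPath": "/sap/opu/odata"
    },
    "description": "S/4HANA OData services via Cloud Connector"
  }],
  "sendWelcomeFileRedirect": false
}
Requests to myapp.scp-account/s4odata/... would then be forwarded through the destination (and the Cloud Connector) to /sap/opu/odata/... on the back end, using the single technical user configured on the destination rather than individual accounts in the S/4 system.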

Why does Azure Automation store individual jobs in a deployment template, and can I remove them without damaging the deployment template?

I have schedules in Azure Automation that run a PowerShell script to remove batches of rows from Azure Table Storage. I was looking at using a deployment template to add schedules for other environments and noticed that I had a large number of JSON objects with a name like:
[parameters('jobs_7d50108e_270d_456a_04da_b79cbe13ba12_name')]
This appears to be an individual instance of an automation job, as I can see the individual schedule and runbook information. It doesn't appear to have much information in it:
{
  "comments": "Generalized from resource: '/subscriptions/subscription-id/resourcegroups/resource-group/providers/Microsoft.Automation/automationAccounts/AutomationInstance/jobSchedules/job-schedule-id'.",
  "type": "Microsoft.Automation/automationAccounts/jobSchedules",
  "name": "[parameters('jobs_7d50108e_270d_456a_04da_b79cbe13ba12_name')]",
  "apiVersion": "2015-10-31",
  "properties": {
    "runbook": {
      "name": "MyRunBook"
    },
    "schedule": {
      "name": "MySchedule"
    },
    "parameters": null
  },
  "resources": [],
  "dependsOn": [
    "[resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccounts_AutomationInstance_name'))]"
  ]
}
Why is this being added to the deployment template (perhaps just for history)? Are there potentially bad effects if I remove them from the template?
A jobSchedule in Azure Automation is the association between a runbook and a schedule. If you remove the jobSchedules from the template, deploying it will still set up any runbooks and schedules defined in the template, but no runbooks will execute automatically based on those schedules.
The [parameters('jobs_7d50108e_270d_456a_04da_b79cbe13ba12_name')] line is just used to determine what the name of that jobSchedule will be, since every resource in ARM must be referenced by a name.