I have a CloudFormation stack (ECS cluster, Application Load Balancer, Auto Scaling group, launch templates, etc.). It all works fine, and we have been using it in production and pre-production environments for a while.
A problem recently arose while trying to push a stack update. I made some changes to UserData in the AWS::EC2::LaunchTemplate. If I launch a new stack from this template, it works great.
BUT:
If I create a change set and apply a stack update, CloudFormation creates a NEW launch template version; however, the Auto Scaling group still references the OLD version.
Looking at the AWS docs for AWS::AutoScaling::AutoScalingGroup LaunchTemplateSpecification, I see:
"AWS CloudFormation does not support specifying $Latest, or $Default for the template version number."
Has anyone wrangled with stack updates creating new versions of resources that need to be referenced elsewhere? I feel like I am missing something obvious.
Yay, I'm dumb:
Use Fn::GetAtt.
OK, make fun of me for using JSON instead of YAML:
...
"ECSAutoScalingGroup": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"VPCZoneIdentifier": {"Ref" : "Subnets"},
"MinSize": "1",
"MaxSize": "10",
"DesiredCapacity": { "Ref": "DesiredInstanceCount" },
"MixedInstancesPolicy": {
"InstancesDistribution" :
{
"OnDemandBaseCapacity" : "0",
"OnDemandPercentageAboveBaseCapacity" : { "Ref" : "PercentOnDemand"}
},
"LaunchTemplate" : {
"LaunchTemplateSpecification" : {
"LaunchTemplateId" : {"Ref" : "ECSLaunchTemplate"},
"Version" : { "Fn::GetAtt" : [ "ECSLaunchTemplate", "LatestVersionNumber" ] }
},
"Overrides" : [ {"InstanceType": "m5.xlarge"},{"InstanceType": "t3.xlarge"},{"InstanceType": "m4.xlarge" },{"InstanceType": "r4.xlarge"},{"InstanceType": "c4.xlarge"}]
}
}
},
...
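For what it's worth, applying the update through a change set can also be driven from the CLI. A minimal sketch, assuming the updated template is saved as template.json; the stack and change set names below are placeholders, and existing parameter values can be kept with UsePreviousValue=true:

aws cloudformation create-change-set --stack-name my-ecs-stack --change-set-name userdata-update --template-body file://template.json --parameters ParameterKey=DesiredInstanceCount,UsePreviousValue=true --capabilities CAPABILITY_IAM
aws cloudformation execute-change-set --stack-name my-ecs-stack --change-set-name userdata-update

Once the update completes, the Auto Scaling group should reference the new launch template version, because the Version property resolves Fn::GetAtt LatestVersionNumber after the launch template has been updated.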
Related
I'm using Serverless to deploy my AWS CloudFormation stack. On one of my tables, I enable streams via "StreamEnabled": true. When this is enabled, I get an error on deployment: Encountered unsupported property StreamEnabled.
If I remove the property, I get a validation exception: ValidationException: Stream StreamEnabled was null.
I found a GitHub issue that was addressed and apparently fixed (here), but after upgrading to v1.3, I'm still getting the same errors on deployment.
Can anyone lend insight as to what the issue may be?
It is enabled by default. You can check it from the shell:
aws dynamodbstreams list-streams
{
"Streams": [
{
"TableName": "MyTableName-dev",
"StreamArn": "arn:aws:dynamodb:eu-west-2:0000000000000:table/MyTableName-dev/stream/2018-10-26T15:06:25.995",
"StreamLabel": "2018-10-26T15:06:25.995"
}
]
}
And:
aws dynamodbstreams describe-stream --stream-arn "arn:aws:dynamodb:eu-west-2:00000000000:table/MyTableName-dev/stream/2018-10-26T15:06:25.995"
{
"StreamDescription": {
"StreamLabel": "2018-10-26T15:06:25.995",
"StreamStatus": "ENABLED",
"TableName": "MyTableName-dev",
"Shards": [
{
"ShardId": "shardId-000000000000000-0000000f",
"SequenceNumberRange": {
"StartingSequenceNumber": "00000000000000000000000"
}
}
],
"CreationRequestDateTime": 1540566385.987,
"StreamArn": "arn:aws:dynamodb:eu-west-2:0000000000000000:table/MyTableName-dev/stream/2018-10-26T15:06:25.995",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "application_id"
}
],
"StreamViewType": "KEYS_ONLY"
}
}
It is not a solution, but having found that out, I realized that I don't actually have an issue.
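For reference, the stream settings can also be confirmed on the table itself; a small sketch using the AWS CLI (table name taken from the output above):

aws dynamodb describe-table --table-name MyTableName-dev --query "Table.StreamSpecification"

This should report StreamEnabled as true along with the StreamViewType (KEYS_ONLY in the example above).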
I am using the VMware vCenter REST API to deploy new virtual machines from OVF library items. Part of the API allows for additional_parameters, but I am unable to get it to function properly. Specifically, I would like to set the PropertyParams for custom OVF template properties.
When deploying VM from OVF, I am using the following REST API:
POST https://{server}/rest/com/vmware/vcenter/ovf/library-item/id:{ovf_library_item_id}?~action=deploy
I have tried many structures, and I either end up with the POST succeeding but the parameters being completely ignored, or with a 500 Internal Server Error with a message about failing to convert the properties structure:
Could not convert field 'properties' of structure 'com.vmware.vcenter.ovf.property_params'
The payload that seems correct from the documentation (but fails with the error above):
deployment_spec : {
/* ... */
additional_parameters : [
{
type : 'PropertyParams',
properties : [
{
id : 'my_property_name',
value : 'foo',
}
]
}
]
}
Given an OVF that contains the following:
<ProductSection>
<Info>Information about the installed software</Info>
<Product>MyProduct</Product>
<Vendor>MyCompany</Vendor>
<Version>1.0</Version>
<Category>Config</Category>
<Property ovf:userConfigurable="true" ovf:type="string" ovf:key="my_property_name" ovf:value="">
<Label>My Property</Label>
<Description>A custom property</Description>
</Property>
</ProductSection>
This also fails for other property types such as boolean.
Note that I have posted on the vCenter forums as well.
I had the same issue; I managed to solve it by browsing the vAPI structure at /com/vmware/vapi/metadata/metamodel/structure/id:<idstructure>
Here are my findings:
First, get your properties structure by using the filter API:
https://{{vc}}/rest/com/vmware/vcenter/ovf/library-item/id:300401a5-4561-4c3d-ac67-67bc7a1a6
Then, to deploy, use the class com.vmware.vcenter.ovf.property_params. It will be clearer with the example:
{
"deployment_spec": {
"accept_all_EULA": true,
"name": "clientok",
"default_datastore_id": "datastore-10",
"additional_parameters": [
{
"#class": "com.vmware.vcenter.ovf.property_params",
"properties":
[
{
"instance_id": "",
"class_id": "",
"description": "The gateway IP for this virtual appliance.",
"id": "gateway",
"label": "Default Gateway Address",
"category": "LAN",
"type": "ip",
"value": "10.1.2.1",
"ui_optional": true
}
],
"type": "PropertyParams"
}
]
}
}
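Not part of the original answer, but for illustration, the deploy call with a body like the one above might look like this with curl (server, library item id, and session id are placeholders; note that the full request body normally also carries a target section with the resource pool alongside deployment_spec, omitted here as in the example):

curl -k -X POST "https://{server}/rest/com/vmware/vcenter/ovf/library-item/id:{ovf_library_item_id}?~action=deploy" -H "vmware-api-session-id: {session_id}" -H "Content-Type: application/json" -d @deploy-spec.json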
We used to follow the instructions here to set the bucket lifecycle policy, but with the latest gcloud components update, we are getting an error like this:
Failure: Unsupported tag SetStorageClass.
Searching the GCS storage lifecycle docs did not turn up any changes.
The command we used is gsutil lifecycle set <json file> gs://<bucket name>/, and the gsutil version is 4.25. The JSON file:
{
"lifecycle":{
"rule":[
{
"action":{
"type":"SetStorageClass",
"storageClass":"NEARLINE"
},
"condition":{
"age":30,
"matchesStorageClass":[
"REGIONAL",
"STANDARD",
"DURABLE_REDUCED_AVAILABILITY"
]
}
}
]
}
}
EDIT 2
This was fixed in this GitHub commit, which has been included in the newest version (v4.26) of gsutil.
EDIT
It looks like you actually uncovered a bug that occurs when using the XML API. I've opened a GitHub issue and will work on fixing this ASAP:
https://github.com/GoogleCloudPlatform/gsutil/issues/427
Thanks for the report!
Looking at the code in the Boto library, you're probably trying to specify SetStorageClass as a JSON key:
{
...
"SetStorageClass": ...
...
}
rather than making it the value of the action's type attribute. Here's an example using your (fixed) sample from a question comment:
{
"lifecycle": {
"rule": [
{
"action": {
"type": "SetStorageClass",
"storageClass": "NEARLINE"
},
"condition": {
"age":30,
"matchesStorageClass": ["STANDARD", "DURABLE_REDUCED_AVAILABILITY"]
}
}
]
}
}
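As a usage note (not part of the original answer): once a fixed gsutil is installed, the same command from the question works unchanged; the bucket and file names below are placeholders.

gcloud components update                            # pulls in gsutil 4.26 or later
gsutil version                                      # confirm it reports 4.26+
gsutil lifecycle set lifecycle.json gs://my-bucket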
I have already created a Service Fabric cluster with Azure Diagnostics, and it is currently functional with my services deployed into it. I have an ETW EventSource in my service that I would like to start collecting events from, because my service code already uses this event source to write my service-related events. Since the cluster is already enabled for Azure Diagnostics and my services are already deployed into it, I think it is a simple matter of updating the ETW provider list with my event source. Here is the exported template (only the part relevant to Azure Diagnostics is shown):
{
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.5",
"autoUpgradeMinorVersion": true,
"settings": {
"WadCfg": {
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": "50000",
"EtwProviders": {
"EtwEventSourceProviderConfiguration": [
{
"provider": "Microsoft-ServiceFabric-Actors",
"scheduledTransferKeywordFilter": "1",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableActorEventTable"
}
},
{
"provider": "Microsoft-ServiceFabric-Services",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
},
{
"provider": "Bb.ServiceFabric.Infrastructure.Container",
"scheduledTransferPeriod": "PT1M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
}
],
"EtwManifestProviderConfiguration": [
{
"provider": "cbd93bc2-71e5-4566-b3a7-595d8eeca6e8",
"scheduledTransferLogLevelFilter": "Information",
"scheduledTransferKeywordFilter": "4611686018427387904",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricSystemEventTable"
}
}
]
}
}
},
"StorageAccount": "sfdgsmsraghuplaygrou6827"
}
},
"name": "VMDiagnosticsVmExt_vmNodeType0Name"
}
I would like to update the EtwProviders/EtwEventSourceProviderConfiguration above to contain the following section (MyCompany.MyServices.MyStatelessService is the name of my service's EventSource):
{
"provider": "MyCompany.MyServices.MyStatelessService",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
}
Here are my questions:
Is this the correct way of inserting an ETW provider/EventSource (from my service) into an existing cluster (that is already enabled with Azure Diagnostics)?
Can I add this event source (as an ETW event source provider) using a PowerShell command (or commands)?
If so, what is the exact PowerShell command (using all the information from the above code fragment)?
Note: I am using .NET Framework 4.5.2.
All seems good with the added configuration above. Just be aware that for EtwProviders the eventDestination cannot contain hyphens (-); yours doesn't, so you are OK.
To update the Windows Azure Diagnostics (WAD) agent configuration, you can use either PowerShell or Cloud Explorer in Visual Studio.
For the former, simply update the ARM template and use the New-AzureRmResourceGroupDeployment cmdlet. See here for further information: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-diagnostics-how-to-setup-wad/#update-diagnostics-to-collect-and-upload-logs-from-new-eventsource-channels
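As a rough sketch of that route (the resource group and file names here are placeholders, not taken from the original post):

# Redeploy the exported ARM template after adding the new EtwEventSourceProviderConfiguration entry
New-AzureRmResourceGroupDeployment -ResourceGroupName "my-sf-rg" -TemplateFile ".\template.json" -TemplateParameterFile ".\template.parameters.json"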
For Cloud Explorer in Visual Studio: browse to your virtual machine scale set (as this is the Azure resource that holds the WAD configuration), right-click and choose Update Diagnostics. In the dialog shown, you have the option to upload a private and a public configuration file. Simply take a .json document containing the {"WadCfg": {}} element and upload that as the public configuration.
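For the public configuration document, a minimal sketch assembled from the settings element of the exported template above (in practice you would keep all of the existing provider entries and simply add the new one):

{
  "WadCfg": {
    "DiagnosticMonitorConfiguration": {
      "overallQuotaInMB": "50000",
      "EtwProviders": {
        "EtwEventSourceProviderConfiguration": [
          {
            "provider": "MyCompany.MyServices.MyStatelessService",
            "scheduledTransferPeriod": "PT5M",
            "DefaultEvents": { "eventDestination": "ServiceFabricReliableServiceEventTable" }
          }
        ]
      }
    }
  },
  "StorageAccount": "sfdgsmsraghuplaygrou6827"
}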
If you need to update it as well, the private configuration specifies the storage account name and access key:
{
"storageAccountName": "",
"storageAccountKey": "",
"storageAccountEndPoint": "https://core.windows.net",
}
Hope this helps.
Mikkel
After changing the target framework from 4.5.1 to 4.6, the service fails in Azure; the local deployment is working.
Do I need to add .NET 4.6 support? I'm unable to find where I can see the frameworks available in my cluster in Azure.
Thank you
ApplicationName       : fabric:/Lending20.Service.IdentityManagement
AggregatedHealthState : Error
UnhealthyEvaluations  : Unhealthy services: 100% (1/1), ServiceType='IdentityManagementServiceType', MaxPercentUnhealthyServices=0%.
                        Unhealthy service: ServiceName='fabric:/Lending20.Service.IdentityManagement/IdentityManagementService', AggregatedHealthState='Error'.
                        Unhealthy partitions: 100% (1/1), MaxPercentUnhealthyPartitionsPerService=0%.
                        Unhealthy partition: PartitionId='7c68b397-fda3-491d-9e17-921cd24217ca', AggregatedHealthState='Error'.
                        Error event: SourceId='System.FM', Property='State'.
ServiceHealthStates :
    ServiceName           : fabric:/Lending20.Service.IdentityManagement/IdentityManagementService
    AggregatedHealthState : Error
DeployedApplicationHealthStates :
    ApplicationName       : fabric:/Lending20.Service.IdentityManagement
    NodeName              : _lending1
    AggregatedHealthState : Ok
HealthEvents :
    SourceId          : System.CM
    Property          : State
    HealthState       : Ok
    SequenceNumber    : 3464
    SentAt            : 11/21/2015 12:38:08 PM
    ReceivedAt        : 11/21/2015 12:38:08 PM
    TTL               : Infinite
    Description       : Application has been created.
    RemoveWhenExpired : False
    IsExpired         : False
    Transitions       : Warning->Ok = 11/21/2015 12:38:08 PM, LastError = 1/1/0001 12:00:00 AM
You can use the following ARM template to install .NET 4.6.1. Note that it depends on this script (used by Service Profiler); you can also replace it with any other PowerShell script.
The parameter is the base name of the nodes, so if you have VM0..VM5 in your cluster, you should set vmName = 'VM'. The vmExtensionLoop is set to 5 nodes; you can change that too, of course.
If you use an ARM template to deploy your cluster, you can include this as part of it. Note it can slow down the deployment of the scale set, since it requires a restart.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vmName": {
"type": "string",
"metadata": {
"description": "Virtual machine name."
}
}
},
"resources": [
{
"apiVersion": "2015-05-01-preview",
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(parameters('vmName'),copyIndex(0), '/CustomScriptExtensionInstallNet461')]",
"location": "[variables('location')]",
"tags": {
"displayName": "CustomScriptExtensionInstallNet461"
},
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.4",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [ "https://gist.githubusercontent.com/aelij/7ea90dda4a187a482584/raw/a3e0f946d4a22b0af803edb503d0a30a263fba2c/InstallNetFx461.ps1" ],
"commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File InstallNetFx461.ps1"
}
},
"copy": {
"name": "vmExtensionLoop",
"count": 5
}
}
]
}
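If you want to deploy just this snippet against the cluster's resource group, one possible call (the resource group and file names are placeholders; parameters could equally come from a parameters file):

New-AzureRmResourceGroupDeployment -ResourceGroupName "my-sf-rg" -TemplateFile ".\installnet461.json" -TemplateParameterObject @{ vmName = "VM" }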
.NET 4.6 is not yet available in the default Windows Server 2012 image used in Azure. At this point, your only option is to log into each VM and install it.
Use the Windows Server 2016 image to get .NET 4.6.1 preinstalled: vmImageSku: "2016-Datacenter" when provisioning the cluster.
Another option is to use an Azure resource group template that includes a DSC extension to provision your VMs with .NET 4.6 installed.
Here is the snippet in my DSC PowerShell that deals with the installation of .NET 4.6.1; see the code or gist for a more complete script.
Until 4.6 is supported by Azure natively, I'd use a custom VM image with .NET 4.6 preinstalled. See this article for details on how to create and use one.
Now .NET 4.6 and above is available with the release of SDK 2.5.216 and Runtime 5.5.216.
For more details please see: https://azure.microsoft.com/en-us/blog/announcing-azure-service-fabric-5-5-and-sdk-2-5/