I'm using Serverless to deploy my AWS CloudFormation stack. On one of my tables, I enable streams via "StreamEnabled": true. When this is enabled, I get an error on deployment: Encountered unsupported property StreamEnabled.
If I remove the property, I get a validation exception: ValidationException: Stream StreamEnabled was null.
I found a GitHub issue that was addressed and apparently fixed (here), but after upgrading to v1.3 I'm still getting the same errors on deployment.
Can anyone lend insight as to what the issue may be?
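For reference, the relevant fragment of my resources section looks roughly like this (a reconstruction; the view type is just an example):

"StreamSpecification": {
  "StreamEnabled": true,
  "StreamViewType": "KEYS_ONLY"
}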
It is enabled by default. You can check from the shell:
aws dynamodbstreams list-streams
{
"Streams": [
{
"TableName": "MyTableName-dev",
"StreamArn": "arn:aws:dynamodb:eu-west-2:0000000000000:table/MyTableName-dev/stream/2018-10-26T15:06:25.995",
"StreamLabel": "2018-10-26T15:06:25.995"
}
]
}
And:
aws dynamodbstreams describe-stream --stream-arn "arn:aws:dynamodb:eu-west-2:00000000000:table/MyTableName-dev/stream/2018-10-26T15:06:25.995"
{
"StreamDescription": {
"StreamLabel": "2018-10-26T15:06:25.995",
"StreamStatus": "ENABLED",
"TableName": "MyTableName-dev",
"Shards": [
{
"ShardId": "shardId-000000000000000-0000000f",
"SequenceNumberRange": {
"StartingSequenceNumber": "00000000000000000000000"
}
}
],
"CreationRequestDateTime": 1540566385.987,
"StreamArn": "arn:aws:dynamodb:eu-west-2:0000000000000000:table/MyTableName-dev/stream/2018-10-26T15:06:25.995",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "application_id"
}
],
"StreamViewType": "KEYS_ONLY"
}
}
This is not a solution, but having found that fact, I realized that I don't actually have an issue.
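For anyone hitting the same pair of errors: CloudFormation's AWS::DynamoDB::Table accepts only StreamViewType inside StreamSpecification, and the presence of that block is what enables the stream, which matches the CLI output above. A minimal sketch of the table resource (key schema and names taken from the output above; the attribute type and throughput values are assumptions):

"MyTable": {
  "Type": "AWS::DynamoDB::Table",
  "Properties": {
    "TableName": "MyTableName-dev",
    "AttributeDefinitions": [
      { "AttributeName": "application_id", "AttributeType": "S" }
    ],
    "KeySchema": [
      { "AttributeName": "application_id", "KeyType": "HASH" }
    ],
    "ProvisionedThroughput": {
      "ReadCapacityUnits": 1,
      "WriteCapacityUnits": 1
    },
    "StreamSpecification": { "StreamViewType": "KEYS_ONLY" }
  }
}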
I have a problem running an index management policy for new indices. I get the following error at the "set number_of_replicas" step:
{
"cause": "no permissions for [indices:admin/settings/update] and associated roles [index_management_full_access, own_index, security_rest_api_access]",
"message": "Failed to set number_of_replicas to 2 [index=sample.name-2022.10.22]"
}
The indices are created by Logstash with the "sample.name-YYYY.MM.DD" name template, so the index policy uses the "sample.name-*" index pattern.
My policy:
{
"policy_id": "sample.name-*",
"description": "sample.name-* policy ",
"schema_version": 16,
"error_notification": null,
"default_state": "set replicas",
"states": [
{
"name": "set replicas",
"actions": [
{
"replica_count": {
"number_of_replicas": 2
}
}
]
}
],
"ism_template": [
{
"index_patterns": [
"sample.name-*"
],
"priority": 1
}
]
}
I don't understand the reason for this error.
Am I doing something wrong?
Retrying the policy doesn't work.
The policy only works if I manually reassign it to the index via Dashboards or the API, as shown below.
Opensearch version: 2.3.0
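For completeness, this is roughly how the manual reassignment via the API looks (Dev Tools syntax; the index name is just an example):

POST _plugins/_ism/add/sample.name-2022.10.22
{
  "policy_id": "sample.name-*"
}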
I originally created the policy via the API as a custom internal user with only the "security_rest_api_access" security role mapped.
So I added all_access rights to my internal user and re-created the policy, and now it works!
It seems that the policy runs under the internal user that created it.
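A rough sketch of the re-creation under the privileged user (Dev Tools syntax; the body is the policy from the question wrapped in a top-level "policy" key, and deleting first sidesteps the sequence-number check that in-place updates require):

DELETE _plugins/_ism/policies/sample.name-*

PUT _plugins/_ism/policies/sample.name-*
{
  "policy": {
    "description": "sample.name-* policy",
    "default_state": "set replicas",
    "states": [
      {
        "name": "set replicas",
        "actions": [
          { "replica_count": { "number_of_replicas": 2 } }
        ]
      }
    ],
    "ism_template": [
      { "index_patterns": ["sample.name-*"], "priority": 1 }
    ]
  }
}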
My CloudFormation stack throws the error "Encountered unsupported property ComparisionOperator" while creating an AWS::CloudWatch::Alarm.
As per the AWS documentation, the ComparisionOperator value GreaterThanOrEqualtoThreshold is valid.
I am using AWSTemplateFormatVersion 2010-09-09.
Any help would be appreciated :)
"CPUHighAlarm":{
"Type":"AWS::CloudWatch::Alarm",
"Properties":{
"AlarmDescription":"High CPU utilization",
"MetricName":"CPUUtilization",
"Namespace":"AWS/EC2",
"AlarmActions":[{"Ref":"asgScaleOut"}],
"ComparisionOperator": "GreaterThanOrEqualtoThreshold",
"EvaluationPeriods": "1",
"Threshold": "70",
"Period":"180",
"Statistic": "Average",
"Dimensions": [
{
"Name": "AutoScalingGroupName",
"Value": {
"Ref": "asg"
}
}
]
}
},
Just a typo. Should be ComparisonOperator instead of ComparisionOperator.
The CloudFormation Linter can help you catch these quicker and the Visual Studio Code extension can help prevent typos with autocompletion:
E3002: Invalid Property Resources/CPUHighAlarm/Properties/ComparisionOperator
Maybe it's case sensitive. Try GreaterThanOrEqualToThreshold.
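Putting the two answers together, the corrected line in the template is:

"ComparisonOperator": "GreaterThanOrEqualToThreshold",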
I have a stack in CloudFormation (ECS cluster, Application Load Balancer, Auto Scaling group, launch templates, etc.). It all works fine, and we have been using it in production and pre-production environments for a while.
A problem recently arose while trying to push a stack update. I made some changes to UserData in the AWS::EC2::LaunchTemplate. If I launch a new stack from this template, it works great.
BUT:
If I make a change set and apply a stack update, CloudFormation creates a NEW launch template version; however, the Auto Scaling group still references the OLD version.
Looking at the AWS docs for AWS::AutoScaling::AutoScalingGroup LaunchTemplateSpecification
I see:
"AWS CloudFormation does not support specifying $Latest, or $Default for the template version number."
Has anyone wrangled with stack updates creating new versions of resources that need to be referenced elsewhere? I feel like I am missing something obvious.
Yay, I'm dumb:
use Fn::GetAtt
OK, make fun of me for using JSON, not YAML
...
"ECSAutoScalingGroup": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"VPCZoneIdentifier": {"Ref" : "Subnets"},
"MinSize": "1",
"MaxSize": "10",
"DesiredCapacity": { "Ref": "DesiredInstanceCount" },
"MixedInstancesPolicy": {
"InstancesDistribution" :
{
"OnDemandBaseCapacity" : "0",
"OnDemandPercentageAboveBaseCapacity" : { "Ref" : "PercentOnDemand"}
},
"LaunchTemplate" : {
"LaunchTemplateSpecification" : {
"LaunchTemplateId" : {"Ref" : "ECSLaunchTemplate"},
"Version" : { "Fn::GetAtt" : [ "ECSLaunchTemplate", "LatestVersionNumber" ] }
},
"Overrides" : [ {"InstanceType": "m5.xlarge"},{"InstanceType": "t3.xlarge"},{"InstanceType": "m4.xlarge" },{"InstanceType": "r4.xlarge"},{"InstanceType": "c4.xlarge"}]
}
}
},
...
We used to follow the instructions here to set the bucket lifecycle policy, but with the latest gcloud components update we are getting an error like this:
Failure: Unsupported tag SetStorageClass.
Searching the GCS lifecycle docs did not turn up any relevant change.
The command we used is gsutil lifecycle set <json file> gs://<bucket name>/
and the gsutil version is 4.25.
{
"lifecycle":{
"rule":[
{
"action":{
"type":"SetStorageClass",
"storageClass":"NEARLINE"
},
"condition":{
"age":30,
"matchesStorageClass":[
"REGIONAL",
"STANDARD",
"DURABLE_REDUCED_AVAILABILITY"
]
}
}
]
}
}
EDIT 2
This was fixed in this GitHub commit, which has been included in the newest version (v4.26) of gsutil.
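If your gsutil is installed through the Cloud SDK (as the question suggests), updating the components and re-checking the version should pick up the fix:

gcloud components update
gsutil version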
EDIT
It looks like you actually uncovered a bug that occurs when using the XML API. I've opened a GitHub issue and will work on fixing this ASAP:
https://github.com/GoogleCloudPlatform/gsutil/issues/427
Thanks for the report!
Looking at the code in the Boto library, you're probably trying to specify SetStorageClass as a JSON key:
{
...
"SetStorageClass": ...
...
}
rather than making it the value of the action's type attribute. Here's an example using your (fixed) sample from a question comment:
{
"lifecycle": {
"rule": [
{
"action": {
"type": "SetStorageClass",
"storageClass": "NEARLINE"
},
"condition": {
"age":30,
"matchesStorageClass": ["STANDARD", "DURABLE_REDUCED_AVAILABILITY"]
}
}
]
}
}
I have already created a Service Fabric cluster with Azure Diagnostics, and it is currently functional with my services deployed into it. I have an ETW EventSource in my service that I would like to start collecting events from, because my service code already uses this event source to write my service-related events. Since the cluster is already enabled for Azure Diagnostics and my services are already deployed into it, I think it is a simple matter of updating the ETW provider list with my event source. Here is the exported template (only the partial relevant to Azure Diagnostics is shown):
{
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.5",
"autoUpgradeMinorVersion": true,
"settings": {
"WadCfg": {
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": "50000",
"EtwProviders": {
"EtwEventSourceProviderConfiguration": [
{
"provider": "Microsoft-ServiceFabric-Actors",
"scheduledTransferKeywordFilter": "1",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableActorEventTable"
}
},
{
"provider": "Microsoft-ServiceFabric-Services",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
},
{
"provider": "Bb.ServiceFabric.Infrastructure.Container",
"scheduledTransferPeriod": "PT1M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
}
],
"EtwManifestProviderConfiguration": [
{
"provider": "cbd93bc2-71e5-4566-b3a7-595d8eeca6e8",
"scheduledTransferLogLevelFilter": "Information",
"scheduledTransferKeywordFilter": "4611686018427387904",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricSystemEventTable"
}
}
]
}
}
},
"StorageAccount": "sfdgsmsraghuplaygrou6827"
}
},
"name": "VMDiagnosticsVmExt_vmNodeType0Name"
}
I would like to update the EtwProviders/EtwEventSourceProviderConfiguration above to contain the following section (MyCompany.MyServices.MyStatelessService is the name of my service's EventSource):
{
"provider": "MyCompany.MyServices.MyStatelessService",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
}
Here are my questions:
Is this the correct way of inserting an ETW provider/EventSource (from my service) into an existing cluster (that is already enabled with Azure Diagnostics)?
Can I add this event source (as an ETW event source provider) using PowerShell command(s)?
If so, what is the exact PowerShell command (using all the information from the above code fragment)?
Note: I am using .NET Framework 4.5.2.
All seems good with the added configuration above. Just be aware that for EtwProviders the eventDestination cannot contain hyphens (-); yours doesn't, so you are OK.
To update the Windows Azure Diagnostics (WAD) agent configuration, you can use either PowerShell or Cloud Explorer in Visual Studio.
For the former, simply update the ARM template and use the New-AzureRmResourceGroupDeployment cmdlet. See here for further information: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-diagnostics-how-to-setup-wad/#update-diagnostics-to-collect-and-upload-logs-from-new-eventsource-channels
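A minimal sketch of that deployment call (the resource group and file names are placeholders):

New-AzureRmResourceGroupDeployment `
  -ResourceGroupName "my-sf-cluster-rg" `
  -TemplateFile .\template.json `
  -TemplateParameterFile .\parameters.json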
To use Cloud Explorer in Visual Studio, browse to your Virtual Machine Scale Set (as this is the Azure resource that holds the WAD configuration). Right-click and choose Update Diagnostics. In the dialog shown, you have the option to upload a private and a public configuration file. Simply take a .json document containing the {"WadCfg": {}} element and upload that as the public configuration.
If you need to update the private configuration, which specifies the storage account name and access key:
{
"storageAccountName": "",
"storageAccountKey": "",
"storageAccountEndPoint": "https://core.windows.net",
}
Hope this helps.
Mikkel