Azure Bicep storage account SKU is read-only

I am trying to deploy a storage account using Azure Bicep.
In my code:
resource storageAccounts_storageacntin_name_default 'Microsoft.Storage/storageAccounts/blobServices@2021-04-01' = {
  parent: storageAccounts_storageacntin_name_resource
  name: 'default'
  sku: {
    name: 'Standard_RAGRS'
    tier: 'Standard'
  }
  properties: {
    changeFeed: {
      enabled: false
    }
    restorePolicy: {
      enabled: false
    }
    containerDeleteRetentionPolicy: {
      enabled: true
      days: 7
    }
    cors: {
      corsRules: []
    }
    deleteRetentionPolicy: {
      enabled: true
      days: 30
    }
    isVersioningEnabled: true
  }
}
I get an error on the sku property:
The property "sku" is read-only. Expressions cannot be assigned to read-only properties.bicep(BCP073)
I don't fully understand why this error is showing up; I am still new to Azure Bicep and slowly moving from Terraform deployments to it.
Can anyone explain why this error comes up and how to solve it?
Thank you so much.
UPDATED CODE:
This is the code I am left with after removing the sku, and it still errors:
param storageAccounts array = [
  'storage1'
]

resource storage_Accounts 'Microsoft.Storage/storageAccounts@2021-04-01' = [for storageName in storageAccounts: {
  name: [storageName]
  location: 'westeurope'
  sku: {
    name: 'Standard_RAGRS'
    tier: 'Standard'
  }
  kind: 'StorageV2'
  properties: {
    allowCrossTenantReplication: true
    minimumTlsVersion: 'TLS1_2'
    allowBlobPublicAccess: false
    allowSharedKeyAccess: true
    networkAcls: {
      bypass: 'AzureServices'
      virtualNetworkRules: []
      ipRules: []
      defaultAction: 'Allow'
    }
    supportsHttpsTrafficOnly: true
    encryption: {
      services: {
        file: {
          keyType: 'Account'
          enabled: true
        }
        blob: {
          keyType: 'Account'
          enabled: true
        }
      }
      keySource: 'Microsoft.Storage'
    }
    accessTier: 'Hot'
  }
}]

resource storageAccounts_hamzaelaouane1_name_default 'Microsoft.Storage/storageAccounts/blobServices@2021-04-01' = [for storageName in storageAccounts: {
  parent: [storage_Accounts]
  name: storageName
  properties: {
    changeFeed: {
      enabled: false
    }
    restorePolicy: {
      enabled: false
    }
    containerDeleteRetentionPolicy: {
      enabled: true
      days: 7
    }
    cors: {
      corsRules: []
    }
    deleteRetentionPolicy: {
      enabled: true
      days: 30
    }
    isVersioningEnabled: true
  }
}]
The error is at the last two lines: it says it is expecting } and ] at that point. Checking line by line, I couldn't spot any syntax error.

The sku field is read-only for services that live under a storage account, such as blobServices and fileServices.
You can only set the SKU at the storage account level (Microsoft.Storage/storageAccounts@2021-04-01).
To be complete: the tier field is also read-only on the storage account, since it is derived from the SKU name. Remove these fields and you should be good to go.
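For the updated loop code, the array-wrapped values are the likely culprits: name expects a plain string, not [storageName], and parent expects a resource reference, not [storage_Accounts]. A minimal corrected sketch (assuming the same names as the question; the blobServices child must be named 'default' and carries no sku):
param storageAccounts array = [
  'storage1'
]

resource storage_Accounts 'Microsoft.Storage/storageAccounts@2021-04-01' = [for storageName in storageAccounts: {
  name: storageName // plain string, not [storageName]
  location: 'westeurope'
  kind: 'StorageV2'
  sku: {
    name: 'Standard_RAGRS' // SKU is set here, on the account; tier is derived
  }
}]

resource blobServices 'Microsoft.Storage/storageAccounts/blobServices@2021-04-01' = [for (storageName, i) in storageAccounts: {
  parent: storage_Accounts[i] // index the sibling loop, not [storage_Accounts]
  name: 'default'
  properties: {
    deleteRetentionPolicy: {
      enabled: true
      days: 30
    }
    isVersioningEnabled: true
  }
}]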

Related

ARM template for Azure Data Factory with diagnostic settings in Bicep

I have the following Bicep to create an ADF resource:
resource dataFactory 'Microsoft.DataFactory/factories@2018-06-01' = {
  name: name
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    globalParameters: {
      environment: {
        type: 'String'
        value: environmentAbbreviation
      }
    }
  }
  location: location
}
I need to add a diagnostic setting to the ADF resource. How do I update the Bicep?
I tried to create diagnostic settings in ADF using Bicep. Below is the code.
Bicep code for creating the data factory
This code creates the data factory; it is the same as the code in the question.
param settingName string = 'XXXXX'
param factoryName string = 'XXXXX'

resource datafactory 'Microsoft.DataFactory/factories@2018-06-01' = {
  name: factoryName
  location: resourceGroup().location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
  }
}
Bicep code for adding the diagnostic setting
To add the diagnostic setting to the data factory, the code below is added alongside the code that creates the data factory.
resource factoryName_microsoft_insights_settingName 'Microsoft.DataFactory/factories/providers/diagnosticSettings@2017-05-01-preview' = {
  name: '${factoryName}/microsoft.insights/${settingName}'
  location: resourceGroup().location
  properties: {
    workspaceId: 'XXXX'
    logAnalyticsDestinationType: 'Dedicated'
    logs: [
      {
        category: 'PipelineRuns'
        enabled: true
        retentionPolicy: {
          enabled: false
          days: 0
        }
      }
      {
        category: 'TriggerRuns'
        enabled: true
        retentionPolicy: {
          enabled: false
          days: 0
        }
      }
      {
        category: 'ActivityRuns'
        enabled: true
        retentionPolicy: {
          enabled: false
          days: 0
        }
      }
    ]
    metrics: [
      {
        category: 'AllMetrics'
        timeGrain: 'PT1M'
        enabled: true
        retentionPolicy: {
          enabled: false
          days: 0
        }
      }
    ]
  }
  dependsOn: [
    datafactory
  ]
}
When both of the above snippets are combined and deployed, the resources are created successfully.
The code above enables the categories Pipeline runs log, Trigger runs log, and Pipeline activity runs log. Adjust the categories to your requirements.
Reference: Microsoft.Insights/diagnosticSettings - Bicep, ARM template & Terraform AzAPI reference | Microsoft Learn
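For what it's worth, newer API versions also let you attach the diagnostic setting with the scope property instead of the nested factories/providers/diagnosticSettings name; a minimal sketch of that variant (using the same datafactory symbol as above):
resource diagnosticSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: settingName
  scope: datafactory // attaches the setting directly to the factory
  properties: {
    workspaceId: 'XXXX'
    logs: [
      {
        category: 'PipelineRuns'
        enabled: true
      }
    ]
  }
}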

Bicep variable creation from module output

I'm trying to create multiple App Services with Bicep using a for loop on a module.
The module has the following outputs:
output id string = appServiceAppResource.id
output name string = appServiceAppResource.name
output vnetIntegrationOn bool = allowVnetIntegration[appServicePlan.sku.name]
output principalId string = appServiceAppResource.identity.principalId // <-- important
After this, I want to create a key vault and provide "get" access, via an access policy, to all the app services previously created:
...
param appServices object
...

module appServicePlanModules 'modules/appServicePlan.bicep' = [for appServicesConfig in appServicePlans.appServicesConfig: {
  name: '${appServicesConfig.name}-appserviceplan-module'
  params: {
    location: location
    name: appServicesConfig.name
    sku: appServicesConfig.sku
  }
}]
...

var accessPolicy = [for i in range(0, appServices.instanceCount): {
  objectId: appServiceModule[i].outputs.principalId
  permissions: {
    secrets: ['get']
  }
  tenantId: subscription().tenantId
}]

module keyValutModule 'modules/keyvault.bicep' = {
  name: 'key-valut-module'
  dependsOn: [appServiceModule]
  params: {
    location: location
    accessPolicies: accessPolicy
    publicNetworkAccess: keyvault.publicNetworkAccess
    keyVaultName: keyvault.name
  }
}
The problem is that when I try to create that access policy, it fails.
What puzzles me is that this works:
var accessPolicy = [
  {
    objectId: appServiceModule[0].outputs.principalId
    permissions: {
      secrets: ['get']
    }
    tenantId: subscription().tenantId
  }
  {
    objectId: appServiceModule[1].outputs.principalId
    permissions: {
      secrets: ['get']
    }
    tenantId: subscription().tenantId
  }
]
And also this:
var accessPolicies = [for i in range(0, 1): {
  objectId: '52xxxxxx-25xx-4xxf-axxx-xxdxx3axxdff'
  permissions: {
    secrets: ['get']
  }
  tenantId: subscription().tenantId
}]
Since I want to use this template for multiple environments, I want it to be more generic (so that I can have 1 or 5 app services), and that for loop would be very useful for me.
I'm not sure why a for loop combined with a module output doesn't work.
Do you have any idea why, or whether there is a workaround for this?
Thank you!
Best regards,
Dorin
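A note on the likely cause (an assumption, not from the thread): a variable declared with a for-expression compiles down to an ARM copy variable, and ARM does not allow runtime reference() calls, which is what module outputs become, inside variable copy loops; the hard-coded and unrolled variants work because they avoid that path. An untested workaround sketch that moves the loop out of the variable and inline into the module parameter:
module keyValutModule 'modules/keyvault.bicep' = {
  name: 'key-valut-module'
  params: {
    location: location
    // build the policies inline, in a property context rather than a variable copy loop
    accessPolicies: [for i in range(0, appServices.instanceCount): {
      objectId: appServiceModule[i].outputs.principalId
      permissions: {
        secrets: ['get']
      }
      tenantId: subscription().tenantId
    }]
    publicNetworkAccess: keyvault.publicNetworkAccess
    keyVaultName: keyvault.name
  }
}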

Deploy a FargateService to an ECS cluster that's living within a different Stack (project)

1- I have a project core-infra that encompasses all the core infra-related components (VPCs, subnets, ECS cluster, etc.)
2- I have microservice projects, each with an independent stack used for deployment
I want to deploy a FargateService from a microservice project stack A to the already existing ECS cluster living within the core-infra stack.
Affected area/feature
Pulumi Service
ECS
Deploy microservice
FargateService
Pulumi Stack References are the answer here:
https://www.pulumi.com/docs/intro/concepts/stack/#stackreferences
Your core-infra stack would output the ECS cluster ID, and stack B would then consume that output so it can, for example, deploy an ECS service to the given cluster (https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/).
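A minimal sketch of the export side in core-infra (resource and stack names are illustrative; the consuming side is shown in the code below):
import * as aws from '@pulumi/aws';

// core-infra program: anything exported at the top level becomes a stack output,
// which other stacks can read via new pulumi.StackReference('org/core-infra/dev')
const cluster = new aws.ecs.Cluster('core-cluster');
export const ecsClusterId = cluster.id;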
I was able to deploy using aws classic.
PS: the setup is considerably more complex than with awsx, and the docs and resources aren't exhaustive.
Now I have a few issues:
The load balancer isn't reachable and keeps loading forever
I don't have any logs in the CloudWatch log group
I'm not sure how to wire the LB listener to the ECS service, nor about the port mapping
Here is the complete code for reference (for people who are hustling), and I'd appreciate any suggested improvements/answers.
// Assumed imports (not shown in the original snippet); ConfigMapLoader is the author's own helper.
import * as pulumi from '@pulumi/pulumi';
import * as aws from '@pulumi/aws';

// Capture the EnvVars
const appName = process.env.APP_NAME;
const namespace = process.env.NAMESPACE;
const environment = process.env.ENVIRONMENT;

// Load the Deployment Environment config.
const configMapLoader = new ConfigMapLoader(namespace, environment);

const env = pulumi.getStack();
const infra = new pulumi.StackReference(`org/core-datainfra/${env}`);

// Fetch ECS Fargate cluster ID.
const ecsClusterId = infra.getOutput('ecsClusterId');
// Fetch DeVpc ID.
const deVpcId = infra.getOutput('deVpcId');
// Fetch DeVpc subnets IDs.
const subnets = ['subnet-aaaaaaaaaa', 'subnet-bbbbbbbbb'];
// Fetch DeVpc Security Group ID.
const securityGroupId = infra.getOutput('deSecurityGroupId');

// Define the Networking for our service.
const serviceLb = new aws.lb.LoadBalancer(`${appName}-lb`, {
  internal: false,
  loadBalancerType: 'application',
  securityGroups: [securityGroupId],
  subnets,
  enableDeletionProtection: false,
  tags: {
    Environment: environment
  }
});

const serviceTargetGroup = new aws.lb.TargetGroup(`${appName}-t-g`, {
  port: configMapLoader.configMap.service.http.externalPort,
  protocol: configMapLoader.configMap.service.http.protocol,
  vpcId: deVpcId,
  targetType: 'ip'
});

const http = new aws.lb.Listener(`${appName}-listener`, {
  loadBalancerArn: serviceLb.arn,
  port: configMapLoader.configMap.service.http.externalPort,
  protocol: configMapLoader.configMap.service.http.protocol,
  defaultActions: [
    {
      type: 'forward',
      targetGroupArn: serviceTargetGroup.arn
    }
  ]
});

// Create AmazonECSTaskExecutionRolePolicy
const taskExecutionPolicy = new aws.iam.Policy(
  `${appName}-task-execution-policy`,
  {
    policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [
        {
          Effect: 'Allow',
          Action: [
            'ecr:GetAuthorizationToken',
            'ecr:BatchCheckLayerAvailability',
            'ecr:GetDownloadUrlForLayer',
            'ecr:BatchGetImage',
            'logs:CreateLogStream',
            'logs:PutLogEvents',
            'ec2:AuthorizeSecurityGroupIngress',
            'ec2:Describe*',
            'elasticloadbalancing:DeregisterInstancesFromLoadBalancer',
            'elasticloadbalancing:DeregisterTargets',
            'elasticloadbalancing:Describe*',
            'elasticloadbalancing:RegisterInstancesWithLoadBalancer',
            'elasticloadbalancing:RegisterTargets'
          ],
          Resource: '*'
        }
      ]
    })
  }
);

// IAM role that allows Amazon ECS to make calls to the load balancer
const taskExecutionRole = new aws.iam.Role(`${appName}-task-execution-role`, {
  assumeRolePolicy: JSON.stringify({
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Principal: {
          Service: ['ecs-tasks.amazonaws.com']
        },
        Action: 'sts:AssumeRole'
      },
      {
        Action: 'sts:AssumeRole',
        Principal: {
          Service: 'ecs.amazonaws.com'
        },
        Effect: 'Allow',
        Sid: ''
      },
      {
        Action: 'sts:AssumeRole',
        Principal: {
          Service: 'ec2.amazonaws.com'
        },
        Effect: 'Allow',
        Sid: ''
      }
    ]
  }),
  tags: {
    name: `${appName}-iam-role`
  }
});

new aws.iam.RolePolicyAttachment(`${appName}-role-policy`, {
  role: taskExecutionRole.name,
  policyArn: taskExecutionPolicy.arn
});

// New image to be pulled
const image = `${configMapLoader.configMap.service.image.repository}:${process.env.IMAGE_TAG}`;

// Set up Log Group
const awsLogGroup = new aws.cloudwatch.LogGroup(`${appName}-awslogs-group`, {
  name: `${appName}-awslogs-group`,
  tags: {
    Application: `${appName}`,
    Environment: 'production'
  }
});

const serviceTaskDefinition = new aws.ecs.TaskDefinition(
  `${appName}-task-definition`,
  {
    family: `${appName}-task-definition`,
    networkMode: 'awsvpc',
    executionRoleArn: taskExecutionRole.arn,
    requiresCompatibilities: ['FARGATE'],
    cpu: configMapLoader.configMap.service.resources.limits.cpu,
    memory: configMapLoader.configMap.service.resources.limits.memory,
    containerDefinitions: JSON.stringify([
      {
        name: `${appName}-fargate`,
        image,
        cpu: parseInt(
          configMapLoader.configMap.service.resources.limits.cpu
        ),
        memory: parseInt(
          configMapLoader.configMap.service.resources.limits.memory
        ),
        essential: true,
        portMappings: [
          {
            containerPort: 80,
            hostPort: 80
          }
        ],
        environment: configMapLoader.getConfigAsEnvironment(),
        logConfiguration: {
          logDriver: 'awslogs',
          options: {
            'awslogs-group': `${appName}-awslogs-group`,
            'awslogs-region': 'us-east-2',
            'awslogs-stream-prefix': `${appName}`
          }
        }
      }
    ])
  }
);

// Create a Fargate service task that can scale out.
const fargateService = new aws.ecs.Service(`${appName}-fargate`, {
  name: `${appName}-fargate`,
  cluster: ecsClusterId,
  taskDefinition: serviceTaskDefinition.arn,
  desiredCount: 5,
  loadBalancers: [
    {
      targetGroupArn: serviceTargetGroup.arn,
      containerName: `${appName}-fargate`,
      containerPort: configMapLoader.configMap.service.http.internalPort
    }
  ],
  networkConfiguration: {
    subnets
  }
});

// Export the Fargate Service Info.
export const fargateServiceName = fargateService.name;
export const fargateServiceUrl = serviceLb.dnsName;
export const fargateServiceId = fargateService.id;
export const fargateServiceImage = image;
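Not from the thread, but a common cause of exactly these symptoms with awsvpc Fargate tasks is the service's networkConfiguration: without securityGroups and, in public subnets with no NAT gateway, assignPublicIp, the tasks cannot pull the image or reach CloudWatch, so nothing ever registers with the target group and no logs appear. A hedged sketch of the service with those fields and an explicit launch type:
const fargateService = new aws.ecs.Service(`${appName}-fargate`, {
  name: `${appName}-fargate`,
  cluster: ecsClusterId,
  taskDefinition: serviceTaskDefinition.arn,
  launchType: 'FARGATE', // the provider defaults to EC2 when omitted
  desiredCount: 5,
  loadBalancers: [
    {
      targetGroupArn: serviceTargetGroup.arn,
      containerName: `${appName}-fargate`,
      containerPort: 80 // must match a containerPort in the task definition
    }
  ],
  networkConfiguration: {
    subnets,
    securityGroups: [securityGroupId], // must allow inbound traffic from the ALB
    assignPublicIp: true // needed in public subnets without a NAT gateway
  }
});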

Include an object from another file into main bicep template

I'm trying to do something and I don't know if it's possible; if it is, I'm asking for some help on how to do it.
I have a file "test.bicep" that has an object:
{
  name: 'testRafaelRule'
  priority: 1001
  ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
  action: {
    type: 'Allow'
  }
  rules: [
    {
      name: 'deleteme-1'
      ipProtocols: [
        'Any'
      ]
      destinationPorts: [
        '*'
      ]
      sourceAddresses: [
        '192.168.0.0/16'
      ]
      sourceIpGroups: []
      destinationIpGroups: []
      destinationAddresses: [
        'AzureCloud.EastUS'
      ]
      ruleType: 'NetworkRule'
      destinationFqdns: []
    }
  ]
}
and I have another file, in which I am trying to somehow input the object in test.bicep into a specific property called "ruleCollections":
resource fwll 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2020-11-01' = {
  name: 'netrules'
  properties: {
    priority: 200
    ruleCollections: [
      **ADD_OBJECT_FROM_TEST.BICEP_HERE_HOW?**
    ]
  }
}
Any suggestions or links to useful documentation would be helpful.
I have looked at outputs and parameters, but I am trying to add just an object into an existing property; I am not adding an entire resource on its own, otherwise I would output the resource and consume it with the "module" keyword.
It's not possible directly, but you can leverage variables or a module's output.
var RULE = {
  name: 'testRafaelRule'
  priority: 1001
  (...)
}

resource fwll 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2020-11-01' = {
  name: 'netrules'
  properties: {
    ruleCollections: [
      RULE
    ]
  }
}
or
rule.bicep
output rule object = {
  name: 'testRafaelRule'
  priority: 1001
  (...)
}
main.bicep
module fwrule 'rule.bicep' = {
  name: 'fwrule'
}

resource fwll 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2020-11-01' = {
  name: 'netrules'
  properties: {
    ruleCollections: [
      fwrule.outputs.rule
    ]
  }
}
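As a further option (not mentioned in the answer, and assuming a Bicep CLI recent enough to have loadJsonContent): if the rule collection is kept as a JSON file instead of a .bicep file, it can be inlined directly:
var rule = loadJsonContent('rule.json')

resource fwll 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2020-11-01' = {
  name: 'netrules'
  properties: {
    priority: 200
    ruleCollections: [
      rule
    ]
  }
}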

searchkick reindex not working in staging env

In the development environment, Moment.reindex and search work fine, but in the staging environment I get an error:
2.3.1 :002 > Moment.reindex
Elasticsearch::Transport::Transport::Errors::BadRequest: [400] {"error":{"root_cause":[{"type":"parse_exception","reason":"Failed to parse content to map"}],"type":"parse_exception","reason":"Failed to parse content to map","caused_by":{"type":"json_parse_exception","reason":"Duplicate field 'moment'\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput#1e0d7046; line: 1, column: 2720]"}},"status":400}
The staging env uses the same ES.
My Moment class:
class Moment
include Mongoid::Document
searchkick inheritance: true, callbacks: :async, merge_mappings: true, mappings: {
moment: {
properties: {
text: {
type: "text",
# analyzer: "ik_max_word",
fields: {
analyzed: {
type: "text",
analyzer: "ik_max_word"
}
}
}
}
}
}}
GET /_cat/indices?v
health status index
yellow open moments_development_20180223203756302
yellow open moments_staging
This was an issue with how mappings were merged. It's fixed in the latest version of Searchkick.
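A minimal way to pick up that fix (the steps below are the usual Bundler workflow, not taken from the thread):
# Gemfile: keep the gem unpinned (or raise the pin), then update and reindex
gem "searchkick"
# $ bundle update searchkick
# $ rails console
# > Moment.reindex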