How NOT to create an azurerm_mssql_database_extended_auditing_policy - azure-devops

I'm trying to deploy my infra with Terraform. I have an MSSQL server and database and I'm using azurerm 2.32.
While deploying the MSSQL resources I'm getting the following error:
Error: issuing create/update request for SQL Server "itan-mssql-server" Blob Auditing Policies(Resource Group "itan-west-europe-resource-group"): sql.ExtendedServerBlobAuditingPoliciesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="DataSecurityInvalidUserSuppliedParameter" Message="Invalid parameter 'storageEndpoint'. Value should be a blob storage endpoint (e.g. https://MyAccount.blob.core.windows.net)."
I have already tried:
defining extended_auditing_policy at the database level - failed
defining extended_auditing_policy at the server level - failed
defining azurerm_mssql_database_extended_auditing_policy at the root level - failed
leaving extended_auditing_policy empty - failed
The root-level definition looks like this (copy-pasted from the Terraform documentation and adjusted to my project):
resource "azurerm_mssql_database_extended_auditing_policy" "db-policy" {
database_id = azurerm_mssql_database.itan-mssql-database.id
storage_endpoint = azurerm_storage_account.itan_storage_account.primary_blob_endpoint
storage_account_access_key = azurerm_storage_account.itan_storage_account.primary_access_key
storage_account_access_key_is_secondary = false
retention_in_days = 1
depends_on = [
azurerm_mssql_database.itan-mssql-database,
azurerm_storage_account.itan_storage_account]
}
I'm looking for one of two possible solutions:
disabling auditing entirely (I don't really need it right now)
fixing the error and enabling auditing
Thanks!
Jarek

This is caused by a breaking change in the SQL Extended Auditing Settings API. Please also check this issue in the Terraform provider.
As a workaround you may try calling an ARM template from Terraform; a sketch follows. However, I'm not sure whether under the hood they use the same or a different API.
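A minimal sketch of that route, assuming a template file arm/auditing.json kept next to the Terraform config (the file name, the deployment name and the template's contents are placeholders, not something from the question):

resource "azurerm_template_deployment" "mssql_auditing" {
  name                = "mssql-auditing-deployment"
  resource_group_name = "itan-west-europe-resource-group"
  deployment_mode     = "Incremental"

  # the ARM template itself would carry the
  # Microsoft.Sql/servers/extendedAuditingSettings resource
  template_body = file("${path.module}/arm/auditing.json")
}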

A workaround that seems to be working for me:
I followed the tip by ddarwent from GitHub:
https://github.com/terraform-providers/terraform-provider-azurerm/issues/8915#issuecomment-711029508
So basically it's like this:
terraform apply
Go to terraform.tfstate and delete the "tainted" mssql server
terraform apply
Go to terraform.tfstate and delete the "tainted" mssql database
terraform apply
Looks like everything is up and working (the equivalent CLI steps are sketched below).
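If deleting the "tainted" entries just means clearing the taint flag, terraform untaint does the same thing without hand-editing terraform.tfstate. The resource addresses below are assumed from the names in this question:

terraform apply
# clear the taint on the server instead of editing the state file by hand
terraform untaint azurerm_mssql_server.itan-mssql-server
terraform apply
# clear the taint on the database
terraform untaint azurerm_mssql_database.itan-mssql-database
terraform apply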

Related

Azure Data Factory CICD error: The document creation or update failed because of invalid reference

All, when running a build pipeline in Azure DevOps with an ARM template, the process consistently fails when trying to deploy a dataset or a reference to a dataset, with this error:
ARM Template deployment: Resource Group scope (AzureResourceManagerTemplateDeployment)
BadRequest: The document creation or update failed because of invalid reference 'dataset_1'.
I've tried renaming the dataset and also recreating it to see if that would help.
I then deleted the dataset_1.json file from the repo and still get the same message, so I think it's some reference to this dataset rather than the dataset itself. I've looked through all the other files for references to it, but they all look fine.
Any ideas on how to troubleshoot this?
thanks
It looks like you have created the 'myTestLinkedService' linked service and tested the connection, but haven't published it yet, and you are trying to reference that linked service in the new dataset you are creating with PowerShell.
In order to reference any Data Factory entity from PowerShell, make sure those entities are published first. Try publishing the linked service from the portal first, and then run your PowerShell script to create the new dataset/activity.
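A rough sketch of the kind of PowerShell call being described (resource group, factory name and file path are placeholders); it only succeeds once the linked service the dataset references has been published:

# create/update the dataset from its JSON definition (Az.DataFactory module)
Set-AzDataFactoryV2Dataset -ResourceGroupName "my-rg" `
    -DataFactoryName "my-adf" `
    -Name "dataset_1" `
    -DefinitionFile ".\dataset_1.json"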
I think I found the issue. When I went into the detailed logs I found that, in addition to this error, there was an error message about an invalid SQL connection string, so I thought it might be related, since the dataset in question uses an Azure SQL Database linked service.
I adjusted the connection string and this seems to have solved the issue.
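For context, a rough sketch of the reference chain involved (names and the connection string are placeholders, not the actual dataset_1 definition). The dataset only points at the linked service by name, so a bad connectionString inside the linked service can surface as an "invalid reference" on the dataset:

{
  "name": "AzureSqlLinkedService",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "Server=tcp:<server>.database.windows.net,1433;Database=<db>;User ID=<user>;Password=<password>;Encrypt=True;"
    }
  }
}

{
  "name": "dataset_1",
  "properties": {
    "type": "AzureSqlTable",
    "linkedServiceName": {
      "referenceName": "AzureSqlLinkedService",
      "type": "LinkedServiceReference"
    },
    "typeProperties": { "tableName": "dbo.MyTable" }
  }
}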

RDS OptionGroup not working when creating it via CloudFormation for SQL Server

I am trying to create an RDS option group for RDS SQL Server independently via CloudFormation, but creation fails with the error below. When I create it from the console with the same parameters, it gets created. Any pointers would be very helpful.
SqlServerOptionGroup:
  Type: AWS::RDS::OptionGroup
  Properties:
    EngineName: "sqlserver-ex"
    MajorEngineVersion: "14.0.0"
    OptionGroupDescription: rds-sql-optiongroup
    OptionConfigurations:
      - OptionName: SQLSERVER_BACKUP_RESTORE
Error:
Cannot find major version 14.0.0 for sqlserver-ex (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination
The same option group gets created fine when I create it via the console.
Try "14.00" for MajorEngineVersion.
I also found you need to quote the EngineName and MajorEngineVersion, which you have done.
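With that change, the resource would look like this sketch (unchanged apart from the version string; note that the SQLSERVER_BACKUP_RESTORE option normally also needs an IAM_ROLE_ARN option setting pointing at a role with S3 access, which is left out of scope here):

SqlServerOptionGroup:
  Type: AWS::RDS::OptionGroup
  Properties:
    EngineName: "sqlserver-ex"
    MajorEngineVersion: "14.00"   # major version only, not the full engine version
    OptionGroupDescription: rds-sql-optiongroup
    OptionConfigurations:
      - OptionName: SQLSERVER_BACKUP_RESTORE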

Updating a CloudFormation stack with a Cognito pool claims that we're adding attributes when we're not

Starting on Nov 7, 2018 we started getting the following error when updating our CloudFormation stacks:
Updating user pool schema is not allowed from cloudformation. Use the
AddCustomAttributes API or the AWS Cognito Console to update user pool
schema.
Our CF stacks don't have any changes to the custom attributes of the Cognito pool. They only have changes to the PostConfirmation and CustomMessage triggers, as well as the addition of API Gateway responses.
Does anybody know why we might be seeing this? How can we avoid this error message?
We had the same problem with deployment. For now we are deploying without the CustomMessage trigger and setting the CustomMessage trigger manually after deployment.
We removed the CustomMessage changes from our template and that seemed to do the trick.
Mostly by luck, I've found an answer that allows me to get around this in an automated manner.
How our scripts used to work
First, let me explain how this used to work. I used to have the following set of CloudFormation scripts:
cognitoSetup.template --> <Serverless Framework> --> <cognitoSetup.template updated with triggers>
So we'd set up the Cognito pool, run the Serverless Framework to add the Cognito Lambda functions, and then update the cognitoSetup.template file with the ARNs for the Lambdas exported when the Serverless Framework ran.
The Fix
Now, we include the ARNs for the Lambdas in the cognitoSetup.template. So now cognitoSetup.template looks like this:
"CognitoUserPool": {
"Type": "AWS::Cognito::UserPool"
...
"Properties": {
...
"LambdaConfig": {
"CustomMessage": "arn:aws:lambda:<our aws region>:<our account#>:function:main-<our stage>-onCognitoCustomMessage"
}
}
Note that we're setting this trigger before the Lambda even exists. The trigger just needs an ARN, and it doesn't seem to care that it's not there yet. Then we run sls deploy, which creates the actual Lambda function, and everything works fine.
Now our scripts look like this:
cognitoSetup.template --> <Serverless Framework>
Why does this fix this error? I don't actually know. CloudFormation seems to be fine with this modification but not okay with modifying the same file later in our process. But it works.
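For completeness, a minimal sketch of the Serverless Framework side. The service name "main", the runtime and the handler path are assumptions inferred from the ARN pattern above, not taken from the actual project:

service: main

provider:
  name: aws
  runtime: nodejs12.x

functions:
  # deploys as main-<stage>-onCognitoCustomMessage, the name the ARN above refers to
  onCognitoCustomMessage:
    handler: handlers/cognitoCustomMessage.handler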

Connection to an S3 instance using a service connector

I'm trying to create a service-connector to my s3 instance like this:
cf service-connector 13001 mybucketname.ds31s3.swisscom.com:443
But I get the following error:
Server-Error 403: Check of security groups failed (no access)
I have created my service key according to this documentation.
Connecting to my MongoDB works perfectly using a service connector.
You can access Swisscom's S3 directly without the service connector (see the sketch below).
The error message suggests that your current org and space do not have access to the S3 instance. This is usually the case if there is no app binding for that service in the current space. Please check whether you created your service key in the right org and space.
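A sketch of the direct-access route using the AWS CLI. The endpoint is derived from the hostname in the question and the credential field names are assumptions; use whatever keys your service key actually contains:

AWS_ACCESS_KEY_ID=<accessKey from the service key> \
AWS_SECRET_ACCESS_KEY=<sharedSecret from the service key> \
aws s3 ls s3://mybucketname --endpoint-url https://ds31s3.swisscom.com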
There was a misconfiguration due to security changes. We fixed the issue, so connecting to s3 with the service-connector should now work.

WSO2 API MANAGER clustering Worker-Manager

This is regarding a WSO2 API Manager worker/manager cluster configuration with an external Postgres database. I have used two databases: wso2_carbon for the registry and user management, and wso2_am for storing APIs. The respective XMLs have been configured, and the Postgres scripts have been run to create the database tables. When wso2server.sh is run, the console log shows clustering enabled and the members of the domain. However, on https://: when I try to create APIs, it throws an error in the design phase itself.
ERROR - add:jag org.wso2.carbon.apimgt.api.APIManagementException: Error while checking whether context exists
[2016-12-13 04:32:37,737] ERROR - ApiMgtDAO Error while locating API: admin-hello-v.1.2.3 from the database
java.sql.SQLException: org.postgres.Driver cannot be found by jdbc-pool_7.0.34.wso2v2
As per the error message, the driver class name you have given is org.postgres.Driver, which is not correct. It should be org.postgresql.Driver. Double-check the master-datasources.xml config; a sketch of the relevant entry follows.
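A sketch of what the relevant entry in repository/conf/datasources/master-datasources.xml would look like; the URL, username and password are placeholders for this deployment. The PostgreSQL JDBC driver jar also has to be present under repository/components/lib (if it isn't already) so the pooled datasource can load the class:

<datasource>
    <name>WSO2AM_DB</name>
    <description>Datasource used by the API Manager</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://localhost:5432/wso2_am</url>
            <username>wso2user</username>
            <password>changeme</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>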