API: sqs:CreateQueue always ACCESS DENIED - aws-cloudformation

I'm trying to create an SQS queue with CloudFormation, but I keep getting this error in the console:
API: sqs:CreateQueue Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.
Obviously I'm missing some sort of permission, but this guide didn't really specify how I could resolve it.
Here's the code I made:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyQueue:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-test
      ReceiveMessageWaitTimeSeconds: 20
      RedrivePolicy:
        deadLetterTargetArn:
          Fn::GetAtt:
            - "MyDLQ"
            - "Arn"
        maxReceiveCount: 4
      Tags:
        - Key: "ProjectName"
          Value: "project-x"
  MyDLQ:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-dlq-test
I'm trying to understand this doc, but I'm not sure how I could attach a policy to allow creation of queues. Could someone please give me a full example?

tyron's comment on your question is spot on: check the permissions of the user executing the CloudFormation template. If you're running commands directly, this is usually pretty easy to check. In some cases, though, you may be working with a more complicated, automated environment.
I find the best way to troubleshoot permissions in an automated world is via CloudTrail. After any API call has failed, whether from the CLI, CloudFormation, or another source, you can look up the call in CloudTrail.
In this case, searching for "Event Name" = "CreateQueue" in the time range of the failure will turn up a result with details like the following:
- Source IP Address: this field may say something like cloudformation.amazonaws.com, or the IP of your machine/office. Helpful when you need to filter events based on the source.
- User name: in my case, this was the EC2 instance ID of the agent running the CFN template.
- Access Key ID: for EC2 instances, this is likely a set of temporary access credentials, but for a real user, it will show you what key was used.
- Actual event data: especially helpful for non-permissions errors, the actual event may show you errors in the request itself.
In my case, the specific EC2 instance that ran automation was out of date and needed to be updated to use the correct IAM Role/Instance Profile. CloudTrail helped me track that down.
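If you prefer the CLI to the console, here is a quick sketch of the same lookup (the region and result count are just examples):
# Find recent CreateQueue events, including the identity that made the call.
aws cloudtrail lookup-events \
  --region us-east-1 \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateQueue \
  --max-results 10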

If you are using AWS CodePipeline (where you may be using AWS CodeBuild to run and deploy your CloudFormation stack), remember that your CodeBuild role (created under IAM Roles) must have the correct permissions.
You can identify which role is being used and attach the required policies:
1. Open the CodeBuild project.
2. Go to Build Details > Environment > Service Role.
3. Open the service role (hyperlinked).
4. Add SQS permissions to the role's policies, for example as sketched below.
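A minimal sketch of step 4 from the CLI, assuming a placeholder role name (use the service role shown in your build environment); the AWS-managed AmazonSQSFullAccess policy is broad, so a scoped-down policy is preferable in production:
# Attach the AWS-managed SQS policy to the CodeBuild service role.
aws iam attach-role-policy \
  --role-name codebuild-my-project-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSQSFullAccess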

Related

ARM template with managed private endpoint fails while creating a release in azure devops

I have created a data factory with a pipeline moving data from a storage account to Azure SQL.
The company advised me to use a managed private endpoint to create the connection with Azure SQL.
Scenario:
I have a Dev resource group where my storage account, data factory and SQL server sit, and a SIT resource group where the SIT resources sit. I have created a managed private endpoint in both data factories with the same name, but pointing to different SQL servers.
sql_mpe: /subscriptions/123456789/resourceGroups/rg-dev/providers/Microsoft.Sql/servers/dev-sql-server
sql_mpe: /subscriptions/123456789/resourceGroups/rg-sit/providers/Microsoft.Sql/servers/sit-sql-server
As you can see, the managed private endpoints have the same name but point to different SQL servers depending on the environment.
Now when I publish the dev ADF to Azure Git, it takes the dev managed private endpoint keys as parameters, as follows:
-sql_mpe_properties_privateLinkResourceId "/subscriptions/123456789/resourceGroups/rg-sit/providers/Microsoft.Sql/servers/sit-sql-server"
-sql_mpe_properties_groupId "sqlServer"
-sql_mpe_properties_ipAddress {}
-sql_mpe_properties_resourceId "/subscriptions/987654321/resourceGroups/vnet-45645632-UKSouth-567-rg/providers/Microsoft.Network/privateEndpoints/sit-sql-server.sql_mpe"
For some weird reason, the resource group and subscription are correct in privateLinkResourceId, but in resourceId they are weird values. I don't know where they come from, hence I can't comment on them.
Now when I run my release pipeline, I get the following error:
2022-03-14T15:33:41.5334804Z ##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.
2022-03-14T15:33:41.5366078Z ##[debug]Processed: ##vso[task.issue type=error;]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.
2022-03-14T15:33:41.5373551Z ##[error]Details:
2022-03-14T15:33:41.5374630Z ##[debug]Processed: ##vso[task.issue type=error;]Details:
2022-03-14T15:33:41.5376732Z ##[error]ManagedPrivateEndpointInvalidPayload: Managed private endpoint 'sql_mpe' is invalid.
The error is very generic, so I went through the docs to understand it. I found the reason below in the Azure doc Best practices for CI/CD:
If a private endpoint already exists in a factory and you try to deploy an ARM template that contains a private endpoint with the same name but with modified properties, the deployment will fail.
So I learned that if you deploy a managed private endpoint with the same name but modified properties (like my SIT endpoint pointing to the SIT server), the deployment will fail.
So now I know why the pipeline is failing.
I have to fix this issue for a successful release.
Below are the possible options I could go with, but I don't know how; this is where I need some help/assistance:
1. Understand the resourceId value and change it for SIT (as mentioned, some weird values end up there; in the template I am just overriding the 'dev' part to 'sit', and I am not changing the vnet resource group and other values).
2. Remove the managed private endpoint parameters from the template before publishing to Azure Git, or remove them before creating a release. If I release them in the pipeline, the error occurs.
Need some insight and help here.

Create Service Connection from Azure DevOps to GCP Artifact Registry

Are there any tutorials for creating a service account for GCP Artifact Registry?
I have tried this: https://cloud.google.com/architecture/creating-cicd-pipeline-vsts-kubernetes-engine
...but it is using GCP Container Registry.
I do not imagine it should be much different, but I keep on getting this:
##[error]denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource
BUT the service account I created has the permissions needed (albeit those roles are in beta). I even gave it a very elevated role and am still getting this.
When I created the service connection, I followed these steps from the documentation linked above:
- Docker Registry: https://gcr.io/PROJECT_ID, replacing PROJECT_ID with the name of your project (for example, https://gcr.io/azure-pipelines-test-project-12345).
- Docker ID: _json_key
- Password: Paste the content of azure-pipelines-publisher-oneline.json.
- Service connection name: gcr-tutorial
Any advice on this would be appreciated.
I was having the same issue. As @Mexicoder points out, the service account needs the ArtifactRegistryWriter role. In addition, the following wasn't clear to me initially:
- The service connection needs to be in the format https://REGION-docker.pkg.dev/PROJECT-ID (where REGION is something like us-west2).
- The repository parameter to the Docker task (Docker@2) needs to be in the form PROJECT-ID/REPO/IMAGE.
I was able to get it working with the documentation for Container Registry.
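As a quick sanity check before touching the service connection, you can try the same credentials locally; this assumes the key file from the tutorial and placeholder region/project values:
# Log in to Artifact Registry directly with the JSON key; if this fails with
# the same permission error, the problem is IAM, not the pipeline.
cat azure-pipelines-publisher-oneline.json | \
  docker login -u _json_key --password-stdin https://us-west2-docker.pkg.dev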
My issue was with the repository name.
ALSO, the main difference when using Artifact Registry is the permission you need to give the IAM service account: use ArtifactRegistryWriter. StorageAdmin will be useless.
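For reference, a minimal sketch of granting that role from the CLI, with placeholder project and service-account names:
# Grant the Artifact Registry Writer role to the publishing service account.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:azure-pipelines-publisher@my-project.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"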

How to use Azure Managed Identity in Azure Function to access Service Bus with a trigger?

I have created a ServiceBus namespace in Azure, along with a topic and a subscription. I also have a simple Azure version 1 function that triggers on messages received on a topic in the ServiceBus, like this:
[FunctionName("MyServiceBusTriggerFunction")]
public static void Run([ServiceBusTrigger("myTopic", "mySubscription", Connection = "MyConnection")]string mySbMsg, TraceWriter log)
{
    log.Info($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
}
The function triggers nicely for the topics in the ServiceBus when I define the connection string in functions Application Settings by using Shared Access Policy for topic, like this:
Endpoint=sb://MyNamespace.servicebus.windows.net/;SharedAccessKeyName=mypolicy;SharedAccessKey=UZ...E0=
Now, instead of Shared Access Keys, I would like to use Managed Service Identity (MSI) for accessing the ServiceBus. According to this (https://learn.microsoft.com/en-us/azure/active-directory/managed-service-identity/services-support-msi) it should be possible, unless I have misunderstood something. I haven't managed to get it working though.
What I tried was to:
- set Managed Service Identity to "On" for my function in the Azure portal
- give the Owner role to the function in the ServiceBus Access Control section in the Azure portal
- set the connection string for MyFunction like this: Endpoint=sb://MyNamespace.servicebus.windows.net/
The function is not triggering in this set-up, so what am I missing or what am I doing wrong?
I'd be grateful for any advice to help me get further. Thanks.
Update for Microsoft.Azure.WebJobs.Extensions.ServiceBus version 5.x
There are now official docs for the latest version of the package here.
{
  "Values": {
    "<connection_name>__fullyQualifiedNamespace": "<service_bus_namespace>.servicebus.windows.net"
  }
}
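For completeness, a rough sketch of wiring this up from the CLI, assuming the trigger uses Connection = "MyConnection" as in the question; the angle-bracket values are placeholders:
# Grant the function app's system-assigned identity receive rights on the
# namespace ("Azure Service Bus Data Receiver" is the built-in role for this).
az role assignment create \
  --role "Azure Service Bus Data Receiver" \
  --assignee <function-app-principal-id> \
  --scope <service-bus-namespace-resource-id>
# With Connection = "MyConnection", the setting name becomes
# MyConnection__fullyQualifiedNamespace.
az functionapp config appsettings set \
  --name <function-app> --resource-group <rg> \
  --settings "MyConnection__fullyQualifiedNamespace=MyNamespace.servicebus.windows.net"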
Previous answer:
This actually seems to be possible now; at least it worked just fine for me. You need to use this connection string:
Endpoint=sb://service-bus-namespace-name.servicebus.windows.net/;Authentication=ManagedIdentity
I have not actually found any documentation about this on Microsoft's site, only in a blog here.
Microsoft does have documentation, though, on the roles you can use and how to limit their scope here. Example:
az role assignment create \
  --role $service_bus_role \
  --assignee $assignee_id \
  --scope /subscriptions/$subscription_id/resourceGroups/$resource_group/providers/Microsoft.ServiceBus/namespaces/$service_bus_namespace/topics/$service_bus_topic/subscriptions/$service_bus_subscription
what am I missing or what am I doing wrong?
You may be mixing up MSI and Shared Access Policy. They use different providers to access Azure Service Bus; you authenticate with either a connection string or MSI, not both.
When you use Managed Service Identity (MSI) to authenticate, you need to create a token provider for the managed service identity with the following code:
TokenProvider.CreateManagedServiceIdentityTokenProvider(ServiceAudience.ServiceBusAudience)
This TokenProvider's implementation uses the AzureServiceTokenProvider found in the Microsoft.Azure.Services.AppAuthentication library. AzureServiceTokenProvider tries a number of different methods, depending on the environment, to get an access token, and the client is then initialized to operate on the Service Bus.
For more details, you could refer to this article.
When you use a Service Bus connection string, access goes through the Shared Access Signature (SAS) token provider, so you can operate directly.
Agreed that an Azure Function cannot access a resource like ASB directly. However, one still does not need to put the secret (in this case the SharedAccessKey in the connection string) into the function's settings directly.
Azure Functions can work with Azure Key Vault. You can store the connection string with its sensitive information as a secret in Key Vault, grant the function's system-assigned identity access to Key Vault, and then specify the value for the setting in the portal as
@Microsoft.KeyVault(SecretUri={theSecretUri})
Details on how to achieve the above are given in the following blog:
https://medium.com/statuscode/getting-key-vault-secrets-in-azure-functions-37620fd20a0b
This still avoids specifying the connection string directly in Azure Functions and provides a single point of access via the vault, which can be disabled in case of a security breach.
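A rough sketch of that setup from the CLI, with placeholder vault/app names (the function's identity also needs to be granted access to read secrets from the vault):
# Store the full connection string as a Key Vault secret.
az keyvault secret set --vault-name my-vault --name SbConnection \
  --value "Endpoint=sb://MyNamespace.servicebus.windows.net/;SharedAccessKeyName=mypolicy;SharedAccessKey=..."
# Reference the secret from the function app's settings.
az functionapp config appsettings set --name my-func --resource-group my-rg \
  --settings "MyConnection=@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/SbConnection/)"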

How to get a Cloud Foundry service's whitelisted IPs

We have a GUI that manages Cloud Foundry, and there's a link that shows an instance with an IP whitelist for external dependencies (quite large). How can I easily export this config as JSON and recreate it in a different Foundry environment?
It's not entirely clear what is being presented in your GUI but it sounds like it might be the application security groups. You might try running cf security-groups or cf security-group <name> to see if this information matches up with what's displayed in the GUI.
If that's what you want, you can use the following API calls to obtain the JSON data & recreate it in another environment.
1.) List all the security groups: http://apidocs.cloudfoundry.org/1.40.0/security_groups/list_all_security_groups.html
2.) List security groups applied to all applications: http://apidocs.cloudfoundry.org/1.40.0/security_group_running_defaults/return_the_security_groups_used_for_running_apps.html
3.) List security groups applied to all staging containers: http://apidocs.cloudfoundry.org/1.40.0/security_group_staging_defaults/return_the_security_groups_used_for_staging.html
4.) Retrieve a particular security group: http://apidocs.cloudfoundry.org/1.40.0/security_groups/retrieve_a_particular_security_group.html
And you can find more details about the API calls here: http://apidocs.cloudfoundry.org/
You can also run the cf cli commands above with the -v flag to show the HTTP requests being made by the CLI to obtain the information that's displayed.
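To make that concrete, a rough sketch (group, org, space, and file names are placeholders; rules.json holds just the rules array for the group you want to copy):
# Dump all security groups as raw JSON from the first environment.
cf curl /v2/security_groups > security_groups.json
# In the target environment, recreate the group from its rules and bind it.
cf create-security-group my-asg rules.json
cf bind-security-group my-asg my-org my-space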
Hope that helps!

Creating a bucket using Google Cloud Platform Deployment Manager Template

I'm trying to create a bucket using GCP Deployment Manager. I already went through the QuickStart guide and was able to create a compute.v1.instance. But when I try to create a bucket in Google Cloud Storage, I can't get anything other than 403 Forbidden.
This is what my template file looks like.
resources:
- type: storage.v1.bucket
  name: test-bucket
  properties:
    project: my-project
    name: test-bucket-name
This is what I'm calling
gcloud deployment-manager deployments create deploy-test --config deploy.yml
And this is what I'm receiving back
Waiting for create operation-1474738357403-53d4447edfd79-eed73ce7-cabd72fd...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation operation-1474738357403-53d4447edfd79-eed73ce7-cabd72fd: <ErrorValue
errors: [<ErrorsValueListEntry
code: u'RESOURCE_ERROR'
location: u'deploy-test/test-bucket'
message: u'Unexpected response from resource of type storage.v1.bucket: 403 {"code":403,"errors":[{"domain":"global","message":"Forbidden","reason":"forbidden"}],"message":"Forbidden","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/storage/v1/b/test-bucket"}'>]>
I have credentials set up, and I even created an account owner set of credentials (which can access everything), and I'm still getting this response.
Any ideas or good places to look? Is it my config or do I need to pass additional credentials in my request?
I'm coming from an AWS background, still finding my way around GCP.
Thanks
Bucket names on Google Cloud Platform need to be globally unique.
If you try to create a bucket with a name that is already used by somebody else (in another project), you will receive an error message. I would test by creating a new bucket with another name.
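One quick way to check whether a name is taken, assuming the Cloud SDK's gsutil is installed:
# A 403/AccessDenied here means the bucket exists in someone else's project,
# which matches the Deployment Manager error above; a 404 means the name is free.
gsutil ls -b gs://test-bucket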