asm fetching username: authorization data is malformed, empty field - amazon-ecs

Here are the steps I followed, and I got the error every time:
Created a Docker registry in JFrog Artifactory and added a container image there.
Created an access token in JFrog Artifactory.
Created a secret in AWS Secrets Manager (username: / password:) and added secretsmanager:* to the task execution role.
Created an ECS task definition with the private repository enabled and the Secrets Manager ARN added.
Ran the task and waited for the NGINX container to start... and finally got the error below:
"asm fetching username: authorization data is malformed, empty field"
Not sure what I am doing wrong here. Here is the doc AWS provides on this, but it is still not working:
https://aws.amazon.com/blogs/compute/introducing-private-registry-authentication-support-for-aws-fargate/
Any help appreciated!

This was my mistake: I had added a space after the "username" key in AWS Secrets Manager. Once I deleted that space, it worked immediately.

The username and password keys must be spelled exactly in the key-value pair.
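The failure mode above can be reproduced locally: the ECS agent looks up the literal keys `username` and `password` in the secret's JSON, so a stray space in a key name makes the lookup come back empty. A minimal sketch (the user and token values are placeholders):

```python
import json

# Illustrative secret bodies; "myuser"/"mytoken" are placeholder values.
# The first has a trailing space in the "username" key, as in the mistake above.
bad_secret = json.loads('{"username ": "myuser", "password": "mytoken"}')
good_secret = json.loads('{"username": "myuser", "password": "mytoken"}')

# A lookup of the literal key "username" finds nothing in the malformed
# secret, which is what surfaces as "empty field" in the ECS error.
print(bad_secret.get("username"))   # None
print(good_secret.get("username"))  # myuser
```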

Related

Build Pipeline is failing with key vault authorization error

The build pipeline is failing with the following error. Please suggest a fix.
I have already set up the Key Vault.
Yes, you have set up the Key Vault, but the service connection needed to access it seems to be missing.
If you have created the service connection too, then you just need to authorize it (a one-time activity); try clicking the Authorize resources button (bottom right in the screenshot).
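If the pipeline is YAML-based, the Key Vault step references the service connection by name, and that connection is the piece that needs authorizing. A sketch, with placeholder connection and vault names:

```yaml
steps:
  - task: AzureKeyVault@2
    inputs:
      # 'my-keyvault-connection' is a placeholder for the Azure Resource Manager
      # service connection that must exist and be authorized for this pipeline.
      azureSubscription: 'my-keyvault-connection'
      KeyVaultName: 'my-key-vault'   # placeholder vault name
      SecretsFilter: '*'             # fetch all secrets from the vault
      RunAsPreJob: true              # make secrets available to later steps
```

The authorization error typically points at `azureSubscription`: either the connection does not exist or the pipeline has not been granted access to it.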

Create Service Connection from Azure DevOps to GCP Artifact Registry

Are there any tutorials for creating a service account for GCP Artifact Registry?
I have tried this: https://cloud.google.com/architecture/creating-cicd-pipeline-vsts-kubernetes-engine
... but it uses GCP Container Registry.
I do not imagine it should be much different, but I keep getting this:
##[error]denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource
BUT the service account I created has the permissions needed (albeit those roles are in beta). I even gave it a very elevated role and am still getting this.
When I created the service connection, I followed these steps from the documentation linked above:
Docker Registry: https://gcr.io/PROJECT_ID, replacing PROJECT_ID with the name of your project (for example, https://gcr.io/azure-pipelines-test-project-12345).
Docker ID: _json_key
Password: Paste the content of azure-pipelines-publisher-oneline.json.
Service connection name: gcr-tutorial
Any advice on this would be appreciated.
I was having the same issue. As Mexicoder points out, the service account needs the ArtifactRegistryWriter permission. In addition, the following wasn't clear to me initially:
The service connection needs to be in the format: https://REGION-docker.pkg.dev/PROJECT-ID (where region is something like 'us-west2')
The repository parameter to the Docker task (Docker#2) needs to be in the form: PROJECT-ID/REPO/IMAGE
I was able to get it working with the documentation for Container Registry.
My issue was with the repository name.
Also, the main difference when using Artifact Registry is the permission you need to give the IAM service account: use ArtifactRegistryWriter; StorageAdmin will be useless.
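Putting the two format requirements together, a push step might look like the following sketch (the connection, project, repo, and image names are placeholders):

```yaml
steps:
  - task: Docker@2
    inputs:
      command: buildAndPush
      # Service connection registered against the Artifact Registry host,
      # e.g. https://us-west2-docker.pkg.dev/my-project-12345
      containerRegistry: 'artifact-registry-connection'
      # Repository must be PROJECT-ID/REPO/IMAGE, not just the image name.
      repository: 'my-project-12345/my-repo/my-image'
      tags: |
        $(Build.BuildId)
```

Getting the `repository` value wrong (for example, omitting the project ID or repo segment) produces the same permission-denied error as a genuinely missing role, which makes it easy to chase the wrong cause.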

How to delete IBM Cloud database instances that have the same name

I have the following identically named instances, as shown in the image.
The names are as follows:
stage-tas-postgres-service
stage-tas-postgres-service
stage-tas-postgres-service
I tried to delete one from the three-dots menu, but the stage environment is blocked for deletion activity.
I have referred to the link below for deletion: IBM Cloud Deletion DB
We have an IAM identity, through which I tried to delete the instance from a Jenkins job.
The command I ran after successfully logging in as the IAM user is as follows:
stage("Deleting resource") {
    ibmcloud "resource service-instance-delete stage-tas-postgres-service --recursive"
}
The problem is that the job finishes with a success status but does not delete the instance.
I am using only the third one from the list; the other two are unused, as shown in the image and the list above.
Is there any way to delete the DB by CRN or deployment ID?
Thanks in advance.
The error says that you do not have the required permissions to delete the database. You can see and probably use that database instance, but not delete it.
It seems you are not the account owner or someone with administrator privileges. Therefore, someone else needs to delete the service.
For the future, you could set up a service ID with the required permissions. Then use a script that logs in to IBM Cloud with the service ID and deletes the service.
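A sketch of what that could look like in the Jenkins job, assuming a service ID API key stored as a Jenkins credential named `ibmcloud-service-api-key` (the credential ID and region are placeholders):

```groovy
stage("Deleting resource") {
    withCredentials([string(credentialsId: 'ibmcloud-service-api-key', variable: 'IC_KEY')]) {
        // Log in with the service ID's API key, then delete the instance.
        // -f skips the confirmation prompt; --recursive also deletes service keys.
        sh '''
            ibmcloud login --apikey "$IC_KEY" -r us-south
            ibmcloud resource service-instance-delete stage-tas-postgres-service -f --recursive
        '''
    }
}
```

When several instances share a name, `ibmcloud resource service-instance-delete` also accepts the instance ID instead of the name, which lets you target exactly one of them.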

Github: Failed to add secret. Please try again

I just created an IAM user in AWS and now want to add the access key ID and the secret access key to my forked GitHub project for use in GitHub Actions.
I used the same name as defined in my workflow YAML for GitHub Actions and entered the text copied from IAM, and I get this error with no further detail:
Failed to add secret. Please try again.
Do you know what may cause this?
I had the same issue on Safari. The console showed JS errors:
"Unrecognized Content-Security-Policy directive 'worker-src'"
...
Setting the secret through Chrome worked, though.

API: sqs:CreateQueue always ACCESS DENIED

I'm trying to create an SQS queue with CloudFormation, but I keep getting this error in the console:
API: sqs:CreateQueue Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.
Obviously I'm missing some sort of permission. This guide didn't really specify how I could resolve it.
Here's the code I made:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyQueue:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-test
      ReceiveMessageWaitTimeSeconds: 20
      RedrivePolicy:
        deadLetterTargetArn:
          Fn::GetAtt:
            - "MyDLQ"
            - "Arn"
        maxReceiveCount: 4
      Tags:
        - Key: "ProjectName"
          Value: "project-x"
  MyDLQ:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-dlq-test
I'm trying to understand this doc, but I'm not sure how I could attach a policy to allow creating queues. Could someone please give me a full example?
tyron's comment on your question is spot on. Check permissions of the user executing the CloudFormation. If you're running commands directly, this is usually pretty easy to check. In some cases, you may be working with a more complicated environment with automation.
I find the best way to troubleshoot permissions in an automated world is via CloudTrail. After any API call has failed, whether from the CLI, CloudFormation, or another source, you can look up the call in CloudTrail.
In this case, searching for "Event Name" = "CreateQueue" in the time range of the failure will turn up a result with details like the following:
Source IP Address; this field may say something like cloudformation.amazonaws.com, or the IP of your machine/office. Helpful when you need to filter events based on the source.
User name; In my case, this was the EC2 instance ID of the agent running the CFN template.
Access Key ID; For EC2 instances, this is likely a set of temporary access credentials, but for a real user, it will show you what key was used.
Actual event data; Especially helpful for non-permissions errors, the actual event may show you errors in the request itself.
In my case, the specific EC2 instance that ran automation was out of date and needed to be updated to use the correct IAM Role/Instance Profile. CloudTrail helped me track that down.
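The same lookup can be done from the CLI; a sketch, with the region and time window as placeholders to adjust to when the stack failed:

```shell
# Look up recent CreateQueue calls recorded by CloudTrail.
aws cloudtrail lookup-events \
    --region us-east-1 \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateQueue \
    --start-time 2023-01-01T00:00:00Z \
    --max-results 5
```

Each returned event includes the user identity and access key that made the call, which tells you exactly which principal needs the missing permission.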
If you are using AWS CodePipeline (where you may be using AWS CodeBuild to run & deploy your CloudFormation stack), remember your CodeBuild role (created under IAM Roles) must have the correct permissions.
You can identify which role is being used and attach the required policies:
Open CodeBuild Project
Go to Build Details > Environment > Service Role
Open Service Role (hyperlinked)
Add SQS to role policies
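As an example of that last step, an identity-based policy granting the queue operations this template needs might look like the following sketch (the resource ARN pattern is illustrative; scope it to your account ID and queue-naming convention):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:CreateQueue",
        "sqs:GetQueueAttributes",
        "sqs:SetQueueAttributes",
        "sqs:TagQueue",
        "sqs:DeleteQueue"
      ],
      "Resource": "arn:aws:sqs:us-east-1:*:sqs-*"
    }
  ]
}
```

`GetQueueAttributes`, `SetQueueAttributes`, and `TagQueue` are included because CloudFormation needs them to read back, update, and tag the queues it creates, not just `CreateQueue`.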