Not able to associate an Elastic IP address with my AWS ECS instance

I have created an AWS ECS instance in the ca-central region. It works with a dynamic public IP, which changes every time I update the service. Everything is good so far.
As I need a static public IP, I created an Elastic IP in the same region and tried to associate it with the ECS instance, using these settings:
Resource Type: Network Interface
Reassociation: Allow this Elastic IP address to be reassociated (checked)
When I try this, it throws an error like this:
Elastic IP address could not be associated.
Elastic IP address nn.nn.nn.nn: You do not have permission to access the specified resource.

It seems the EIP you are trying to associate with the ECS container instance is already associated with another resource (e.g. a NAT Gateway?). Make sure the EIP is not currently associated with any other resource, then try again.
Also confirm the user performing these actions has the following permission:
"ec2:AssociateAddress"

To grant the various EC2 Elastic IP permissions needed in the AWS console, you can follow the instructions at the link below.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-policies-ec2-console.html#ex-eip
I wanted to make sure my IAM user had all the permissions necessary to view, allocate, associate, and release Elastic IPs, so I added permissions through IAM to the specific IAM group by:
Opening the Permissions tab and selecting Add permissions -> Create Inline Policy
After naming the policy, I added the following JSON in the JSON tab:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeAddresses",
        "ec2:AllocateAddress",
        "ec2:DescribeInstances",
        "ec2:AssociateAddress",
        "ec2:ReleaseAddress",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeCoipPools",
        "ec2:DescribePublicIpv4Pools"
      ],
      "Resource": "*"
    }
  ]
}
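With that policy in place, the association can also be done from the AWS CLI. This is only a sketch; the allocation ID and network interface ID are placeholders you would look up first (the ENI is the one attached to the container instance):
aws ec2 associate-address \
  --allocation-id eipalloc-0123456789abcdef0 \
  --network-interface-id eni-0123456789abcdef0 \
  --allow-reassociation \
  --region ca-central-1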

Related

How to Manage IBM Cloud Key-Protect Instance from CLI when Private Network Only Policy is Applied?

While doing some testing of the IBM Cloud Security and Compliance items, specifically the CIS Benchmarks for Best Practices, one item I was non-compliant on was in Key Protect, for the goal "Check whether Key Protect is accessible only by using private endpoints".
My Key Protect instance was indeed set to "Public and Private", so I changed it to Private. This change now requires me to manage my Key Protect instance from the CLI.
When I try to even look at my Key Protect instance policy from the CLI, I receive the following error:
ibmcloud kp instance -i my_instance_id policies
Retrieving policy details for instance: my_instance_id...
Error while getting instance policy: kp.Error: correlation_id='cc54f61d-4424-4c72-91aa-d2f6bc20be68', msg='Unauthorized: The user does not have access to the specified resource'
FAILED
Unauthorized: The user does not have access to the specified resource
Correlation-ID:cc54f61d-4424-4c72-91aa-d2f6bc20be68
I'm confused - I am running the CLI logged in as the tenant admin, with an access policy of All resources in account (including future IAM enabled services).
What am I doing wrong here?
Private endpoints are only accessible from within IBM Cloud. If you connect from the public internet, access should be blocked.
There are multiple ways to work with such a policy in place. One is to deploy (a VPC with) a virtual machine on a private network, then connect to it with a VPN or Direct Link. That way, your resources are not accessible from the public internet, only through private connectivity. You could continue to use the IBM Cloud CLI, but set it to use private endpoints.
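As a minimal sketch, assuming you are already on a host with private connectivity: point the CLI at the private API endpoint when logging in, then run the kp plugin as usual. The endpoint, API key file, and region below are examples; check the docs for the Key Protect private service endpoint in your region.
ibmcloud login -a https://private.cloud.ibm.com --apikey @apikey_file -r us-south
ibmcloud kp instance -i my_instance_id policies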

Service account not assigned to CloudSQL instance

I need to have a Cloud SQL instance created with a particular service account. I am trying the API call instances.insert:
POST https://www.googleapis.com/sql/v1beta4/projects/{project}/instances
{
  "serviceAccountEmailAddress": "<my account>@managed-gcp.iam.gserviceaccount.com",
  "name": "pvtest20200611-3",
  "settings": {
    "tier": "db-n1-standard-1"
  },
  "databaseVersion": "MYSQL_5_7"
}
The instance is created, but it has a generated service account (e.g. p754990076948-kf1bsf@gcp-sa-cloud-sql.iam.gserviceaccount.com) instead of mine.
For my SA, I have the Storage Admin / Storage Object Admin roles assigned (this is what I would need newly created instances to always have). I also added the Cloud SQL Admin role. I thought it was a role problem, so I even tried the Project Editor role, but this didn't work.
I have tried MySQL and Postgres db types.
Would you know why my account is not picked up, and why Cloud SQL always assigns its own?
What are the requirements/setup for a custom SA to work with a Cloud SQL instance?
When you create an instance in Cloud SQL, it will use the default service account during creation, so you won't be able to set a custom one at that point.
It's possible, however, to grant access and permissions to a Service Account after the creation. As explained in the official documentation, Granting roles to a service account for specific resources, you can provide specific permissions to your Service Account. You can try the gcloud command as follows:
gcloud projects add-iam-policy-binding my-project-123 \
  --member serviceAccount:my-sa-123@my-project-123.iam.gserviceaccount.com \
  --role roles/editor
Besides that, you can also check all your available Service Accounts in the console (IAM & Admin -> Service Accounts), to verify that your custom one is there, and even add the permissions via the UI if you prefer.
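For reference, the same check can be done from the CLI (the project ID here is a placeholder):
gcloud iam service-accounts list --project my-project-123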
Let me know if the information helped you!

Restrict gcloud service account to specific bucket

I have 2 buckets, prod and staging, and I have a service account. I want to restrict this account so it only has access to the staging bucket. I saw on https://cloud.google.com/iam/docs/conditions-overview that this should be possible. I created a policy.json like this:
{
  "bindings": [
    {
      "role": "roles/storage.objectCreator",
      "members": "serviceAccount:staging-service-account@lalala-co.iam.gserviceaccount.com",
      "condition": {
        "title": "staging bucket only",
        "expression": "resource.name.startsWith(\"projects/_/buckets/uploads-staging\")"
      }
    }
  ]
}
But when I run gcloud projects set-iam-policy lalala policy.json, I get:
The specified policy does not contain an "etag" field identifying a
specific version to replace. Changing a policy without an "etag" can
overwrite concurrent policy changes.
Replace existing policy (Y/n)?
ERROR: (gcloud.projects.set-iam-policy) INVALID_ARGUMENT: Can't set conditional policy on policy type: resourcemanager_projects and id: /lalala
I feel like I misunderstood how roles, policies and service-accounts are related. But in any case: is it possible to restrict a service account in that way?
Following the comments, I was able to solve my problem. Apparently bucket permissions are somewhat special, but I was able to set a policy on the bucket itself that allows access for my service account, using gsutil:
gsutil iam ch serviceAccount:staging-service-account@lalala.iam.gserviceaccount.com:objectCreator gs://lalala-uploads-staging
After running this, access works as expected. I found it a little confusing that this is not reflected in the service account's policy:
% gcloud iam service-accounts get-iam-policy staging-service-account@lalala.iam.gserviceaccount.com
etag: ACAB
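That's expected: the gsutil command updates the bucket's IAM policy, not the service account's own policy, so the new binding shows up when you query the bucket instead:
gsutil iam get gs://lalala-uploads-staging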
Thanks everyone

PUT Object to AWS S3 via HTTP through VPC Endpoint with proper ACL?

I am using an HTTPS client to PUT an object to Amazon S3 from an EC2 instance within a VPC that has an S3 VPC Endpoint configured. The target Bucket has a Bucket Policy that only allows access from specific VPCs, so authentication via IAM is impossible; I have to use HTTPS GET and PUT to read and write Objects.
This works fine as described, but I'm having trouble with the ACL that gets applied to the Object when I PUT it to the Bucket. I've played with setting a Canned ACL using HTTP headers like the following, but neither results in the correct behavior:
x-amz-acl: private
If I set this header, the Object is private, but it can only be read by the root account, so this is no good. Others need to be able to access this Object via HTTPS.
x-amz-acl: bucket-owner-full-control
I thought this Canned ACL would do the trick; however, it resulted in unexpected behavior, namely that the Object became world-readable! I'm also not sure how the Owner of the Object was decided, since it was created via HTTPS; in the console the owner is listed as a seemingly random value. This is the documentation's description:
Both the object owner and the bucket owner get FULL_CONTROL over the
object. If you specify this canned ACL when creating a bucket, Amazon
S3 ignores it.
This is totally baffling me, because according to the Bucket Policy, only network resources of approved VPCs should even be able to list the Object, let alone read it! Perhaps it has to do with the union of the ACL and the Bucket Policy and I just don't see something.
Either way, maybe I'm going about this all wrong anyway. How can I PUT an object to S3 via HTTPS and set the permissions on that object to match the Bucket Policy, or otherwise make the Bucket Policy authoritative over the ACL?
Here is the Bucket Policy for good measure:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:GetObjectTorrent",
        "s3:GetObjectVersion",
        "s3:GetObjectVersionTagging",
        "s3:GetObjectVersionTorrent",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceVpc": "vpc-12345678"
        }
      }
    }
  ]
}
S3 ACLs and bucket policies are evaluated together: access is granted if either one allows the request, and refused only if nothing allows it or something explicitly denies it.
Your bucket policy only specifies an ALLOW for the specified VPC. No one else is granted ALLOW access by the policy, but that is NOT the same as denying access.
This means that your bucket or object ACL is what is granting access.
In the S3 console, double check who the file owner is after the PUT.
Double check the ACL for the bucket. What rights have you granted at the bucket level?
Double check the credentials you are using for the PUT operation. Unless you have granted public write access, or the PUT is being ALLOWED by the bucket policy, the PUT must be signed. That signature determines the permissions for the PUT operation and who owns the file afterwards; this is determined by the access key used for the signature.
Your x-amz-acl should contain bucket-owner-full-control.
[EDIT after numerous comments below]
The problem I see is that you are approaching security the wrong way in your example. I would not rely on the bucket policy. Instead, I would create an IAM role and assign that role to the EC2 instances that write to the bucket, so the PUTs are signed with the IAM role's credentials. This preserves the ownership of the objects. You can then set the ACL to bucket-owner-full-control or public-read (or any supported canned ACL you want).
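If you do keep the unsigned PUTs through the VPC endpoint, one common pattern (a sketch only, using the same placeholder bucket name) is to add a Deny statement to the bucket policy that rejects any PutObject that does not carry the bucket-owner-full-control canned ACL, so objects cannot be written with a broader ACL:
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-acl": "bucket-owner-full-control"
    }
  }
}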

Google Storage access based on IP Address

Is there a way to give access to a Google Cloud Storage bucket based on the IP address a request is coming from?
On Amazon S3, you can just set this in the access policy, like this:
"Condition" : {
"IpAddress" : {
"aws:SourceIp" : ["192.168.176.0/24","192.168.143.0/24"]
}
}
I do not want to use a signed url.
The updated answers on this page are only partially correct and should not be recommended for the use case of access control to Cloud Storage Objects.
Access Context Manager (ACM) defines rules to allow access (e.g. an IP address).
VPC Service Controls create an "island" around a project, and ACM rules can be attached. These rules are "ingress" rules, not "egress" rules, meaning anyone at that IP address can get into all resources in the project, given the correct IAM permissions.
The ACM rule specifying an IP address will allow that IP address to access all Cloud Storage Objects and all other protected resources owned by that project. This is usually not the intended result. You cannot apply an IP address rule to an object, only to all objects in a project. VPC Service Controls are designed to prevent data from getting out of a project and are NOT designed to allow untrusted anonymous users access to a project's resources.
UPDATE: This is now possible using VPC Service Controls
No, this is not currently possible.
There's currently a feature request to restrict access to Google Cloud Storage buckets by IP address.
The VPC Service Controls [1] allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets (and some others) to constrain data within a VPC and help mitigate data exfiltration risks.
[1] https://cloud.google.com/vpc-service-controls/
I used VPC Service Controls on behalf of a client recently to attempt to accomplish this. You cannot use VPC Service Controls to whitelist an IP address on a single bucket; Jterrace is right, there is no such solution for that today. However, using VPC Service Controls you can draw a service perimeter around the Google Cloud Storage (GCS) service as a whole within a given project, then apply an access level to that perimeter to allow an IP address or range access to the service (and the resources within it). The implication is that any new buckets created within the project will be created within the service perimeter and thus be regulated by the access levels applied to the perimeter, so you'll likely want this to be the sole bucket in the project.
Note that the service perimeter only affects services you specify. It does not protect the project as a whole.
Developer Permissions:
Access Context Manager
VPC Service Controls
Steps to accomplish this:
Use VPC Service Controls to create a service perimeter around the entire Google Cloud Storage service in the project of your choosing
Use Access Context Manager to create access levels for the IP addresses you want to whitelist and the users/groups who will have access to the service
Apply these access levels to the service perimeter created in the previous step (it will take 30 minutes for this change to take effect)
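As a rough sketch of those steps with gcloud (the policy ID, level and perimeter names, project number, and CIDR below are placeholders; the condition spec lives in a small YAML file such as "- ipSubnetworks: [203.0.113.0/24]"):
gcloud access-context-manager levels create allow_office_ip \
  --title "Allow office IP" \
  --basic-level-spec access_level.yaml \
  --policy POLICY_ID
gcloud access-context-manager perimeters create gcs_perimeter \
  --title "GCS perimeter" \
  --resources projects/PROJECT_NUMBER \
  --restricted-services storage.googleapis.com \
  --access-levels allow_office_ip \
  --policy POLICY_ID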
Note: Best practice would be to provide access to the bucket using a service account or users/groups ACL, if that is possible. I know it isn't always so.