Google Cloud SQL (MySQL) instance: authorized network with a domain name - google-cloud-sql

I created an instance and connected to it successfully using its public external IP. Is there a way to specify an authorized network by domain name instead of by IP address? I would like to use a domain name from No-IP, because my public IP changes whenever my router restarts. This is troublesome, because I have to update the authorized network every time my public IP changes.

Currently there is no way to use a domain name instead of an IP address. Note that the Cloud SQL API [1] allows updating the list of authorized networks, and the gcloud command-line tool from the Cloud SDK [2] supports that:
$ gcloud sql instances patch -h
usage: gcloud sql instances patch [optional flags] INSTANCE

Updates the settings of a Cloud SQL instance.

optional flags:
  [...]
  --authorized-networks AUTHORIZED_NETWORKS
                        The list of external networks that are allowed to
                        connect to the instance. Specified in CIDR notation,
                        also known as 'slash' notation
                        (e.g. 192.168.100.0/24).
  [...]

positional arguments:
  INSTANCE              Cloud SQL instance ID.
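So whenever your public IP changes, you can re-authorize the new address with a single command. A minimal sketch, assuming an instance named my-instance and 203.0.113.7 as your new public IP (both are placeholders):

# Replaces the instance's list of authorized networks with this one CIDR
$ gcloud sql instances patch my-instance --authorized-networks 203.0.113.7/32

Note that --authorized-networks replaces the whole list, so include any other ranges you still need.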
[1] https://developers.google.com/cloud-sql/docs/admin-api/v1beta3/instances
[2] https://developers.google.com/cloud-sql/docs/cloud-sdk

Related

How to Manage IBM Cloud Key-Protect Instance from CLI when Private Network Only Policy is Applied?

In doing some testing of the IBM Cloud Security and Compliance items, specifically the CIS Benchmarks for Best Practices, one item I was non-compliant on was in Key Protect, for the goal "Check whether Key Protect is accessible only by using private endpoints".
My Key Protect instance was indeed set to "Public and Private", so I changed it to Private. This change now requires me to manage my Key Protect instance from the CLI.
When I try to even look at my Key Protect instance policy from the CLI, I receive the following error:
ibmcloud kp instance -i my_instance_id policies
Retrieving policy details for instance: my_instance_id...
Error while getting instance policy: kp.Error: correlation_id='cc54f61d-4424-4c72-91aa-d2f6bc20be68', msg='Unauthorized: The user does not have access to the specified resource'
FAILED
Unauthorized: The user does not have access to the specified resource
Correlation-ID:cc54f61d-4424-4c72-91aa-d2f6bc20be68
I'm confused: I am running the CLI logged in as the tenant admin with an access policy of "All resources in account (including future IAM enabled services)".
What am I doing wrong here?
Private endpoints are only accessible from within IBM Cloud. If you connect from the public internet, access should be blocked.
There are multiple ways to work with such a policy in place. One is to deploy (in a VPC) a virtual machine on a private network, then connect to it with a VPN or Direct Link. That way, your resources are not accessible from the public internet, but only through private connectivity. You can continue to use the IBM Cloud CLI, but set it to use private endpoints.
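A minimal sketch of that last step, assuming you are already on a host with private connectivity (the region is a placeholder, and you should verify the private endpoint name for your account):

# Log in against the private API endpoint instead of cloud.ibm.com
ibmcloud login -a private.cloud.ibm.com -r us-south
# Re-run the query from the question, now routed over the private network
ibmcloud kp instance -i my_instance_id policies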

Can I create a Lambda function in CloudFormation that runs an AWS CLI command that updates nameservers for the registered domain?

I registered a domain, example.com, in Route 53. Now I have a CloudFormation stack that creates a hosted zone called example53, an A record for example.com that routes traffic to my ALB, and an ACM resource that should validate the example.com domain.
The problem is that the domain will never be validated while the nameservers are wrong: before the ACM resource is created, I need to update the nameservers of the domain registered in Route 53 to match the name servers in the NS record of my hosted zone.
There is no CloudFormation resource for manipulating a registered domain, but there is an AWS CLI command that can change the name servers for the domain. Is there a way to run that AWS CLI command with a Lambda resource created in CloudFormation?
I run the stack with a Makefile. Could the Makefile run the AWS CLI command and handle conditions such as when the hosted zone is first created?
You can create a custom resource in CloudFormation. The resource takes the form of a Lambda function, and the function uses an AWS SDK, e.g. boto3, to perform whatever actions on Route 53 resources you require.
Since the function is developed by you, not provided by AWS, the custom resource can do whatever you want it to do. It's not limited by regular CloudFormation shortcomings.
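For reference, the operation your function would wrap is UpdateDomainNameservers from the Route 53 Domains API (update_domain_nameservers in boto3). A hedged sketch of the equivalent CLI call, with a placeholder domain and placeholder name servers (note that route53domains is only available in us-east-1):

# Point the registered domain at the hosted zone's name servers
aws route53domains update-domain-nameservers \
    --region us-east-1 \
    --domain-name example.com \
    --nameservers Name=ns-111.awsdns-11.com Name=ns-222.awsdns-22.net

The Lambda function behind the custom resource would read the NS record values from the hosted zone created by the stack, make this call through the SDK, and then signal success or failure back to CloudFormation.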

How to get meaningful alias or name for provisioned service in IBM Cloud?

I am using the CLI command bx service create to provision a new service. Some of the services support resource groups. For them, I noticed that the service itself has a long generic name and is listed under "Services". The name I chose is only associated with an alias, listed under "Cloud Foundry Services".
How can I get those services to use the name I picked?
The trick is to use another IBM Cloud CLI command, part of the set of commands for managing resource groups and their objects:
bx resource service-instance-create
Using the above command, the name is used for the service and there is no alias created. The service is only listed under "Services". Here is a full example:
bx resource service-instance-create ghstatsDDE dynamic-dashboard-embedded lite us-south
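The positional arguments are, in order, the instance name you choose, the service name, the plan, and the location. In newer CLI versions the bx alias was renamed, so the equivalent command (same placeholder values as above) would presumably be:

ibmcloud resource service-instance-create ghstatsDDE dynamic-dashboard-embedded lite us-south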

How to programmatically retrieve the name node hostname?

The IBM Analytics Engine docs have the following instructions for getting the name node hostname:
Go to Manage Cluster in IBM® Cloud and click the nodes tab to get the name node host name. It's the host name of the management-slave1 node type.
How can I programmatically retrieve the name node host name? Can I retrieve it via an API, or can I get it by running a command over SSH? Failing that, can I derive it from one of the host names in the VCAP services?
Maybe this information should be provided to users in the VCAP info?
In the end, I solved this using Ambari. The solution that worked for me is captured here: https://stackoverflow.com/a/47844056/1033422
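For reference, a sketch of the kind of Ambari REST call involved; the host, port, cluster name, and credentials are assumptions based on a typical IBM Analytics Engine cluster:

# Ask Ambari which host runs the NAMENODE component of HDFS
curl -s -u $AMBARI_USER:$AMBARI_PASSWORD \
    "https://$AMBARI_HOST:9443/api/v1/clusters/$CLUSTER_NAME/services/HDFS/components/NAMENODE?fields=host_components/HostRoles/host_name"

The response lists the host components for the NAMENODE component, and each entry's HostRoles/host_name field carries the host name you are after.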

Google Storage access based on IP Address

Is there a way to give access to a Google Cloud Storage bucket based on the IP address a request is coming from?
On Amazon S3, you can just set this in the access policy, like this:
"Condition" : {
"IpAddress" : {
"aws:SourceIp" : ["192.168.176.0/24","192.168.143.0/24"]
}
}
I do not want to use a signed URL.
The updated answers on this page are only partially correct and should not be recommended for the use case of access control to Cloud Storage objects.
Access Context Manager (ACM) defines rules to allow access (e.g. from a given IP address).
VPC Service Controls create an "island" around a project, and ACM rules can be attached to it. These rules are "ingress" rules, not "egress" rules, meaning that anyone at that IP address can get into all resources in the project, given the correct IAM permissions.
An ACM rule specifying an IP address will allow that IP address to access all Cloud Storage objects and all other protected resources owned by that project. This is usually not the intended result. You cannot apply an IP address rule to a single object, only to all objects in a project. VPC Service Controls are designed to prevent data from getting out of a project; they are NOT designed to allow untrusted anonymous users access to a project's resources.
UPDATE: This is now possible using VPC Service Controls
No, this is not currently possible.
There is currently a feature request to restrict Google Cloud Storage buckets by IP address.
The VPC Service Controls [1] allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets (and some others) to constrain data within a VPC and help mitigate data exfiltration risks.
[1] https://cloud.google.com/vpc-service-controls/
I used VPC Service Controls on behalf of a client recently to attempt to accomplish this. You cannot use VPC Service Controls to whitelist an IP address on a single bucket; jterrace is right, there is no such solution for that today.
However, using VPC Service Controls you can draw a service perimeter around the Google Cloud Storage (GCS) service as a whole within a given project, then apply an access level to that perimeter to allow an IP address or IP address range access to the service (and the resources within it). The implication is that any new bucket created within the project will be created inside the service perimeter, and thus be regulated by the access levels applied to the perimeter. So you'll likely want this to be the sole bucket in this project.
Note that the service perimeter only affects services you specify. It does not protect the project as a whole.
Developer Permissions:
Access Context Manager
VPC Service Controls
Steps to accomplish this (a minimal gcloud sketch follows the list):
1. Use VPC Service Controls to create a service perimeter around the entire Google Cloud Storage service in the project of your choosing.
2. Use Access Context Manager to create access levels for the IP addresses you want to whitelist and for the users/groups who should have access to the service.
3. Apply those access levels to the service perimeter created in the previous step (it will take 30 minutes for this change to take effect).
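A hedged sketch of those steps with gcloud; the policy ID, project number, level and perimeter names, and the CIDR range are all placeholders, and the access level conditions live in a small YAML file:

# level.yaml (assumed contents): one condition whitelisting a CIDR range
# - ipSubnetworks:
#   - 203.0.113.0/24

# Step 2: create the access level from the condition file
gcloud access-context-manager levels create allow_office_ip \
    --title "Allow office IP" \
    --basic-level-spec level.yaml \
    --policy POLICY_ID

# Steps 1 and 3: create the perimeter around Cloud Storage and attach the level
gcloud access-context-manager perimeters create storage_perimeter \
    --title "Storage perimeter" \
    --resources projects/PROJECT_NUMBER \
    --restricted-services storage.googleapis.com \
    --access-levels allow_office_ip \
    --policy POLICY_ID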
Note: Best practice would be to provide access to the bucket using a service account or users/groups ACL, if that is possible. I know it isn't always so.