Can I create a lambda function in CloudFormation that runs an aws cli command that updates nameservers for the registered domain? - aws-cloudformation

I registered the domain example.com in Route53. I then created a CloudFormation stack that creates a hosted zone called example53, an A record for example.com that routes traffic to my ALB, and an ACM resource that should validate the example.com domain.
The problem is that ACM will never validate the domain while the nameservers are wrong: before the ACM resource is created, I need to update the nameservers of the domain registered in my Route53 to match the NS record in my hosted zone.
There is no CloudFormation resource for manipulating registered domains, but there is an AWS CLI command that can change the name servers for a domain. Is there a way I can run that AWS CLI command from a Lambda resource created in CloudFormation?
I run the stack with a Makefile; can a Makefile run the AWS CLI command and handle conditions such as the HostedZone being created for the first time?

You can create a custom resource in CloudFormation. The resource takes the form of a Lambda function. The function uses an AWS SDK, e.g. boto3, to perform whatever actions on Route53 resources you require.
Since the function is developed by you, not provided by AWS, the custom resource can do whatever you want it to do. It's not limited by regular CloudFormation shortcomings.
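As a concrete illustration, here is a minimal sketch in Python/boto3 of such a Lambda-backed custom resource. It assumes the template passes the hosted zone ID and the registered domain name as custom-resource properties (the names HostedZoneId and DomainName are illustrative), and that the cfnresponse helper module is available (it is for inline ZipFile Lambda code):

```python
# Sketch: a CloudFormation custom-resource handler that copies the hosted
# zone's delegation-set name servers onto the Route53-registered domain.
# Property names (HostedZoneId, DomainName) are illustrative assumptions.

def build_nameserver_params(nameservers):
    """Convert a list of name-server strings into the shape expected by
    route53domains.update_domain_nameservers."""
    return [{"Name": ns} for ns in nameservers]

def handler(event, context):
    import boto3
    import cfnresponse  # provided for inline (ZipFile) custom-resource code

    try:
        if event["RequestType"] in ("Create", "Update"):
            props = event["ResourceProperties"]
            route53 = boto3.client("route53")
            # The route53domains API is only available in us-east-1.
            domains = boto3.client("route53domains", region_name="us-east-1")

            zone = route53.get_hosted_zone(Id=props["HostedZoneId"])
            nameservers = zone["DelegationSet"]["NameServers"]

            domains.update_domain_nameservers(
                DomainName=props["DomainName"],
                Nameservers=build_nameserver_params(nameservers),
            )
        # Nothing to undo on Delete; report success either way.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```

You would then make the ACM certificate resource depend on the custom resource (DependsOn) so the name servers are updated before validation starts.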

Related

injected db credentials change when I deploy new app version to cloud

I deploy a web app to a local Cloud Foundry environment. As the database service for my DEV environment I chose the Marketplace service google-cloudsql-postgres with the plan postgres-db-f1-micro. Using the Web UI I created an instance named myapp-test-database and referenced it in the CF manifest:
applications:
- name: myapp-test
  services:
  - myapp-test-database
At first, all is fine. I can even redeploy the existing artifact. However, when I build a new version of my app and push it to CF, the injected credentials are updated and the app can no longer access the tables:
PSQLException: ERROR: permission denied for table
The tables are still there, but they're owned by the previous user. They were automatically created by the ORM in the public schema.
While the -OLD application still exists I can retrieve the old username/password from the CF Web UI or $VCAP_SERVICES and drop the tables.
Is this all because of Rolling App Deployments? But then there should be a lot of complaints.
If you are strictly doing a cf push (or restart/restage), the broker isn't involved (Cloud Controller doesn't talk to it), and service credentials won't change.
The only action through cf commands that can modify your credentials is doing an unbind followed by a bind. Many, but not all, service brokers will throw away credentials on unbind and provide new, unique credentials for a bind. This is often desirable so that you can rotate credentials if credentials are compromised.
Where this can be a problem is if you have custom scripts or cf cli plugins to implement rolling deployments. Most tools like this will use two separate application instances, which means you'll have two separate bindings and two separate sets of credentials.
If you must have one set of credentials you can use a service key to work around this. Service keys are like bindings but not associated with an application in CloudFoundry.
The downside of a service key is that it's not automatically exposed to your application through $VCAP_SERVICES the way a binding is. To work around this, you can pass the service key's creds into a user-provided service and then bind that to your application, or you can pass them to your application through other environment variables, like DB_URL.
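If you go the user-provided-service route, the application then reads the credentials back out of $VCAP_SERVICES itself. A minimal sketch in Python using only the standard library; the service name used here is a made-up example:

```python
# Illustrative helper: look up the credentials of a user-provided service
# in the VCAP_SERVICES environment variable that Cloud Foundry injects.
import json
import os

def get_user_provided_credentials(service_name):
    """Return the credentials dict for a named user-provided service,
    or None if no such service is bound."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for service in vcap.get("user-provided", []):
        if service.get("name") == service_name:
            return service.get("credentials")
    return None
```

Because the credentials come from a service key rather than a binding, they stay stable across pushes of new application instances.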
The other option is to switch away from using scripts and cf cli plugins for blue/green deployment and to use the support that is now built into Cloud Foundry. With cf cli version 7+, cf push has a --strategy option which can be set to rolling to perform a rolling deployment. This does not create multiple application instances and so there would only ever exist one service binding and one set of credentials.
Request a static username using the extra bind parameter "username":
cf bind-service my-app-test-CANDIDATE myapp-test-database -c "{\"username\":\"myuser\"}"
With cf7+ it's possible to add parameters to the manifest:
applications:
- name: myapp-test
  services:
  - name: myapp-test-database
    parameters: { "username": "myuser" }
https://docs.cloudfoundry.org/devguide/services/application-binding.html#arbitrary-params-binding
Note: Arbitrary parameters are not supported in app manifests in cf CLI v6.x. Arbitrary parameters are supported in app manifests in cf CLI v7.0 and later.
However, I can't find the new syntax here: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block . The syntax I use comes from another SO question.

What is the difference between an Endpoints service and AppEngine service when they have the same URL?

I have two services in Endpoints in GCP to host two APIs: Service A and Service B.
Service A's host is projectid.appspot.com, and Service B's host is test-dot-projectid.appspot.com.
When I deploy my app to Service A's test service in App Engine using gcloud app deploy, my app.yaml looks like this:
runtime: go
env: flex
service: test
endpoints_api_service:
  name: projectid.appspot.com
  rollout_strategy: managed
handlers:
- url: .*                           # mandatory
  secure: always                    # deprecated in the flex environment
  redirect_http_response_code: 301  # optional
  script: _go_app                   # mandatory
From my understanding, the app has been deployed to Service A's URL projectid.appspot.com, but under the subdomain test, i.e. test-dot-projectid.appspot.com.
However, is this not technically deploying to Service B on a default service, i.e. test-dot-projectid.appspot.com?
Is this not interfering with deploying to Service A with the service test? What is the difference?
My understanding is: if Service A is projectid.appspot.com, only projectid.appspot.com will be routed to A, not test-dot-projectid.appspot.com. So you can safely deploy Service B with test-dot-projectid.appspot.com. But I am not sure. Have you tried it?
When an application has multiple services, the service names must be configured and must be unique.
The default service is not required to have a service name; this is the case when an application has only a single service.
When you have multiple services (Service A, Service B), the application must first be deployed to the default service. After that, each service must have its own app.yaml file.
It is recommended that each service be configured with a unique service name to prevent conflicts. If a unique service name is not configured for a service, it deploys to the default service's URL. In your case, since the default service URL and Service A's service URL are the same, that is causing a conflict.
To deploy to a specific service, you need to specify its service name and its app.yaml file.
For more information, please refer to:
Planning your directory structure and app.yaml naming:
https://cloud.google.com/appengine/docs/flexible/go/configuration-files#directory_structure
How to deploy to multiple or specific service in a multi-service environment:
https://cloud.google.com/appengine/docs/flexible/go/testing-and-deploying-your-app#deploying_multiple_services
App Engine applications are made up of one or more services (previously known as modules). Each service's code is independent: services can run different runtimes, and you deploy versions of each service. The URL is how your application is accessed, either via a user request or an API call to the service.
For more information, please see https://cloud.google.com/appengine/docs/standard/python/microservices-on-app-engine#app_engine_services_as_microservices
Endpoints are part of the Cloud Endpoints Frameworks service offered by Google Cloud to manage, deploy, maintain and secure your API. In App Engine, Endpoints Frameworks is only supported on the Python 2.7 and Java runtime environments. It is a tool that lets you generate a REST API and client libraries for your application.
For more information, please see
https://cloud.google.com/endpoints/docs/frameworks/about-cloud-endpoints-frameworks

Serverless Framework - Get API Gateway URL for use in tests

I'm using the Serverless framework, and I want to be able to reference my API Gateway URL in my acceptance tests.
My test environment is regularly destroyed and then recreated, so hardcoding a URL into the tests is not possible.
I can see there are ways to reference the API Gateway URL as an AWS environment variable, but this doesn't help me get the URL locally for my tests.
I was hoping the CloudFormation output would be referenced in the .serverless package and accessible via JSON, but this doesn't seem to be the case.
Any idea how I can reference the API Gateway URL in my acceptance test files?
NOTE: These tests need to be run on AWS, not using a local server to mimic API Gateway
The serverless-plugin-test-helper plugin can help here. It will generate a YAML file containing all of the outputs of your stack. This includes a couple of standard ones - the S3 bucket that was used (ServerlessDeploymentBucketName) and the base service endpoint (ServiceEndpoint).
If you are using Node and have your tests in the same directory as the stack being tested then there's also a module to read this file. Otherwise, it's just standard YAML and you can use whatever tools are convenient.
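Since the generated file is flat key: value YAML, a test in any language can read it without a full YAML library. A minimal sketch in Python; the file path used here is an assumption, so check where the plugin writes its outputs file in your project:

```python
# Sketch: read the flat "key: value" stack-outputs file generated by
# serverless-plugin-test-helper and pull out the API base URL.
# The path ".serverless/stack-outputs.yml" is an illustrative assumption.

def parse_flat_yaml(text):
    """Parse simple one-level 'key: value' lines into a dict (no nesting)."""
    outputs = {}
    for line in text.splitlines():
        if ":" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition(":")
            outputs[key.strip()] = value.strip()
    return outputs

def api_base_url(path=".serverless/stack-outputs.yml"):
    """Return the ServiceEndpoint output for use in acceptance tests."""
    with open(path) as f:
        return parse_flat_yaml(f.read())["ServiceEndpoint"]
```

Acceptance tests can then call api_base_url() at setup time instead of hardcoding a URL that changes every time the environment is recreated.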
Consider adding an API Gateway custom domain for your API. You can then use a known DNS name in your acceptance tests.
You will need to add an API Gateway base path mapping, an API Gateway domain name, and a Route53 record set to the resources section of your serverless.yml.

How to get meaningful alias or name for provisioned service in IBM Cloud?

I am using the CLI command bx service create to provision a new service. Some of the services support resource groups. For them, I noticed that the service itself has a long generic name and is listed under "Services". The name I chose is only associated with an alias, listed under "Cloud Foundry Services".
How can I get those services to use the name I picked?
The trick is to use another IBM Cloud CLI command. It is part of the set of commands for managing resource groups and their objects:
bx resource service-instance-create
Using the above command, the name is used for the service and there is no alias created. The service is only listed under "Services". Here is a full example:
bx resource service-instance-create ghstatsDDE dynamic-dashboard-embedded lite us-south

Google Storage access based on IP Address

Is there a way to give access to a Google Cloud Storage bucket based on the originating IP address?
On Amazon S3 you can just set this in the access policy, like this:
"Condition": {
  "IpAddress": {
    "aws:SourceIp": ["192.168.176.0/24", "192.168.143.0/24"]
  }
}
I do not want to use a signed url.
The updated answers on this page are only partially correct and should not be recommended for the use case of access control to Cloud Storage Objects.
Access Context Manager (ACM) defines rules to allow access (e.g. an IP address).
VPC Service Controls create an "island" around a project and ACM rules can be attached. These rules are "ingress" rules and not "egress" rules meaning "anyone at that IP can get into all resources in the project with the correct IAM permissions".
The ACM rule specifying an IP address will allow that IP address to access all Cloud Storage Objects and all other protected resources owned by that project. This is usually not the intended result. You cannot apply an IP address rule to an object, only to all objects in a project. VPC Service Controls are designed to prevent data from getting out of a project and are NOT designed to allow untrusted anonymous users access to a project's resources.
UPDATE: This is now possible using VPC Service Controls
No, this is not currently possible.
There's currently a Feature request to restrict google cloud storage bucket by IP Address.
The VPC Service Controls [1] allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets (and some others) to constrain data within a VPC and help mitigate data exfiltration risks.
[1] https://cloud.google.com/vpc-service-controls/
I used VPC Service Controls on behalf of a client recently to attempt to accomplish this. You cannot use VPC Service Controls to whitelist an IP address on a single bucket; Jterrace is right that there is no such solution today.
However, using VPC Service Controls you can draw a service perimeter around the Google Cloud Storage (GCS) service as a whole within a given project, then apply an access level to that perimeter to allow an IP address or IP range access to the service (and the resources within it).
The implication is that any new buckets created within the project will be created within the service perimeter and thus be regulated by the access levels applied to the perimeter, so you'll likely want this to be the sole bucket in that project.
Note that the service perimeter only affects services you specify. It does not protect the project as a whole.
Developer Permissions:
Access Context Manager
VPC Service Controls
Steps to accomplish this:
Use VPC Service Controls to create a service perimeter around the entire Google Cloud Storage service in the project of your choosing
Use Access Context Manager to create access levels for ip address you want to whitelist and users/groups who will have access to the service
Apply these access levels to the service perimeter created in the previous step (it will take 30 minutes for this change to take effect)
Note: Best practice would be to provide access to the bucket using a service account or users/groups ACL, if that is possible. I know it isn't always so.