Taking over existing Domains (HostedZones) in CloudFormation

I've been setting up CloudFormation templates for some new infrastructure for a project and I've made it to Route 53 Hosted Zones.
Now ideally I'd like to create a "core-domains" stack with all our hosted zones and base configuration. The thing is, we already have these created manually via the AWS console (and they're used for test/live infrastructure). Is there any way to supply the existing "HostedZoneId" as a property on the resource definition and essentially have CloudFormation introspect what we already have and then apply the diff? (If I've done my job right there shouldn't be a diff, so it should just be a no-op!)
I can't see a "HostedZoneId" property in the docs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-route53-hostedzone.html
Any suggestions?
PS. I'm assuming this isn't possible and I'll have to recreate all the HostedZones under CloudFormation but I thought I'd check :)

Found it. It's fine to make other HostedZones; they're issued with new nameservers. Just use the "HostedZoneConfig: Comment" field to note down which hosted zone is which, and then you can switch the nameservers over when you're ready!
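For illustration, a minimal sketch of what one zone in such a "core-domains" stack could look like (the logical ID, domain name and comment text are placeholders, not taken from the question); the Comment is just a free-text marker recording which manually created zone this one is meant to replace:

Resources:
  ExampleComHostedZone:
    Type: AWS::Route53::HostedZone
    Properties:
      Name: example.com
      HostedZoneConfig:
        Comment: "Replaces the manually created zone for example.com; switch the registrar NS records over when ready"

Once the stack-managed zone is populated, repointing the registrar's NS records at the new zone's nameservers completes the cutover.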

Related

Alter auto-generation of host for routes in openshift on namespace basis

I'm trying to adjust the template for generating routes within a single namespace.
Basically, when I create a route without setting the host via YAML, OpenShift generates it in the following way:
${name}-${namespace}.myapps.mycompany.com
I would prefer to have a base domain for many routes which differs in the path, e.g.:
${namespace}.myapps.mycompany.com/${name}
Is this possible? Especially if I am not an admin of OpenShift at my company, but a dev whose team is responsible for just a few namespaces?
For context: we want to use ArgoCD + Git for GitOps, but do not want to hardcode any infrastructure knowledge like the host or domain in our Git repo. We came from using Ingresses, but if we omit the host there, no routes are generated at all...
Thanks in advance for any help!
You can have path-based routes, e.g., [host]/[path]. If you don't provide your own value for [host], it will use the same default OpenShift ${name}-${namespace}.myapps.mycompany.com value.
I'm not sure that you can change OpenShift's default route template, but you can definitely provide your own path values.
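As a rough sketch (the names, namespace, host and port below are placeholders, not values from the question), a Route that sets both an explicit host and a path could look like this:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
  namespace: myteam
spec:
  host: myteam.myapps.mycompany.com   # shared base domain for the namespace
  path: /myapp                        # path-based routing per application
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8080

Note that DNS for the custom host still has to resolve to the cluster's router, and cluster policy may restrict which hosts non-admin users can claim, so this may still need a conversation with whoever runs the cluster.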

Usage of 'local' plugin in coredns

I am new to CoreDNS configuration in Kubernetes and I'm trying to explore the plugins CoreDNS provides. I see a plugin named local which will respond with a reply to local requests, but I could not understand a use-case where this plugin would actually be useful. Can someone explain with an example how it can be made use of? Also, in the unbound configuration man page, I see an option called local-zone:.
local-zone:
Configure a local zone. The type determines the answer to give if there is no match from local-data. The types are deny, refuse, static, transparent, redirect, nodefault, typetransparent, and are explained below. After that the default settings are listed. Use local-data: to enter data into the local zone. Answers for local zones are authoritative DNS answers. By default the zones are class IN.
nodefault:
Used to turn off default contents for AS112 zones. The other types also turn off default contents for the zone. The 'nodefault' option has no other effect than turning off default contents for the given zone.
Does this local plugin behave similarly to this unbound local-zone option? If not, is there any plugin which acts similarly to local-zone in unbound? I am expecting a CoreDNS plugin that behaves similarly to local-data in unbound, particularly for the nodefault type, e.g. local-zone: nodefault. It would be really helpful if someone could help me clear this up. Thanks in advance!!!
As the author of the local plugin for CoreDNS explains, localhost.<searchpath> queries were hitting CoreDNS, which is wrong, so he wrote this plugin to intercept localhost.<domain> queries and return the correct response.
From the official CoreDNS github web page:
local will respond with a basic reply to a "local request". Local requests are defined to be names in the following zones: localhost, 0.in-addr.arpa, 127.in-addr.arpa and 255.in-addr.arpa, and any query asking for localhost.<domain>.
With local enabled any query falling under these zones will get a reply. This prevents the query from "escaping" to the internet and putting strain on external infrastructure.
You can check the code of the local plugin here. I don't see anything similar to the unbound local-zone nodefault functionality there.
See all in-tree plugins for CoreDNS here.
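For reference, enabling the plugin is just a matter of adding local to a server block in the Corefile, assuming your CoreDNS build includes it. A minimal sketch along the lines of a typical Kubernetes Corefile (the kubernetes/forward stanzas are the usual defaults and may differ in your cluster):

.:53 {
    errors
    health
    local                     # answer localhost, 127.in-addr.arpa etc. locally instead of forwarding them
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}

With that in place, a pod querying localhost.<search domain> gets a local answer instead of the query escaping to the upstream resolvers.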

Have a fallback for ImportValue on a CloudFormation template?

I've got two projects (backoffice and frontoffice) deployed using CloudFormation.
In the frontoffice, I import some DynamoDB table names from the backoffice stack as Environment variables for my Lambdas.
To run some acceptance tests I sometimes need to deploy the frontoffice without deploying the backoffice. In that case, the frontoffice will try to do an ImportValue of an Export that doesn't exist.
Is there any pattern that would allow me to get the frontoffice deployed anyway, and then handle the missing value in my code?
You could pass an additional Parameter to frontoffice indicating whether you are going to deploy with backoffice or not.
Based on the value of the parameter, you could use DependsOn and/or Fn::If to either import the DynamoDB table names or not.
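A minimal sketch of that parameter-based approach, assuming made-up parameter, export and variable names (DeployWithBackoffice, backoffice-orders-table-name, ORDERS_TABLE); the Fn::If switches the Lambda environment variable between the imported value and a sentinel your code can check for:

Parameters:
  DeployWithBackoffice:
    Type: String
    AllowedValues: ["true", "false"]
    Default: "true"

Conditions:
  HasBackoffice: !Equals [!Ref DeployWithBackoffice, "true"]

Resources:
  FrontofficeFunction:
    Type: AWS::Lambda::Function
    Properties:
      # ... handler, role, code, etc.
      Environment:
        Variables:
          ORDERS_TABLE: !If
            - HasBackoffice
            - !ImportValue backoffice-orders-table-name
            - "NONE"            # sentinel the application code treats as "backoffice absent"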
For a fully automated solution without any extra Parameter, you would have to use a custom resource. The resource would be a Lambda function, which would use the AWS SDK to query CloudFormation stacks and check for the backoffice.

How do I Re-route Ghost Blog Admin URL without modifying the API Address?

Ghost blog platform has a setting that allows you to change the admin panel login location (which starts as: https://whateveryoursiteis.com/ghost). Methodology / docs for changing that setting can be found here: https://ghost.org/docs/config/#admin-url
However, when using the above methodology, the API URL that is used for search etc. is ALSO modified, meaning all requests to the Ghost API will be forwarded to the alternate domain as well (not just admin access).
My question is: what is the best way to redirect the admin URL to a different domain/protocol while allowing the API URL used by Ghost to remain the same?
More background.
We are running ghost on top of GKE (Google Kubernetes Engine) on a Multi-Region Ingress which allows us to dump our CloudSQL DB down to a SQLite file and then build that database into our production Docker Containers which are then deployed to the different Kubernetes nodes that are fronted by the GCE-Ingress load balancer.
Since we need to rebuild that database / container on content change (not just on code change), we need a separate admin URL backed by Cloud SQL where we can persist / modify our data, which then triggers the rebuild in our CI pipeline via Ghost Webhooks.
Another related question might be:
Is it possible to use standard ghost redirects (created via: https://docs.ghost.org/concepts/redirects/) to redirect the admin panel URL (ie. https://whateveryoursiteis.com/ghost) to a different domain (ie. https://youradminsite.com/ghost)?
Another Related GKE / GCE-Ingress Question:
Is it possible to create 301 redirects natively using Kubernetes GCE-Ingress on GKE without adding an nginx container etc?
That will be my first attempt after posting this, but I figured either way maybe it helps another Ghost platform fan down the line someplace. I will attempt to respond back as I find answers to those questions (assuming someone doesn't beat me to it!).
Regarding your question whether it's possible to create 301 redirects without adding an nginx container, I can suggest using Istio; find out more information about traffic routing here.
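As a hedged sketch of what that could look like with an Istio VirtualService (the gateway name, service host and port are placeholders, and the hostnames are the example ones from the question), routing /ghost/api to the existing service while redirecting the rest of /ghost to the admin domain:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ghost-admin-redirect
spec:
  hosts:
    - whateveryoursiteis.com
  gateways:
    - public-gateway                 # placeholder gateway name
  http:
    - match:                         # keep the API on the main domain
        - uri:
            prefix: /ghost/api
      route:
        - destination:
            host: ghost.default.svc.cluster.local   # placeholder service
            port:
              number: 2368
    - match:                         # send the rest of /ghost to the admin domain
        - uri:
            prefix: /ghost
      redirect:
        authority: youradminsite.com
        redirectCode: 301
    - route:                         # everything else: the public blog
        - destination:
            host: ghost.default.svc.cluster.local
            port:
              number: 2368

Whether this fully solves the problem is another matter; as the next answer explains, the Ghost client itself points API calls at the configured admin URL.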
OK. So as it turns out, the Ghost team currently has things set up to point API connections at the admin URL. So if you change your admin URL, expect your clients to attempt to connect to that URL.
I am going to be raising the potential of splitting these off as a feature request over on the ghost forums (as soon as I get out from under pre-launch hell on the current project).
Here's the official Ghost response:
What is referred as 'official docker image' is not something that we as a Ghost team support.
The APIs are indeed hosted under the same URL as the admin and that's by design and not really a bug. Introducing configuration options for each API Ghost instance hosts would be a feature and should be discussed at our forum first 👍 I think it's a nice idea to be able to serve APIs from different host, but it's not something that is within our priorities at the moment.
In case you need more granular handling of admin site, you could introduce those on your proxy level and for example, handle requests that are coming to /ghost/api with a different set of rules.
See the full discussion over here on the TryGhost GitHub:
https://github.com/TryGhost/Ghost/issues/10441#issuecomment-460378033
I haven't looked into what it would take to implement the feature, but the suggestion of proxying the request could work... if only I didn't need to run on GKE multi-region (which requires the use of GCE-Ingress, which doesn't support redirection, hah!). This would be relatively easy to solve with the nginx ingress.
Hopefully this helps someone — I will update as I work through the process. As of now I solved it by dumping my GCP CloudSQL database down to a SQLite db file during build time (thereby allowing me to keep my admin instance clean and separate from the API endpoint — which for me remains the same URL).

Updating a CloudFormation stack with a Cognito pool claims that we're adding attributes when we're not

Starting on Nov 7, 2018 we started getting the following error when updating our CloudFormation stacks:
Updating user pool schema is not allowed from cloudformation. Use the
AddCustomAttributes API or the AWS Cognito Console to update user pool
schema.
Our CF stacks don't have any changes to the custom attributes of the Cognito pool. They only have changes to the PostConfirmation and CustomMessage triggers, as well the addition of API Gateway responses.
Does anybody know why we might be seeing this? How can we avoid this error message?
We had the same problem with our deployment. For now we are deploying it without the CustomMessage trigger and setting the CustomMessage trigger manually after deployment.
We removed the CustomMessage changes from our template and that seemed to do the trick.
Mostly by luck, I've found an answer that allows me to get around this in an automated manner.
How our scripts used to work
First, let me explain how this used to work. I used to have the following set of CloudFormation scripts:
cognitoSetup.template --> <Serverless Framework> --> <cognitoSetup.template updated with triggers>
So we'd setup the Cognito pool, run the Serverless Framework to add the Cognito Lambda functions, and then update the cognitoSetup.template file with the ARNs for the lambdas exported when the Serverless Framework ran.
The Fix
Now, we include the ARNs for the Lambdas in the cognitoSetup.template. So now cognitoSetup.template looks like this:
"CognitoUserPool": {
"Type": "AWS::Cognito::UserPool"
...
"Properties": {
...
"LambdaConfig": {
"CustomMessage": "arn:aws:lambda:<our aws region>:<our account#>:function:main-<our stage>-onCognitoCustomMessage"
}
}
Note, we're setting this trigger before the lambda even exists. The trigger just needs an ARN, and it doesn't seem to care that it's not there yet. Then we run sls deploy which creates the actual Lambda function and everything works fine.
Now our scripts look like this:
cognitoSetup.template --> <Serverless Framework>
Why does this fix this error? I don't actually know. CloudFormation seems to be fine with this modification but not okay with modifying the same file later in our process. But it works.
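For completeness, a sketch of the Serverless Framework side that produces the main-<our stage>-onCognitoCustomMessage function the pre-declared ARN points at; the service name main and the function name come from that ARN, while the handler path and runtime are assumptions:

service: main

provider:
  name: aws
  runtime: nodejs12.x                  # assumption; use whatever runtime the project targets

functions:
  onCognitoCustomMessage:
    handler: handlers/cognitoCustomMessage.handler   # hypothetical handler path
    # No cognitoUserPool event here: the trigger is already wired up via
    # LambdaConfig in cognitoSetup.template, as shown above.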