Why won't my content security policy deploy to CloudFront? - aws-cloudformation

I'm composing a fairly large CSP and deploying it to CloudFront with CloudFormation. The old CSP worked, but the new one doesn't. It doesn't look like it has any syntax errors.
Resource:
AWS::CloudFront::ResponseHeadersPolicy
- ResponseHeadersPolicyConfig
- SecurityHeadersConfig
- ContentSecurityPolicy
The error is:
UPDATE_FAILED - Internal error reported from downstream service during operation 'AWS::CloudFront::ResponseHeadersPolicy'

The policy is too long
I'm pretty sure this was due to the CSP simply being too long. I can't find anything in the docs (neither W3C nor AWS) that says there's a limit on the length, but it seems that CloudFront won't accept a CSP longer than 1780 characters. Since I'm using the upgrade-insecure-requests directive, I don't really need to specify the scheme for the sources. So, changing the sources like this fixed the problem:
- default-src https://foo.example
+ default-src foo.example
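For context, here is a minimal sketch of the resource in question with the shortened sources; the policy name and the exact CSP value are placeholders, not my real template:
MyResponseHeadersPolicy:
  Type: AWS::CloudFront::ResponseHeadersPolicy
  Properties:
    ResponseHeadersPolicyConfig:
      Name: my-security-headers            # placeholder name
      SecurityHeadersConfig:
        ContentSecurityPolicy:
          Override: true
          # Keep this value short; scheme-less sources are fine when
          # upgrade-insecure-requests is present.
          ContentSecurityPolicy: "default-src foo.example; upgrade-insecure-requests"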

Related

How do I Re-route Ghost Blog Admin URL without modifying the API Address?

The Ghost blog platform has a setting that allows you to change the admin panel login location (which defaults to https://whateveryoursiteis.com/ghost). The methodology / docs for changing that setting can be found here: https://ghost.org/docs/config/#admin-url
However, when using the above methodology, the API URL that is used for search etc. is ALSO modified, meaning all requests to the Ghost API will be forwarded to the alternate domain as well (not just admin access).
My question is: what is the best way to redirect the admin URL to a different domain / protocol while allowing the API URL used by Ghost to remain the same?
More background:
We are running Ghost on top of GKE (Google Kubernetes Engine) behind a multi-region Ingress. This lets us dump our Cloud SQL DB down to a SQLite file and build that database into our production Docker containers, which are then deployed to the different Kubernetes nodes fronted by the GCE-Ingress load balancer.
Since we need to rebuild that database / container on content change (not just on code change), we need a separate admin URL backed by Cloud SQL where we can persist / modify our data, which then triggers the rebuild in our CI pipeline via Ghost webhooks.
Another related question might be:
Is it possible to use standard Ghost redirects (created via https://docs.ghost.org/concepts/redirects/) to redirect the admin panel URL (i.e. https://whateveryoursiteis.com/ghost) to a different domain (i.e. https://youradminsite.com/ghost)?
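(For reference, Ghost redirects live in a redirects.json file with entries roughly like the sketch below; the host name is a placeholder, and whether the /ghost admin route is even routed through that layer is exactly what I'm unsure about.)
[
  {
    "from": "^/ghost$",
    "to": "https://youradminsite.com/ghost",
    "permanent": true
  }
]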
Another related GKE / GCE-Ingress question:
Is it possible to create 301 redirects natively using Kubernetes GCE-Ingress on GKE without adding an nginx container, etc.?
That will be my first attempt after posting this, but I figured either way maybe it helps another Ghost platform fan down the line someplace. I will respond back as I find answers to those questions (assuming someone doesn't beat me to it!).
Regarding your question about whether it's possible to create 301 redirects without adding an nginx container: I can suggest using Istio; you can find more information about its traffic routing here.
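For example, a rough sketch of an Istio VirtualService that issues a 301 for the admin path might look like this; the host names are placeholders and it assumes the Istio ingress gateway is fronting your traffic:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ghost-admin-redirect
spec:
  hosts:
    - whateveryoursiteis.com
  http:
    - match:
        - uri:
            prefix: /ghost
      redirect:
        authority: youradminsite.com   # send admin traffic to the other host
        redirectCode: 301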
OK. So as it turns out, the Ghost team currently has things set up to point API connections at the Admin URL. So if you change your Admin URL, expect your clients to attempt to connect to that URL.
I am going to raise the potential of splitting these off as a feature request over on the Ghost forums (as soon as I get out from under pre-launch hell on the current project).
Here's the official Ghost response:
What is referred as 'official docker image' is not something that we as a Ghost team support.
The APIs are indeed hosted under the same URL as the admin and that's by design and not really a bug. Introducing configuration options for each API Ghost instance hosts would be a feature and should be discussed at our forum first 👍 I think it's a nice idea to be able to serve APIs from different host, but it's not something that is within our priorities at the moment.
In case you need more granular handling of admin site, you could introduce those on your proxy level and for example, handle requests that are coming to /ghost/api with a different set of rules.
See the full discussion over here on the TryGhost GitHub:
https://github.com/TryGhost/Ghost/issues/10441#issuecomment-460378033
I haven't looked into what it would take to implement the feature, but the suggestion about proxying the request could work... if only I didn't need to run on GKE multi-region (which requires GCE-Ingress, which doesn't support redirection, hah!). This would be relatively easy to solve with the nginx ingress.
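For anyone who can put nginx (or the nginx ingress controller) in front of Ghost, the proxy-level idea from the Ghost team's reply might look roughly like this sketch; the upstream name and admin host are placeholders:
location /ghost/api/ {
    proxy_pass http://ghost_backend;                   # keep the API on the original host
}
location /ghost/ {
    return 301 https://youradminsite.com$request_uri;  # send admin traffic to the admin domain
}
Because nginx prefers the longest matching prefix location, /ghost/api/ requests keep hitting the original backend while everything else under /ghost/ gets redirected.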
Hopefully this helps someone; I will update as I work through the process. As of now I've solved it by dumping my GCP Cloud SQL database down to a SQLite db file at build time (thereby allowing me to keep my admin instance clean and separate from the API endpoint, which for me remains the same URL).

What is the source of strange 404 and 403 WARNINGs in GCE Loadbalancer Logs

Checking the logs of a GCE load balancer in the Google Cloud Platform logs shows a bunch of WARNINGs of the form:
"GET https://<MY_DOMAIN>/.well-known/acme-challenge/*" 404 215 "Go-http-client/1.1"
and:
"GET https://<MY_SERVICE_DOMAIN>/*" 401 561 "Go-http-client/1.1"
What is causing these calls? Is it some kind of health check?
From what I gather from the docs, calls to the readiness probes of the backing Pods are to be expected. Also, as far as I can see, the backend service groups are considered healthy.
As they appear as WARNINGs in the logs, I assume I should work on making them go away?
"GET https://<MY_DOMAIN>/.well-known/acme-challenge/*" 404 215 "Go-http-client/1.1"
That one is pretty straightforward: it's caused by a Let's Encrypt (ACME) client checking your ownership of the domain. It's hard to say whether that's an actual error without knowing whether you were expecting Let's Encrypt to check that domain.
"GET https://<MY_SERVICE_DOMAIN>/*" 401 561 "Go-http-client/1.1"
Without knowing what MY_SERVICE_DOMAIN is, that's also hard to say, but I wouldn't expect readiness checks to involve the load balancer, since (as you correctly observed) that check happens at the Pod level, not from outside the cluster.
As they appear as WARNINGs in the logs, I assume I should work on making them go away?
That is likely a matter of personal preference. Without question, having those extraneous messages makes finding actual warnings harder, but I doubt they are actually hurting anything, either. The distinction to me would be whether some process is expecting a successful HTTP response to the MY_SERVICE_DOMAIN request and, when it doesn't receive one, that causes a downstream failure -- in that case it won't be the load balancer that requires action, but rather the consumer of it.

modsecurity for iis not logging request body

We have an ASP.NET / Angular / Web API app on IIS 7.5 that is being called with bad encoding, and we're trying to get the request body logged so we can show the problem to the caller.
We googled around and found ModSecurity, so we installed it and are giving it a try, but only for the audit logging portion. Unfortunately, neither part C nor part I (the request body parts) seems to be logged, no matter what we do. I've seen some other Stack Overflow posts that I take to mean ModSecurity only logs those parts for certain incoming requests (.html requests log part C but nothing else does, that kind of thing). Web API and Angular might be confusing it, but I'm not sure. Nothing I've tried seems to work.
Here's our configuration:
# -- Audit log configuration -------------------------------------------------
# Log the transactions that are marked by a rule, as well as those that
# trigger a server error (determined by a 5xx or 4xx, excluding 404,
# level response status codes).
#
SecAuditEngine On
SecRequestBodyAccess On
# Log everything we know about a transaction.
SecAuditLogParts ABCIJDEFHZ
# Use a single file for logging. This is much easier to look at, but
# assumes that you will use the audit log only occasionally.
#
SecAuditLogType Serial
SecAuditLog E:\ModLogs\modsec_audit4.log
Is there something else I've got to do to get ModSecurity to do the C/I logging?
ModSecurity 2.9.1 (downloaded today), IIS 7.5, Web Api, Angular, ASP.Net
Thanks
This looks to be a known bug with ModSecurity and IIS: https://github.com/SpiderLabs/ModSecurity/issues/538
Near the bottom of that bug there are comments that this can be resolved by setting this in your config:
SecStreamInBodyInspection On
There are also some posts suggesting that disabling dynamic compression in IIS is necessary.
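Putting that together, the audit-log configuration from the question would gain one extra directive, roughly like this (paths unchanged from the question; a sketch, not a verified full config):
SecAuditEngine On
SecRequestBodyAccess On
# Workaround for the IIS issue linked above: without this, the request body
# parts (C/I) may never reach the audit log on IIS.
SecStreamInBodyInspection On
SecAuditLogParts ABCIJDEFHZ
SecAuditLogType Serial
SecAuditLog E:\ModLogs\modsec_audit4.log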

Jaspersoft Studio: Username may not be null (HTTP proxy issue)

If you get an error similar to the one below although your credentials are correct (or have even worked before in this environment), it may be due to an HTTP proxy misconfiguration (see the answer below):
java.lang.IllegalArgumentException: Username may not be null
at org.apache.http.util.Args.notNull(Args.java:48)
at org.apache.http.auth.UsernamePasswordCredentials.<init>(UsernamePasswordCredentials.java:78)
at com.jaspersoft.studio.server.utils.HttpUtils.getCredentials(HttpUtils.java:107)
at com.jaspersoft.studio.server.utils.HttpUtils.setupProxy(HttpUtils.java:45)
at com.jaspersoft.studio.server.protocol.restv2.RestV2ConnectionJersey.connect(RestV2ConnectionJersey.java:91)
at com.jaspersoft.studio.server.protocol.ProxyConnection.connect(ProxyConnection.java:61)
at com.jaspersoft.studio.server.WSClientHelper.checkConnection(WSClientHelper.java:85)
at com.jaspersoft.studio.server.wizard.ServerProfileWizard.connect(ServerProfileWizard.java:101)
at com.jaspersoft.studio.server.wizard.ServerProfileWizard.access$1(ServerProfileWizard.java:97)
at com.jaspersoft.studio.server.wizard.ServerProfileWizard$2.run(ServerProfileWizard.java:78)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
Check your Eclipse -> Window -> Preferences -> General -> Network Connections settings. If your Jasper server is not behind an HTTP proxy, it must be included and selected (yellow) in the Proxy bypass section (or no proxy may be necessary at all, but that seems less likely, because other proxied network services would then not be available within Eclipse).
Otherwise, it should not be included there.
If you later get another error, org.apache.http.client.HttpResponseException: Not Found, it could be an unrelated problem caused by a server <-> studio version mismatch, e.g. as described here:
https://community.jaspersoft.com/questions/823765/jaspersoft-studio-551-unable-connect-jasperreports-server-450
(I know this is not directly related to the question, but it may help after such an update scenario, which can be a big waste of time...)
Instead of downgrading Jaspersoft Studio, you can try changing the URL and the Jasper version in the server connection. Try removing services/repository/ from your URL, which worked for me :-)
(...as mentioned here: http://community.jaspersoft.com/jaspersoft-studio/issues/3497)

Ignoring SSL certificates in Scala dispatch

When trying to hit an environment with improperly configured SSL certificates, I get the following error:
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:352)
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:128)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:390)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:148)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:149)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:562)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:776)
at dispatch.BlockingHttp$class.dispatch$BlockingHttp$$execute(Http.scala:45)
at dispatch.BlockingHttp$$anonfun$execute$1$$anonfun$apply$3.apply(Http.scala:58)
at dispatch.BlockingHttp$$anonfun$execute$1$$anonfun$apply$3.apply(Http.scala:58)
at scala.Option.getOrElse(Option.scala:108)
at dispatch.BlockingHttp$$anonfun$execute$1.apply(Http.scala:58)
at dispatch.Http.pack(Http.scala:25)
at dispatch.BlockingHttp$class.execute(Http.scala:53)
at dispatch.Http.execute(Http.scala:21)
at dispatch.HttpExecutor$class.x(executor.scala:36)
at dispatch.Http.x(Http.scala:21)
at dispatch.HttpExecutor$class.when(executor.scala:50)
at dispatch.Http.when(Http.scala:21)
at dispatch.HttpExecutor$class.apply(executor.scala:60)
at dispatch.Http.apply(Http.scala:21)
at com.secondmarket.cobra.lib.delegate.UsersBDTest.tdsGet(UsersBDTest.scala:130)
at com.secondmarket.cobra.lib.delegate.UsersBDTest.setup(UsersBDTest.scala:40)
I would like to ignore the certificates entirely.
Update: I understand the technical concerns regarding improperly configured SSL certs, and the issue isn't with our boxes but with a service we're using. It happens mostly on test boxes rather than prod/stg, so we're investigating, but we needed something to test the APIs in the meantime.
You can't 'ignore the certificates entirely' for the following reasons:
The problem in this case is that the client didn't even provide one.
If you don't want security why use SSL at all?
I have no doubt whatsoever that many, perhaps most, of these alleged workarounds 'for development' have 'leaked' into production. There is a significant risk of deploying an insecure system if you build an insecure system. If you don't build the insecurity in, you can't deploy it, so the risk vanishes.
The following was able to allow unsafe SSL certs.
Http.postData(url, payload).options(HttpOptions.allowUnsafeSSL,
  HttpOptions.readTimeout(5000))
For the newest version of Dispatch (0.13.2), you can use the following to create an http client that accepts any certificate:
val myHttp = Http.withConfiguration(config => config.setAcceptAnyCertificate(true))
Then you can use it for GET requests like this:
myHttp(url("https://www.host.com/path").GET OK as.String)
(Modify accordingly for POST requests...)
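For instance, a POST with the same certificate-ignoring client might look roughly like this sketch; the endpoint and form fields are placeholders:
// assumes: import dispatch._, Defaults._
val req = url("https://www.host.com/path").POST << Map("key" -> "value")  // << adds the form parameters (and implies a POST)
val futureBody = myHttp(req OK as.String)                                 // Future[String] with the response body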
I found this out here: Why does dispatch throw "java.net.ConnectException: General SSLEngine ..." and "unexpected status" exceptions for a particular URL?
And to create an Http client that does verify the certificates, I found some sample code here: https://kevinlocke.name/bits/2012/10/03/ssl-certificate-verification-in-dispatch-and-asynchttpclient/.