When using the Let's Encrypt certbot to generate an SSL certificate for my domain, I am prompted to make a file available at my domain to verify that I control it:
http://example.com/.well-known/acme-challenge/XXXXXX
However when I try to upload that file to my Google Cloud Storage bucket I get the following error:
$ gsutil rsync -R . gs://example.com
Building synchronization state...
Starting synchronization
Copying file://./.well-known/acme-challenge/XXXXXX [Content-Type=application/octet-stream]...
BadRequestException: 400 ACME HTTP challenges are not supported.
Does Google Cloud Storage expressly forbid URLs with "acme challenge" in the path? Is it possible to set up a Let's Encrypt certificate for a domain hosted in a Google Cloud Storage bucket?
We worked around this by exposing /.well-known/acme-challenge as an endpoint and storing the challenge in a different directory that Cloud Storage does allow. When LE hits that endpoint, we retrieve the generated challenge from its directory and serialize it in the response.
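A minimal sketch of that endpoint, assuming a Python/Flask app in front of the bucket and the challenge files synced to an acme/ prefix; the bucket name, prefix, and route are illustrative, not the exact code we run:

from flask import Flask, abort
from google.cloud import storage

app = Flask(__name__)
client = storage.Client()
bucket = client.bucket("example.com")  # the bucket backing the site

@app.route("/.well-known/acme-challenge/<token>")
def acme_challenge(token):
    # Challenges are synced to acme/ because Cloud Storage rejects
    # uploads under .well-known/acme-challenge/ itself.
    blob = bucket.blob("acme/" + token)
    if not blob.exists():
        abort(404)
    return blob.download_as_text(), 200, {"Content-Type": "text/plain"}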
Related
Trying to map a custom domain to an app deployed on Cloud Run.
Running into this issue: "Waiting for certificate provisioning. You must configure your DNS records for certificate issuance to begin."
Referred to this issue:
Google Cloud Run - Domain Mapping stuck at Certificate Provisioning
Am I missing a step or should I keep waiting?
Steps I took:
Added mapping with service and domain name.
Configured a Cloud DNS Zone and updated the DNS records on the domain host.
Linked the Cloud DNS Zone to a Cloud Domain.
Verified ownership with a TXT record google-site-verification=....
Used https://dnspropagation.net/ to monitor, and it seems like the Costa Rica and Indonesia regions are having trouble propagating.
It's possible that it is still provisioning, but you can consider checking the following:
Make sure that your SSL certificate's scope is global.
The A records for your domain should be properly configured.
You can try using SSL Shopper or WhatsmyDNS to monitor and check the propagation status of your domain.
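If it helps, you can also inspect the mapping status and the records Cloud Run expects from the command line (the domain and region below are placeholders):

$ gcloud beta run domain-mappings describe --domain example.com --platform managed --region us-central1
$ dig +short example.com A
$ dig +short www.example.com CNAME

The describe output lists the resource records you are expected to create, so you can compare them against what dig returns.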
I have a custom domain (cdnexample.com) and a Firebase Google Cloud Storage Bucket (examplefiles.appspot.com).
I want to configure the cdnexample.com domain in Cloudflare CDN to source from the GCS bucket (examplefiles.appspot.com).
For example, given a GCS File: https://storage.googleapis.com/examplefiles.appspot.com/image1.jpg I want to get the Cloudflare CDN File working: https://cdnexample.com/image1.jpg
The problem is that I cannot change the GCS bucket name (examplefiles.appspot.com) to match my Cloudflare domain name (cdnexample.com). All the solutions I came across below require the GCS bucket name to match the Cloudflare domain name and use a CNAME configuration with c.storage.googleapis.com.
I have read through the following relevant articles:
https://cloud.google.com/storage/docs/request-endpoints
https://devopsdirective.com/posts/2020/10/gcs-cloudflare-hosting/
https://community.cloudflare.com/t/using-cloudflare-cdn-https-with-google-cloud-storage/15602
How to cache google cloud storage (GCS) with cloudflare?
Using Cloudflare CDN + HTTPS with Google Cloud Storage
Use CloudFlare to CDN a Google Cloud Storage Bucket
https://medium.com/@pablo.delvalle.cr/cloudflare-and-google-cloud-for-hosting-a-static-site-fd2e1a97aa9b
Does anyone have an idea of how to make the Cloudflare CDN work in this case?
In this case you can set up a load balancer with a backend bucket, which will connect to your storage bucket and can be accessed through an IP address; you then point your custom domain at that IP address. You can find more information about adding a backend bucket here.
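A rough outline of that setup with gcloud might look like this (resource names are placeholders, and an HTTPS setup would additionally need a target-https-proxy with a certificate resource):

$ gcloud compute backend-buckets create cdn-backend --gcs-bucket-name=examplefiles.appspot.com --enable-cdn
$ gcloud compute url-maps create cdn-map --default-backend-bucket=cdn-backend
$ gcloud compute target-http-proxies create cdn-proxy --url-map=cdn-map
$ gcloud compute addresses create cdn-ip --global
$ gcloud compute forwarding-rules create cdn-rule --address=cdn-ip --global --target-http-proxy=cdn-proxy --ports=80

Then point the A record for cdnexample.com (proxied through Cloudflare) at the reserved IP address.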
I need to implement a custom authentication and authorisation module for Kubernetes. This is going to have to be done via a webhook.
The documentation for the authentication and authorisation webhooks describes a config file that the API Server needs to be started with.
The config file has the same format for both authentication and authorisation, and looks like this:
# clusters refers to the remote service.
clusters:
  - name: name-of-remote-authn-service
    cluster:
      certificate-authority: /path/to/ca.pem         # CA for verifying the remote service.
      server: https://authn.example.com/authenticate # URL of remote service to query. Must use 'https'.

# users refers to the API server's webhook configuration.
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert

# kubeconfig files require a context. Provide one for the API server.
current-context: webhook
contexts:
  - context:
      cluster: name-of-remote-authn-service
      user: name-of-api-server
    name: webhook
I can see that the clusters section refers to the remote service, i.e. it's defining the webhook, thereby answering the question the API Server needs to have answered: "what is the URL endpoint to hit when an authn/authz decision is required, and when I connect via HTTPS, who is the CA authority for the webhook's TLS certificate so that I know I can trust the remote webhook?"
I'm not sure of the users section. What is the purpose of the client-certificate and client-key fields? The comment in the file says "cert for the webhook plugin to use", but as this config file is given to the API Server, not the web hook, I don't understand what this means. Is this a certificate that will allow the webhook service to authenticate the connection that the API Server will initiate with it? i.e. the client certificate needs to go into the truststore of the webhook server?
Are both of these assumptions correct?
The Kubernetes webhook uses two-way SSL (mutual TLS) authentication, so the fields in the users section are used to configure the certificate for the client side's authentication.
The clusters section on its own gives you normal one-way SSL authentication: the client (here, the Kubernetes API server) validates the server's (your webhook module's) certificate against the configured CA.
Once you also configure a client certificate and key in the users section, the server (your webhook module) can validate the client's (the API server's) certificate in turn, acting like one-way SSL in the reverse direction.
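For reference, the same kubeconfig-style file is handed to the API server via flags like these (the paths are placeholders; the authentication and authorisation variants each get their own file):

$ kube-apiserver \
    --authentication-token-webhook-config-file=/etc/kubernetes/authn-webhook.yaml \
    --authorization-mode=Webhook \
    --authorization-webhook-config-file=/etc/kubernetes/authz-webhook.yaml \
    ...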
I have a Kubernetes cluster running in High Availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application errors with an x509 signed by unknown certificate authority message for the API Server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue via using the default service account tokens available.
All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?
It would be ideal to have the service IP of the API in the SAN field of all your server certificates.
If this is not possible in your setup, set the clusters{}.cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl.
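For example (the cluster name is a placeholder):

clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://10.100.0.1
  name: my-cluster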
If you are trying to reach the API from within a pod you could use the secrets mounted via the Service Account. By default, if you use the default secret, the CA certificate and a signed token are mounted to /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This would help you solve the unknown certificate authority error and provide you with an easy way to authenticate against your API servers at the same time.
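A typical in-pod request using those mounted credentials looks like this (using your 10.100.0.1 service address):

$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    https://10.100.0.1/api/v1/namespaces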
After my VPS provider moved their servers to a new location, I get an AccessDeniedException: 403 This service is not available from your region error for every gsutil request.
The new server IP, where gsutil doesn't work, is 51.254.184.21; previously it was 88.198.255.218, and that one worked fine.
I found a related issue from BigQuery where a similar error was caused by incorrect mapping of IP addresses on Google's side.
I am asking here on SO because that's where Google's support page directs users.