GCP - How do you create a URL Redirect path rule with gcloud url-maps?

I want to create a GCP Load Balancer path redirect rule programmatically using the gcloud tool.
As a test, I created one manually through the GCP Console web interface.
For my manually created rule, gcloud compute url-maps describe my-url-map returns something that looks like:
creationTimestamp: '2021-02-23T20:26:04.825-08:00'
defaultService: https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/my-backend-service
fingerprint: abcdefg=
hostRules:
- hosts:
  - blah.my-site.com
  pathMatcher: path-matcher-1
id: '12345678'
kind: compute#urlMap
name: my-url-map
pathMatchers:
- defaultService: https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/my-backend-service
  name: path-matcher-1
  pathRules:
  - paths:
    - /my-redirect-to-root
    urlRedirect:
      httpsRedirect: false
      pathRedirect: /
      redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
      stripQuery: false
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/global/urlMaps/my-url-map
What I would like to do is to recreate the urlRedirect rule above (redirecting from /my-redirect-to-root to /), but using the gcloud tool.
Looking through the gcloud docs, I can't find anything referring to redirects. Is this not possible via the gcloud tool? And if not, is there any other way to create these redirect rules programmatically?
I'm basically trying to get around another GCP issue to do with GCS URLs for static websites by using Load Balancer redirects for each folder in our static site (~400 folders).

Currently the Cloud SDK does not support creating URL maps with redirects.
If you think that functionality should be available, you can create a Feature Request in the Public Issue Tracker to have this option added in the future.
For now, you can use the REST API, which does allow creating URL maps with redirects.
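For illustration, here is a rough sketch of doing that with curl against the urlMaps patch method, reusing the names from the question (the fingerprint must come from a fresh describe / get, and the JSON body mirrors the YAML above):
# Sketch: add the redirect pathRule to the existing URL map via the REST API.
# Project, map, matcher, and backend names are taken from the question above.
curl -X PATCH \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://www.googleapis.com/compute/v1/projects/my-project/global/urlMaps/my-url-map" \
  -d '{
    "fingerprint": "abcdefg=",
    "pathMatchers": [{
      "name": "path-matcher-1",
      "defaultService": "https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/my-backend-service",
      "pathRules": [{
        "paths": ["/my-redirect-to-root"],
        "urlRedirect": {
          "httpsRedirect": false,
          "pathRedirect": "/",
          "redirectResponseCode": "MOVED_PERMANENTLY_DEFAULT",
          "stripQuery": false
        }
      }]
    }]
  }'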

Related

Is there a way to access kubernetes dashboard by passing token on the url?

I am able to access my Kubernetes dashboard UI by opening the URL below, providing the token, and hitting the sign-in button on the login screen:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/workloads?namespace=default
Is there a way I can pass the token via the URL itself so the Dashboard UI opens in a logged-in state and I don't need to manually paste the token and hit sign in?
I am looking for something like this (suggested by ChatGPT, which unfortunately didn't work; it just opens the login screen again):
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/?token=<authentication-token>
We can access the Kubernetes dashboard UI in two ways:
Bearer token
KubeConfig
To answer your question: we can't log in by encoding the token in the URL. But we can use the Skip option to avoid providing the token every time we log in.
To make the Skip button appear in the UI, we need to add the following flags to the dashboard deployment under the args section:
--enable-skip-login
--disable-settings-authorizer
After adding these flags the deployment looks something like this
spec:
  containers:
  - name: kubernetes-dashboard
    image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    ports:
    - containerPort: 8443
      protocol: TCP
    args:
      - --enable-skip-login
      - --disable-settings-authorizer
      - --auto-generate-certificates
      # If not specified, Dashboard will attempt to auto discover the API server and connect
      # to it. Uncomment only if the default does not work.
      # - --apiserver-host=http://my-address:port
    volumeMounts:
Now when you redeploy the dashboard, you will see the Skip button. Skipping the login will save a lot of time when testing locally deployed clusters.
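If you would rather not edit the YAML by hand, here is a rough sketch of adding the two flags with kubectl patch (the deployment and namespace names are assumptions based on the URL in the question; adjust them to your install):
# Append the two flags to the dashboard container's existing args list
kubectl -n kubernetes-dashboard patch deployment kubernetes-dashboard \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-skip-login"},
       {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--disable-settings-authorizer"}]'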
Note: This is not a suggested method in terms of security standpoint. However, if you are deploying in an isolated testing environment you can proceed with the above steps.
For more information, refer to this link.

ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: could not parse service account URL

I want to use a custom Service Account to build my docker container with Cloud Build. Using gcloud iam service-accounts list and gcloud config list I have confirmed that the service account is in the project environment and I used gcloud services list --enabled to check that cloudbuild.googleapis.com is enabled. I get the error: ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: could not parse service account URL. I tried all of the available service accounts and I tried with and without the prefix path. What is the correct URL or config after steps to get the service account working?
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/my-project-id/my-app']
images: ['gcr.io/my-project-id/my-app']
serviceAccount: 'projects/my-project-id/serviceAccount/my-sa@my-project-id.iam.gserviceaccount.com'
options:
  logging: CLOUD_LOGGING_ONLY
The build config for serviceAccount references this page and there's an example that shows the structure:
projects/{project-id}/serviceAccounts/{service-account-email}
So, it follows Google's API convention of a plural noun (i.e. serviceAccounts) followed by the unique identifier.
Another way to confirm this is via APIs Explorer for Cloud Build.
The service's Build resource defines serviceAccount too.
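Applied to the question's build config, the fix is a sketch like this (note the plural serviceAccounts segment; I've also added the '.' build context, which docker build requires but which was missing above):
steps:
- name: 'gcr.io/cloud-builders/docker'
  # '.' is the build context directory for docker build
  args: ['build', '-t', 'gcr.io/my-project-id/my-app', '.']
images: ['gcr.io/my-project-id/my-app']
# Plural "serviceAccounts", followed by the service account's email address
serviceAccount: 'projects/my-project-id/serviceAccounts/my-sa@my-project-id.iam.gserviceaccount.com'
options:
  logging: CLOUD_LOGGING_ONLY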

Kubernetes Nginx Ingress HTTP to HTTPS redirect via 301 instead of 308?

We are running a couple of k8s clusters on Azure AKS.
The service (ghost blog) is behind the Nginx ingress and secured with a cert from Letsencrypt. All of that works fine but the redirect behavior is what I am having trouble with.
The Ingress correctly redirects from http://whatever.com to https://whatever.com; the issue is that it does so using a 308 redirect, which strips all post/page meta anytime a user shares a page from the site.
As a result, users who share any page of the site on most social properties receive a preview link where the title of the page and the page meta preview do not work and are instead replaced with '308 Permanent Redirect' text.
From the ingress-nginx docs over here I can see that this is the intended behavior (i.e. a 308 redirect); what I believe is not intended is the interaction with social sharing services when those services attempt to create a page preview.
While the issue would be solved by Facebook (or Twitter, etc.) pointing directly to the https site by default, I currently have no way to force those sites to look to https for the content that will be used to create the previews.
Setting Permanent Re-Direct Code
I can also see that it looks like I should be able to set the redirect code to whatever I want it to be (I believe a 301 redirect will allow Facebook et al. to correctly pull post/page snippet meta); docs on that are found here.
The problem is that when I add the redirect-code annotation as specified:
nginx.ingress.kubernetes.io/permanent-redirect-code: "301"
I still get a 308 redirect on my resources despite being able to see (from my kubectl proxy) that the redirect-code annotation was correctly applied. For reference, my full list of annotations on my Ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ghost-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/permanent-redirect-code: "301"
To reiterate, my question is: what is the correct way to force a redirect to https with a custom status code (in my case 301)?
My guess is the TLS redirect shadows the nginx.ingress.kubernetes.io/permanent-redirect-code annotation.
You can actually change the ConfigMap for your nginx-configuration so that the default redirect is 301. That's the configuration your nginx ingress controller uses for nginx itself. The ConfigMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
  http-redirect-code: "301"
You can find more about the ConfigMap options here. Note that if you change the ConfigMap you'll have to restart your nginx-ingress-controller pod.
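As a sketch, applying the edited ConfigMap and bouncing the controller might look like this (the label selector is an assumption based on the labels above; adjust it to match your install):
# Apply the updated ConfigMap, then delete the controller pod so its
# Deployment recreates it with the new configuration
kubectl apply -f nginx-configuration.yaml
kubectl -n ingress-nginx delete pod -l app.kubernetes.io/name=ingress-nginx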
You can also shell into the nginx-ingress-controller pod and see the actual nginx configs that the controller creates:
kubectl -n ingress-nginx exec -it nginx-ingress-controller-xxxxxxxxxx-xxxxx bash
www-data@nginx-ingress-controller-xxxxxxxxx-xxxxx:/etc/nginx$ cat /etc/nginx/nginx.conf
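To check just the redirect code without reading the whole file, something like this should work (a sketch; the placeholder pod name is the same as above):
# Confirm the generated server blocks now return 301 for the http -> https redirect
kubectl -n ingress-nginx exec nginx-ingress-controller-xxxxxxxxxx-xxxxx -- grep -n "return 301" /etc/nginx/nginx.conf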
These directions are for Azure AKS users, but the solution for Facebook / social property preview links showing as '308 Permanent Redirect' will probably work on any cloud provider (though it has not been tested); you would just need to change the way you log in / get your credentials, etc.
Thanks to Rico for the solution! Since this is only tested with Facebook, you may or may not want to go the ConfigMap application route (which Rico mentions above); the steps below walk through manually editing the ConfigMap as opposed to using kubectl apply -f to apply one saved locally.
Pick up your AZ credentials for your cluster (az login)
Assume the role for your cluster: az aks get-credentials --resource-group yourGroup --name your-cluster
Browse your cluster: az aks browse --resource-group yourGroup --name your-cluster
Navigate to the namespace containing your ingress-nginx containers (not the backend services, although they could be in the same NS).
On the left-hand navigation menu (just above Settings), find the 'ConfigMaps' tab and click it.
Edit the 'Data' element of the YAML and add the following line (note the quotes around both the name and the number in the key/value):
"data": {
"some-other-setting-here": "false",
"http-redirect-code": "301"
}
You will need a comma after each key/value line except the last.
Restart your nginx-controller pod by deleting it; make SURE you don't delete the deployment like I did.
If you want to be productive you can upgrade your nginx install (from helm) which will restart / re-create the container in the process by using:
helm upgrade ngx-ingress stable/nginx-ingress
Where ngx-ingress is the name of your helm install. Also note that using the '--reuse-values' flag will cause your upgrade to fail (re: https://github.com/helm/helm/issues/4337)
If you don't know the name you used for nginx when you installed it from Helm originally you can use helm list to find it.
Finally, to test and make sure your redirects are using the code from the ConfigMap, curl your http site with:
curl myhttpdomain.com
You should receive something like this:
```
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.15.3</center>
</body>
</html>
```
One important thing to note here: if you are making the change to a 301 redirect to try to fix the preview link for Facebook or one of the other social media properties (Twitter, etc.), then in all likelihood this will not fix links to any page / post that you have already shared, at least not right away.
The social properties all use intense caching to limit their resource usage, but you can check whether the above fixes your preview link issue by linking to a NEW page / post that you have not previously referenced.
Be Aware of Implications for 'POST'
So the major reason that nginx-ingress uses code 308 is that it keeps the 'body' / payload intact in cases where you are sending a POST request (as opposed to a normal GET request like you make with a browser, etc.).
For me this wasn't a concern, but if you are for whatever reason POSTing to the http address and expecting that to be redirected seamlessly, that will probably not work once you swap to the 301 redirect discussed in this post.
HOWEVER, if you are not expecting a seamless redirect when sending POST requests (I think most people are not; I know I am not), then I think this is the best way to fix the Facebook '308 Permanent Redirect' behavior.

Is it possible to use something like a proxy forward on an S3 website?

I'm planning to host an S3 website with the following DNS.
S3 bucket name: example.com
S3 endpoint: example.com.s3-website.amazonaws.com
I also want a separate manual page for my service:
S3 bucket name: manual
S3 endpoint: manual.s3-website.amazonaws.com
When I enter example.com/manual, it should forward all requests to my manual S3 bucket, but the URL should not change.
For example, when I access http://example.com/manual/en/index.html, it should show manual.s3-website.amazonaws.com/en/index.html, but the URL should not be changed.
I tried the redirection rules under the bucket's 'Static website hosting' properties, but that just redirects to my manual page (it changes the URL).
And I'm using Jekyll, but it doesn't support proxy forwarding, unlike nginx.
Is there any solution, guide, or example I can refer to?
This would be possible if you used CloudFront; you don't have to change your S3 setup:
create an origin for each bucket
create a second behavior for the manual path
And you're done. A trimmed sketch of such a distribution config follows below.
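For illustration, here is a heavily trimmed sketch of the relevant parts of a CloudFront DistributionConfig (the origin IDs are made up, and required fields such as CallerReference, Enabled, and the full DefaultCacheBehavior are omitted for brevity):
{
  "Origins": { "Quantity": 2, "Items": [
    { "Id": "main-site", "DomainName": "example.com.s3-website.amazonaws.com" },
    { "Id": "manual", "DomainName": "manual.s3-website.amazonaws.com" }
  ]},
  "DefaultCacheBehavior": { "TargetOriginId": "main-site" },
  "CacheBehaviors": { "Quantity": 1, "Items": [
    { "PathPattern": "/manual/*", "TargetOriginId": "manual" }
  ]}
}
One caveat: CloudFront forwards the full request path (/manual/en/index.html) to the manual origin, so the objects in that bucket may need to live under a manual/ prefix, or you would strip the prefix with a function at the edge.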

Creating a bucket using Google Cloud Platform Deployment Manager Template

I'm trying to create a bucket using GCP Deployment Manager. I already went through the Quickstart guide and was able to create a compute.v1.instance. Now I'm trying to create a bucket in Google Cloud Storage, but I'm unable to get anything other than 403 Forbidden.
This is what my template file looks like.
resources:
- type: storage.v1.bucket
  name: test-bucket
  properties:
    project: my-project
    name: test-bucket-name
This is what I'm calling
gcloud deployment-manager deployments create deploy-test --config deploy.yml
And this is what I'm receiving back
Waiting for create operation-1474738357403-53d4447edfd79-eed73ce7-cabd72fd...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation operation-1474738357403-53d4447edfd79-eed73ce7-cabd72fd: <ErrorValue
errors: [<ErrorsValueListEntry
code: u'RESOURCE_ERROR'
location: u'deploy-test/test-bucket'
message: u'Unexpected response from resource of type storage.v1.bucket: 403 {"code":403,"errors":[{"domain":"global","message":"Forbidden","reason":"forbidden"}],"message":"Forbidden","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/storage/v1/b/test-bucket"}'>]>
I have credentials set up, and I even created an account owner set of credentials (which can access everything), and I'm still getting this response.
Any ideas or good places to look? Is it my config or do I need to pass additional credentials in my request?
I'm coming from an AWS background, still finding my way around GCP.
Thanks
Bucket names on Google Cloud Platform need to be globally unique.
If you try to create a bucket with a name that is already used by somebody else (on another project), you will receive an error message like the one above. I would test by creating a new bucket with another name.
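For example, a minimal sketch of the same template with a name that is more likely to be unique (the prefix is just an example; any globally unique name works):
resources:
- type: storage.v1.bucket
  name: test-bucket
  properties:
    project: my-project
    # Bucket names are global across all of GCP; prefixing with your
    # project ID is a common way to avoid collisions
    name: my-project-test-bucket-name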