Cloud Run + Firebase Hosting region rewrites issue

I'm trying to use Firebase Hosting as a CDN in front of Cloud Run. Yesterday I was testing something in region europe-west1 and it went well. Today I'm trying to do the same for region europe-west4 and I'm getting an error that this region is not supported.
I switched back to europe-west1 and it worked.
Is this a bug, or is europe-west4 simply not supported?
=== Deploying to 'xxxxxxxx'...
i deploying hosting
Error: HTTP Error: 400, Cloud Run region `europe-west4` is not supported.
"rewrites": [
{
"source": "**",
"run": {
"serviceId": "web-client",
"region": "europe-west4"
}
}
],
The same happens for the new asia-southeast1 region:
Error: HTTP Error: 400, Cloud Run region `asia-southeast1` is not supported.

Based on the information available, here are the details regarding rewrites:
Firebase Hosting originates requests in us-central1, so when deploying to Cloud Run it's recommended to select the us-central1 region for a better First Contentful Paint score and quicker loading of your website. Unfortunately, this negates the whole point of having a nearby region available.
Example: if you are located in India, your nearest available Cloud Run region is asia-southeast1 (Singapore), but you can't select asia-southeast1.
The request path would go like this:
you → India (CDN) → USA (Firebase) → Singapore (Cloud Run + async call to Firestore India) → USA → CDN → you (which is really bad in terms of latency), versus:
you → India (CDN) → USA (Firebase) → us-central1 (Cloud Run + async call to Firestore India) → USA → CDN → you
(The static page will load fast, but dynamic Firestore data on the web app will still load with very high latency, so we should select us-central1 for Firestore as well. This makes no use of the GCP products in your local region; it's really strange that Firebase Hosting isn't served from at least one region each in the Americas, Europe, and Asia-Pacific.)
Conclusion (as of this date): the Cloud Run rewrite issue exists for many Firebase Hosting regions, but for optimal page-load results we should select us-central1 anyway, and that is the real problem compared to the rewrite issue. To avoid Firestore latency for non-US users, set Cache-Control headers from Cloud Run / Cloud Functions so responses are cached at the CDN edge near the user (see the sketch below). We can't use the Firebase web SDK for this, since CDN caching isn't possible through the SDK; the data has to be served through Cloud Functions for Firebase or Cloud Run.
Firebase Hosting to Cloud Run rewrite availability (as of Aug 31, 2020)
Available:
us-central1,
us-east1,
asia-northeast1,
europe-west1
Not available:
asia-east1,
europe-north1,
europe-west4,
us-east4,
us-west1,
asia-southeast1
Please file a feature request for Firebase rewrite availability if it's not available in your Cloud Run region; rewrites are not even available for at least one region each in the Americas, Europe, and Asia-Pacific.
FYI: Cloud Firestore multi-region is also not available for Asia, in case multi-region Firestore was going to be your fix for Firebase Hosting and Cloud Run being effectively locked to us-central1.
Cloud Run region availability
(Please comment if you get rewrite access to any of the above-mentioned regions.)

I actually managed to figure out a way to "fix" this: I changed my region to europe-west4 instead of my previous europe-west1, and that "fixed" my deployment problem.

Related

Google Data Transfer from S3-compatible sources MissingRegion

Mainly wondering if anyone has encountered this before (and how to resolve it).
A transfer job from an S3-compatible source to Cloud Storage fails: it is unable to list objects from the source.
The agent logs show:
W0111 06:49:13.002952 8 helpers.go:465 got unknown FailureType: UNKNOWN_FAILURE, MissingRegion: could not find region configuration
I have been following the guide https://cloud.google.com/storage-transfer/docs/s3-compatible
The source is a non-Amazon, S3-compatible storage service. It has been tested with other S3 client tools (including gsutil).
It appears the AWS SDK for Go is in use:
https://aws.github.io/aws-sdk-go-v2/docs/configuring-sdk/
The SDK requires a region to operate but does not have a default region, while S3-compatible storage services (MinIO, Hitachi Content Platform) do not require a region at all.
I have tried passing the AWS_REGION and AWS_DEFAULT_REGION environment variables to the agent running in Docker, and mounting ~/.aws/config into the container (see the example after this list).
The SDK may require both of these:
Set the AWS_REGION environment variable to the default region.
Set the region explicitly using config.WithRegion as an argument to config.LoadDefaultConfig when loading configuration.
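For reference, a minimal ~/.aws/config that supplies a default region would look like the sketch below; the region value is an arbitrary placeholder, since the S3-compatible source ignores it:

[default]
region = us-east-1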
Thanks

How do I Re-route Ghost Blog Admin URL without modifying the API Address?

Ghost blog platform has a setting that allows you to change the admin panel login location (which starts as: https://whateveryoursiteis.com/ghost). Methodology / docs for changing that setting can be found here: https://ghost.org/docs/config/#admin-url
However, when using the above methodology, the API URL that is used for search etc. is ALSO modified, meaning all requests to the Ghost API will be forwarded to the alternate domain (not just admin access).
My question is: what is the best way to redirect the admin URL to a different domain/protocol while allowing the API URL used by Ghost to remain the same?
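(For reference, the setting in question lives in Ghost's configuration file, e.g. config.production.json; a minimal sketch with placeholder domains:)

{
  "url": "https://whateveryoursiteis.com",
  "admin": {
    "url": "https://youradminsite.com"
  }
}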
More background.
We are running Ghost on top of GKE (Google Kubernetes Engine) with a multi-region ingress, which allows us to dump our Cloud SQL DB down to a SQLite file and build that database into our production Docker containers, which are then deployed to the Kubernetes nodes fronted by the GCE-Ingress load balancer.
Since we need to rebuild that database/container on content change (not just on code change), we need a separate admin URL backed by Cloud SQL where we can persist/modify our data, which then triggers the rebuild in our CI pipeline via Ghost webhooks.
Another related question might be:
Is it possible to use standard Ghost redirects (created via https://docs.ghost.org/concepts/redirects/) to redirect the admin panel URL (i.e. https://whateveryoursiteis.com/ghost) to a different domain (i.e. https://youradminsite.com/ghost)?
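For context, Ghost's redirects feature is driven by a redirects.json file of from/to rules; a hypothetical entry targeting the admin path might look like the sketch below. Note these rules are path-based, so whether a fully qualified destination domain works here is exactly the open question:

[
  {
    "from": "^\\/ghost\\/?$",
    "to": "https://youradminsite.com/ghost",
    "permanent": true
  }
]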
Another Related GKE / GCE-Ingress Question:
Is it possible to create 301 redirects natively using Kubernetes GCE-Ingress on GKE, without adding an nginx container etc.?
That will be my first attempt after posting this, but I figured either way maybe it helps another Ghost platform fan down the line someplace. I will attempt to respond back as I find answers to those questions (assuming someone doesn't beat me to it!).
Regarding your question whether it's possible to create 301 redirects without adding an nginx container: I can suggest using Istio; find more information about traffic routing here.
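As a rough illustration of that suggestion (not tested against this exact setup), an Istio VirtualService can issue the 301 itself; the hostnames below are placeholders:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ghost-admin-redirect
spec:
  hosts:
    - whateveryoursiteis.com
  http:
    - match:
        - uri:
            prefix: /ghost
      redirect:
        authority: youradminsite.com
        redirectCode: 301

One caveat: a prefix match on /ghost would also catch /ghost/api, so a real setup would need a higher-priority route for the API paths.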
OK, so as it turns out, the Ghost team currently has things set up to point API connections at the admin URL. So if you change your admin URL, expect your clients to attempt to connect to that URL.
I am going to raise the possibility of splitting these off as a feature request over on the Ghost forums (as soon as I get out from under pre-launch hell on the current project).
Here's the official Ghost response:
What is referred to as the 'official docker image' is not something that we as a Ghost team support.
The APIs are indeed hosted under the same URL as the admin, and that's by design and not really a bug. Introducing configuration options for each API a Ghost instance hosts would be a feature and should be discussed at our forum first 👍 I think it's a nice idea to be able to serve APIs from a different host, but it's not something that is within our priorities at the moment.
In case you need more granular handling of the admin site, you could introduce it at your proxy level and, for example, handle requests that are coming to /ghost/api with a different set of rules.
See the full discussion over here on the TryGhost GitHub:
https://github.com/TryGhost/Ghost/issues/10441#issuecomment-460378033
I haven't looked into what it would take to implement the feature, but the suggestion of proxying the request could work... if only I didn't need to run on GKE multi-region (which requires GCE-Ingress, which doesn't support redirection, hah!). This would be relatively easy to solve with an nginx ingress.
Hopefully this helps someone; I will update as I work through the process. As of now I solved it by dumping my GCP Cloud SQL database down to a SQLite DB file at build time (thereby keeping my admin instance clean and separate from the API endpoint, which for me remains the same URL).
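For anyone who can front Ghost with nginx (e.g. an nginx ingress instead of GCE-Ingress), a minimal sketch of the proxy-level split the Ghost team describes might look like this; the upstream name and admin domain are placeholders:

# Keep the Ghost API reachable on the public site (longest prefix wins)...
location ^~ /ghost/api/ {
    proxy_pass http://ghost-backend;
    proxy_set_header Host $host;
}
# ...but send the admin panel itself off to the separate admin domain.
location ^~ /ghost/ {
    return 301 https://youradminsite.com$request_uri;
}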

Connection to Couchbase with Flutter

I am new to Flutter and Couchbase. I am trying to connect to the sample bucket travel-sample using the fluttercouch plugin, but I get the error "unable to set target endpoint to ws//10.0.2.2:8091/travel-sample" after setting the endpoint to ws//10.0.2.2:8091/travel-sample. Could someone explain to me what the endpoint should be, and whether any changes are needed elsewhere? I am testing against the master repository of the fluttercouch plugin. Here are the main.dart and the bucket overview.
Couchbase Lite (the library underlying the FlutterCouch implementation) cannot sync directly with Couchbase Server. A Couchbase Sync Gateway has to be deployed with the following settings to allow syncing between the server and the mobile database:
"enable_shared_bucket_access": true,
"import_docs": "continuous",
You can find further documentation about Couchbase Sync Gateway in the Couchbase Documentation.
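To put those two settings in context, a minimal Sync Gateway config (sync-gateway-config.json) might look like the following sketch; the server address and credentials are placeholder assumptions:

{
  "databases": {
    "travel-sample": {
      "server": "http://localhost:8091",
      "bucket": "travel-sample",
      "username": "sync_gateway_user",
      "password": "password",
      "enable_shared_bucket_access": true,
      "import_docs": "continuous"
    }
  }
}

Once Sync Gateway is running, the replication endpoint targets Sync Gateway's port (4984 by default) rather than Couchbase Server's 8091, so from the Android emulator it would look like ws://10.0.2.2:4984/travel-sample.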
The primary point of confusion is that you're trying to connect the fluttercouch plugin directly to Couchbase Server, while it's designed for Couchbase Lite, which is what runs on a mobile device. I don't have enough experience with Flutter to tell you what that endpoint should be, but it looks like you're targeting the wrong thing at the moment.

How to use alchemyAPI news data in Bluemix Node-RED?

I am using the Bluemix environment and the Node-RED flow editor. While trying to use the Feature Extract node that comes built into Node-RED for the AlchemyAPI service, I am finding it hard to use.
I tried connecting it to the HTTP request node, HTTP response node, etc., but with no result. Maybe I am not completing the connection procedure correctly?
I need this flow to fetch tweets and news (via the AlchemyAPI News service) for specific companies, assign each item a sentiment score, and store the results in IBM HDFS.
Here is the code:
[{"id":"8bd03bb4.742fc8","type":"twitter
in","z":"5fa9e76b.a05618","twitter":"","tags":"Ashok Leyland, Tata
Communication, Welspun, HCL Info,Fortis H, JSW Steel, Unichem Lab,
Graphite India, D B Realty, Eveready Ind, Birla Corporation, Camlin
Fine Sc, Indian Economy, Reserve Bank of India, Solar Power,
Telecommunication, Telecom Regulatory Authority of
India","user":"false","name":"Tweets","topic":"tweets","x":93,"y":92,"wires":[["f84ebc6a.07b14"]]},{"id":"db13f5f.f24ec08","type":"ibm
hdfs","z":"5fa9e76b.a05618","name":"Dec12Alchem","filename":"/12dec_alchem","appendNewline":true,"overwriteFile":false,"x":564,"y":226,"wires":[]},{"id":"4a1ed314.b5e12c","type":"debug","z":"5fa9e76b.a05618","name":"","active":true,"console":"false","complete":"false","x":315,"y":388,"wires":[]},{"id":"f84ebc6a.07b14","type":"alchemy-feature-extract","z":"5fa9e76b.a05618","name":"TrailRun","page-image":"","image-kw":"","feed":true,"entity":true,"keyword":true,"title":true,"author":"","taxonomy":true,"concept":true,"relation":"","pub-date":"","doc-sentiment":true,"x":246,"y":160,"wires":[["c0d3872.f3f2c78"]]},{"id":"c0d3872.f3f2c78","type":"function","z":"5fa9e76b.a05618","name":"To
mark tweets","func":"msg.payload={tweet:
msg.payload,score:msg.features};\nreturn
msg;\n","outputs":1,"noerr":0,"x":405,"y":217,"wires":[["db13f5f.f24ec08","4a1ed314.b5e12c"]]},{"id":"4181cf8.fbe7e3","type":"http
request","z":"5fa9e76b.a05618","name":"News","method":"GET","ret":"obj","url":"https://gateway-a.watsonplatform.net/calls/data/GetNews?apikey=&outputMode=json&start=now-1d&end=now&count=1&q.enriched.url.enrichedTitle.relations.relation=|action.verb.text=acquire,object.entities.entity.type=Company|&return=enriched.url.title","x":105,"y":229,"wires":[["f84ebc6a.07b14"]]},{"id":"53cc794e.ac3388","type":"inject","z":"5fa9e76b.a05618","name":"GetNews","topic":"News","payload":"","payloadType":"string","repeat":"","crontab":"","once":false,"x":75,"y":379,"wires":[["4181cf8.fbe7e3"]]}]
First you have to bind an AlchemyAPI service instance to your Node-RED application.
Then you can develop your application; here is an example using the HTTP and Feature Extract nodes.
Here is the node flow for this basic sample if you want to try:
[{"id":"e191029.f1e6f","type":"function","z":"2fc2a93f.d03d56","name":"","func":"msg.payload = msg.payload.url;\nreturn msg;","outputs":1,"noerr":0,"x":276,"y":202,"wires":[["12082910.edf7d7"]]},{"id":"12082910.edf7d7","type":"alchemy-feature-extract","z":"2fc2a93f.d03d56","name":"","page-image":"","image-kw":"","feed":"","entity":true,"keyword":true,"title":true,"author":true,"taxonomy":true,"concept":true,"relation":true,"pub-date":true,"doc-sentiment":true,"x":484,"y":203,"wires":[["8a3837f.f75c7c8","d164d2af.2e9b3"]]},{"id":"8a3837f.f75c7c8","type":"debug","z":"2fc2a93f.d03d56","name":"Alchemy Debug","active":true,"console":"true","complete":"true","x":736,"y":156,"wires":[]},{"id":"fb988171.04678","type":"http in","z":"2fc2a93f.d03d56","name":"Test Alchemy","url":"/test_alchemy","method":"get","swaggerDoc":"","x":103.5,"y":200,"wires":[["e191029.f1e6f"]]},{"id":"d164d2af.2e9b3","type":"http response","z":"2fc2a93f.d03d56","name":"End Test Alchemy","x":749,"y":253,"wires":[]}]
You can use curl to test it, for example:
curl -G http://yourapp.mybluemix.net/test_alchemy?url=<your url here>
or use your browser as well:
http://yourapp.mybluemix.net/test_alchemy?url=http://myurl_to_test_alchemy
You can see the results in the Node-RED debug tab, or you can see them in the application logs:
$ cf logs yourapp --recent

How do you use storage service in Bluemix?

I'm trying to store some data on Bluemix. I searched many wiki pages but couldn't work out how to proceed. Can anyone tell me how to store images and files on Bluemix through code in any language (Java, Node.js)?
You have several options at your disposal for storing files for your app. None of them involves the app container's file system, as that file space is ephemeral and will be recreated from the droplet each time a new instance of your app is created.
You can use services like MongoLab, Cloudant, Object Storage, and Redis to store all kinds of blob data.
Assuming that you're using Bluemix to write a Cloud Foundry application, another option is sshfs. At your app's startup time, you can use sshfs to create a connection to a remote server that is mounted as a local directory. For example, you could create a ./data directory that points to a remote SSH server and provides a persistent storage location for your app.
Here is a blog post explaining how this strategy works and a source repo showing it used to host a Wordpress blog in a Cloud Foundry app.
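For illustration, the startup mount is essentially a single command along these lines (host, path, and user are placeholders):

mkdir -p ./data
sshfs appuser@storage.example.com:/var/appdata ./data

Anything the app then writes under ./data lives on the remote server and survives instance restarts.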
Note that as others have suggested, there are a number of services for storing object data. Go to the Bluemix Catalog [1] and select "Data Management" in the left hand margin. Each of those services should have sufficient documentation to get you started, including many sample applications and tutorials. Just click on a service tile, and then click on the "View Docs" button to find the relevant documentation.
[1] https://console.ng.bluemix.net/?ace_base=true/#/store/cloudOEPaneId=store
Check out https://www.ng.bluemix.net/docs/#services/ObjectStorageV2/index.html#gettingstarted. The storage service in Bluemix is OpenStack Swift running in SoftLayer. Check out this page (http://docs.openstack.org/developer/swift/) for docs on Swift.
Here is a page that lists some clients for Swift.
https://wiki.openstack.org/wiki/SDKs
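As a quick illustration using one of those clients, here is a minimal python-swiftclient sketch; the auth URL, tenant, and credentials are placeholders you would take from your service's VCAP_SERVICES:

import swiftclient

# Placeholder Keystone v2 credentials; copy the real values from VCAP_SERVICES.
conn = swiftclient.Connection(
    authurl="https://identity.example.com/v2.0",
    user="your-user-id",
    key="your-password",
    tenant_name="your-project-id",
    auth_version="2",
)

conn.put_container("images")  # create a container
with open("photo.jpg", "rb") as f:  # upload a file into it
    conn.put_object("images", "photo.jpg", contents=f, content_type="image/jpeg")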
From what I can find, there was a service named Object Storage, created by IBM, but at the moment I can't see it in the Bluemix Catalog. I guess they pulled it back and will publish a new service in the future.
Be aware that the object store in Bluemix is now S3-compatible, so for instance you can use boto or boto3 (for the Python folks); it is fully API-compatible.
See some examples here: https://ibm-public-cos.github.io/crs-docs/crs-python.html
This script recursively lists all objects in all buckets:
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
s3 = boto3.resource('s3', endpoint_url=endpoint)

for bucket in s3.buckets.all():
    print(bucket.name)
    for obj in bucket.objects.all():
        print(" - %s" % obj.key)
If you want to specify your credentials explicitly, this would be:
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
s3 = boto3.resource('s3', endpoint_url=endpoint,
                    aws_access_key_id='<access key generated on your Bluemix dashboard>',
                    aws_secret_access_key='<the secret key that comes with your access key>',
                    use_ssl=True)

for bucket in s3.buckets.all():
    print(bucket.name)
    for obj in bucket.objects.all():
        print(" - %s" % obj.key)
If you want to create a "hello.txt" file in a new bucket:
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
s3 = boto3.resource('s3', endpoint_url=endpoint,
                    aws_access_key_id='<your access key>',
                    aws_secret_access_key='<your secret key>', use_ssl=True)

my_bucket = s3.create_bucket(Bucket='my-new-bucket')
s3.Object(my_bucket.name, 'hello.txt').put(Body=b"I'm a test file")
If you want to upload a file into a new bucket:

import time
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
s3 = boto3.resource('s3', endpoint_url=endpoint,
                    aws_access_key_id='<your access key>',
                    aws_secret_access_key='<your secret key>', use_ssl=True)

my_bucket = s3.create_bucket(Bucket='my-new-bucket')
timestampstr = str(time.time())  # the original 'timestamp' variable was undefined
my_bucket.upload_file('<location of your file>', '<your file name>',
                      ExtraArgs={"ACL": "public-read",
                                 "Metadata": {"METADATA1": "resultat", "METADATA2": "1000",
                                              "gid": "blabala000", "timestamp": timestampstr}})