Firebase Hosting adding ~450ms latency overhead to Cloud Run - firebase-hosting

I have a Firebase Hosting site that maps the /api path to a Cloud Run app, as described in: https://firebase.google.com/docs/hosting/cloud-run
Comparing the latency of accessing my API endpoint via Firebase Hosting against accessing the Cloud Run app directly, Firebase Hosting adds an average of 450 ms. The app is hosted in us-west1, and I am located in the Seattle area.
% hyperfine --warmup 3 'curl -H "Authorization: $AUTH" https://staging.radiopaper.com/api/exchange'
Benchmark #1: curl -H "Authorization: $AUTH" https://staging.radiopaper.com/api/exchange
Time (mean ± σ): 660.9 ms ± 85.2 ms [User: 25.0 ms, System: 10.5 ms]
Range (min … max): 575.5 ms … 856.9 ms 10 runs
vs
% hyperfine --warmup 3 'curl -H "Authorization: $AUTH" https://api-server-klkjcchm4q-uc.a.run.app/api/exchange'
Benchmark #1: curl -H "Authorization: $AUTH" https://api-server-klkjcchm4q-uc.a.run.app/api/exchange
Time (mean ± σ): 212.5 ms ± 72.7 ms [User: 27.8 ms, System: 9.9 ms]
Range (min … max): 124.3 ms … 325.6 ms 11 runs
Is this the expected behavior? If so, it doesn't make much sense for me to run my Cloud Run app on the same domain as my static content.

This is definitely not expected behaviour, but it can happen for a number of reasons. One is that Firebase Hosting's origin has to reach back out to Cloud Run once it receives the request, so multiple hops are involved when routing through Firebase Hosting. We're working on improving this but don't have anything specific to share on that yet.
GCP Support here! To triage this we would need specific details to find the root cause, so I recommend raising a direct request with GCP Support. If you don't have a support contract, you can contact free Firebase support.
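Before filing a ticket, it can help to see which phase of the request carries the extra time. curl's built-in timing variables split a single request into phases; this is only a measurement sketch that reuses the endpoint and $AUTH variable from the question, so substitute your own values:

```shell
# Break one request into phases: DNS, TCP connect, TLS, time-to-first-byte, total.
# Running this against both the Hosting URL and the direct run.app URL shows
# whether the overhead is in connection setup or in the proxy hop to Cloud Run.
fmt='dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n'
curl -o /dev/null -s -w "$fmt" \
  -H "Authorization: $AUTH" \
  https://staging.radiopaper.com/api/exchange || true  # unreachable host is non-fatal in this sketch
```

If time_starttransfer is close to the total for the Hosting URL but connection setup is similar for both, the gap is in the origin's hop to Cloud Run rather than in your network path.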

Related

Cloud Run + Firebase Hosting region rewrites issue

I'm trying to use Firebase Hosting as a CDN in front of Cloud Run. Yesterday I was testing something in region europe-west1 and it went well. Today I'm trying to do the same for region europe-west4, and I'm getting an error that this region is not supported.
I switched back to europe-west1 and it worked.
Is this a bug, or is region europe-west4 simply not supported?
=== Deploying to 'xxxxxxxx'...
i deploying hosting
Error: HTTP Error: 400, Cloud Run region `europe-west4` is not supported.
"rewrites": [
  {
    "source": "**",
    "run": {
      "serviceId": "web-client",
      "region": "europe-west4"
    }
  }
],
The same happens for the new asia-southeast1 region:
Error: HTTP Error: 400, Cloud Run region `asia-southeast1` is not supported.
Here are the details regarding the rewrite behavior:
Firebase Hosting's origin is in us-central1, so when deploying Cloud Run it is recommended to select the us-central1 region for a better First Contentful Paint score and quicker page loads. This, however, defeats the purpose of having a nearby region (really unfortunate for Google fanboys).
Example: if you are located in India, your nearest available Cloud Run region is asia-southeast1 (Singapore), but you can't select asia-southeast1.
The request path would go like this:
you → India (CDN) → USA (Firebase) → Singapore (Cloud Run + async call to Firestore India) → USA → CDN → you (which is really bad in terms of latency)
versus:
you → India (CDN) → USA (Firebase) → us-central1 (Cloud Run + async call to Firestore India) → USA → CDN → you
With the second path the static page loads fast, but dynamic Firestore data on the web app still loads with really bad latency, so you would have to select us-central1 for Firestore as well. This makes no use of GCP products in your local region; it is strange that Firebase Hosting origins are not available in at least the America, Europe, and Asia-Pacific zones.
Conclusion (as of this date):
The Cloud Run region rewrite issue for Firebase Hosting exists for many regions, but for optimal page load results you should select us-central1, and that is the real problem compared to the rewrite issue. To avoid Firestore latency for non-US users, set cache-control headers from your Cloud Run service or Cloud Function so that data is cached at the CDN edge near the user. (You can't use the Firebase web SDK for this, since CDN caching isn't possible through the SDK; you have to go through a Cloud Function or Cloud Run.)
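Whether the CDN can actually cache a rewritten response is easy to check from a terminal. The URL below is hypothetical; the point is that the Cloud Run response must carry a Cache-Control header that permits shared caching:

```shell
# Hypothetical site URL; substitute your own Hosting domain.
# The Cloud Run service should respond with something like:
#   Cache-Control: public, max-age=300, s-maxage=600
# or the Hosting CDN will not cache the rewritten response.
curl -sI https://your-site.web.app/api/data | grep -i '^cache-control' \
  || echo 'no Cache-Control header (response will not be CDN-cached)'
```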
Firebase Hosting to Cloud Run rewrite availability (as of Aug 31, 2020)
Available:
us-central1,
us-east1,
asia-northeast1,
europe-west1
Not available:
asia-east1,
europe-north1,
europe-west4,
us-east4,
us-west1,
asia-southeast1
Please file a feature request for Firebase rewrite availability if it is not available in your Cloud Run region; Cloud Run rewrites and Firebase Hosting origins are not yet available across the America, Europe, and Asia-Pacific zones.
FYI: Cloud Firestore multi-region is also not available for the Asia region, in case multi-region Firestore was your fix for Firebase Hosting and Cloud Run being locked to us-central1.
See the Cloud Run region availability list.
(Please comment if you get the rewrite access to any of the above mentioned region)
I actually managed to figure out a way to "fix" this. I changed my regions to europe-west4 instead of my previous europe-west1 and that "fixed" my deployment problem.

Can a private Cardano network be created?

In Ethereum we can use geth to create a private network, for example by defining a genesis block with puppeth and then creating nodes.
Is there an equivalent of geth in Cardano and can we create private networks?
I don't know much about Ethereum, but to set up a private network for Cardano you need cardano-sl. Set it up on your local machine or a VPS according to these instructions: https://github.com/input-output-hk/cardano-sl/blob/develop/docs/how-to/build-cardano-sl-and-daedalus-from-source-code.md . After downloading and building the binaries (via either nix or stack), connect your node to mainnet or testnet as required; follow this link for that: https://github.com/input-output-hk/cardano-sl/blob/develop/docs/how-to/connect-to-cluster.md .
Your node should then start downloading blocks, and it will take some time to fully sync. You can check synchronization progress with a simple curl command: curl -X GET https://localhost:8090/api/v1/node-info . Note that you need to provide certificates with the request, or skip verification by passing the -k option; see the API reference for complete info: https://cardanodocs.com/technical/wallet/api/v1/#
Once your node is in sync, you can call the APIs to create your wallet and accounts and make ADA transactions.
I skipped some steps, but I hope this still helps many people get going.
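To make the sync check easier to repeat, it can be wrapped in a small shell function. The syncProgress field name is an assumption based on the v1 API docs, so adjust it to whatever your node actually returns:

```shell
# One-shot sync check against the local wallet API; -k skips certificate
# verification as described above (use proper certs outside of testing).
check_sync() {
  curl -sk https://localhost:8090/api/v1/node-info \
    | grep -o '"syncProgress"[^,}]*' \
    || echo "node not responding on :8090"
}
check_sync   # rerun (or wrap in a loop with sleep) until it reports full sync
```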

How to configure the rate filter in snort for windows environment?

I have installed and configured Snort in a Windows environment. According to the documentation, threshold is deprecated and I have to use other filters. I need to use rate_filter in my application, but I don't know how to set it up in Snort.
I have read all the documentation and internet resources, and I have added the example rate_filter configurations directly to my snort.conf file, but I still can't get what I want.
Am I missing something?
You may need to share your filter to get the best help here. Here is an example layout:
Example 1: allow a maximum of 100 connection attempts per second from any one IP address, and block further connection attempts from that IP address for 10 seconds:
rate_filter \
    gen_id 135, sig_id 1, \
    track by_src, \
    count 100, seconds 1, \
    new_action drop, timeout 10
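Note that rate_filter does not match traffic by itself; it throttles events generated by an existing rule, keyed by gen_id/sig_id. A sketch of such a pairing, with a made-up rule and sid for illustration:

```
# In your rules file: a hypothetical rule with sid 1000001
alert tcp any any -> $HOME_NET 22 (msg:"SSH connection attempt"; flags:S; sid:1000001; rev:1;)

# In snort.conf: throttle that rule's events (gen_id 1 is the generator
# for text rules; sig_id must match the rule's sid)
rate_filter \
    gen_id 1, sig_id 1000001, \
    track by_src, \
    count 5, seconds 60, \
    new_action drop, timeout 300
```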

How to use alchemyAPI news data in Bluemix Node-RED?

I am using the Bluemix environment and the Node-RED flow editor. While trying to use the Feature Extract node that comes built into Node-RED for the AlchemyAPI service, I am finding it hard to use.
I tried connecting it to the HTTP request node, HTTP response node, etc., but with no result. Maybe I am not wiring the nodes correctly?
I need this flow to get Twitter posts and AlchemyAPI news data for specific companies, give each item a sentiment score, and store the results in IBM HDFS.
Here is the code:
[{"id":"8bd03bb4.742fc8","type":"twitter in","z":"5fa9e76b.a05618","twitter":"","tags":"Ashok Leyland, Tata Communication, Welspun, HCL Info,Fortis H, JSW Steel, Unichem Lab, Graphite India, D B Realty, Eveready Ind, Birla Corporation, Camlin Fine Sc, Indian Economy, Reserve Bank of India, Solar Power, Telecommunication, Telecom Regulatory Authority of India","user":"false","name":"Tweets","topic":"tweets","x":93,"y":92,"wires":[["f84ebc6a.07b14"]]},
{"id":"db13f5f.f24ec08","type":"ibm hdfs","z":"5fa9e76b.a05618","name":"Dec12Alchem","filename":"/12dec_alchem","appendNewline":true,"overwriteFile":false,"x":564,"y":226,"wires":[]},
{"id":"4a1ed314.b5e12c","type":"debug","z":"5fa9e76b.a05618","name":"","active":true,"console":"false","complete":"false","x":315,"y":388,"wires":[]},
{"id":"f84ebc6a.07b14","type":"alchemy-feature-extract","z":"5fa9e76b.a05618","name":"TrailRun","page-image":"","image-kw":"","feed":true,"entity":true,"keyword":true,"title":true,"author":"","taxonomy":true,"concept":true,"relation":"","pub-date":"","doc-sentiment":true,"x":246,"y":160,"wires":[["c0d3872.f3f2c78"]]},
{"id":"c0d3872.f3f2c78","type":"function","z":"5fa9e76b.a05618","name":"To mark tweets","func":"msg.payload={tweet: msg.payload,score:msg.features};\nreturn msg;\n","outputs":1,"noerr":0,"x":405,"y":217,"wires":[["db13f5f.f24ec08","4a1ed314.b5e12c"]]},
{"id":"4181cf8.fbe7e3","type":"http request","z":"5fa9e76b.a05618","name":"News","method":"GET","ret":"obj","url":"https://gateway-a.watsonplatform.net/calls/data/GetNews?apikey=&outputMode=json&start=now-1d&end=now&count=1&q.enriched.url.enrichedTitle.relations.relation=|action.verb.text=acquire,object.entities.entity.type=Company|&return=enriched.url.title","x":105,"y":229,"wires":[["f84ebc6a.07b14"]]},
{"id":"53cc794e.ac3388","type":"inject","z":"5fa9e76b.a05618","name":"GetNews","topic":"News","payload":"","payloadType":"string","repeat":"","crontab":"","once":false,"x":75,"y":379,"wires":[["4181cf8.fbe7e3"]]}]
First you have to bind an AlchemyAPI service instance to your Node-RED application.
Then you can develop your application. Here is an example using the HTTP and Feature Extract nodes:
Here is the node flow for this basic sample if you want to try:
[{"id":"e191029.f1e6f","type":"function","z":"2fc2a93f.d03d56","name":"","func":"msg.payload = msg.payload.url;\nreturn msg;","outputs":1,"noerr":0,"x":276,"y":202,"wires":[["12082910.edf7d7"]]},{"id":"12082910.edf7d7","type":"alchemy-feature-extract","z":"2fc2a93f.d03d56","name":"","page-image":"","image-kw":"","feed":"","entity":true,"keyword":true,"title":true,"author":true,"taxonomy":true,"concept":true,"relation":true,"pub-date":true,"doc-sentiment":true,"x":484,"y":203,"wires":[["8a3837f.f75c7c8","d164d2af.2e9b3"]]},{"id":"8a3837f.f75c7c8","type":"debug","z":"2fc2a93f.d03d56","name":"Alchemy Debug","active":true,"console":"true","complete":"true","x":736,"y":156,"wires":[]},{"id":"fb988171.04678","type":"http in","z":"2fc2a93f.d03d56","name":"Test Alchemy","url":"/test_alchemy","method":"get","swaggerDoc":"","x":103.5,"y":200,"wires":[["e191029.f1e6f"]]},{"id":"d164d2af.2e9b3","type":"http response","z":"2fc2a93f.d03d56","name":"End Test Alchemy","x":749,"y":253,"wires":[]}]
You can use curl to test it, for example:
curl -G http://yourapp.mybluemix.net/test_alchemy?url=<your url here>
or use your browser as well:
http://yourapp.mybluemix.net/test_alchemy?url=http://myurl_to_test_alchemy
You can see the results in the Node-RED debug tab, or you can see them in the application logs:
$ cf logs yourapp --recent

Will Mongrel be blocked when uploading a huge file?

I believe that Mongrel is a single-threaded web server, so I assumed it would be blocked while a user uploads a huge file.
However, I did a test today, and that seems not to be true.
I uploaded a file with curl like this:
time curl -k -F myfile=@/tmp/CGI.19974.3 -H 'LOGIN_NAME:admin' -H 'PASSWORD:pass' http://10.32.119.155:3000 -v
Here is the result:
real 6m38.756s
user 0m0.232s
sys 0m9.561s
You can see that the upload took over six minutes, but during that period Mongrel worked fine and handled other requests correctly.
So can I say that another thread handles the upload?
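One way to probe this directly is to start the large upload in the background and time a small request against the same server while it runs. This is only a sketch: the IP, port, and credentials are the ones from the question, and a connect timeout is added so the commands fail fast if the host is unreachable:

```shell
# Start the slow upload in the background...
curl -sk --connect-timeout 5 -F myfile=@/tmp/CGI.19974.3 \
  -H 'LOGIN_NAME:admin' -H 'PASSWORD:pass' \
  http://10.32.119.155:3000 -o /dev/null &
upload_pid=$!

# ...and time a cheap request against the same server while it runs.
# If Mongrel were blocked for the entire upload, this would take minutes,
# not milliseconds.
time curl -sk --connect-timeout 5 http://10.32.119.155:3000 -o /dev/null || true

wait "$upload_pid" || true   # ignore the background curl's exit status
```

If the second request returns quickly, the server is not blocked for the duration of the upload; it may be buffering the request body or handling I/O outside the single request-processing thread.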