How can I move my service worker to Cloudflare Workers? - progressive-web-apps

I'm using a service worker with Workbox for my PWA and it's working great. The only problem I'm facing is the performance of the website. I think it's because of Cloudflare, therefore I decided to move my service worker to Cloudflare Workers. Now I don't know how to do that. Is anyone using Cloudflare Workers for a PWA?

Yeah, it should be possible with the serverless model, and even with the new Cloudflare Pages. Using Workers, you want something like:
2020-06-01 - Cloudflare Workers Developer Docs - Deploy a React app with create‑react‑app
This post is somewhat out of date and has dead links, but it also goes over Worker PWAs:
2018-11-23 - Cloudflare Blog - Serverless Progressive Web Apps using React with Cloudflare Workers
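To give a feel for the shape of it, here is a minimal, hypothetical Worker that does cache-first handling at the edge, roughly what a Workbox runtime-caching strategy does in the browser (the caching policy here is an assumption, not something from the linked posts):

    addEventListener('fetch', function (event) {
      event.respondWith(handleRequest(event));
    });

    async function handleRequest(event) {
      // caches.default is the Cloudflare edge cache for your zone
      const cache = caches.default;
      let response = await cache.match(event.request);
      if (!response) {
        // Cache miss: fetch from the origin, then store a copy
        response = await fetch(event.request);
        if (event.request.method === 'GET' && response.ok) {
          event.waitUntil(cache.put(event.request, response.clone()));
        }
      }
      return response;
    }

Keep in mind a Worker runs at the edge, not in the browser, so it can take over CDN-side caching logic but not offline support or the install prompt; for a full PWA you still register a service worker on the page.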

Related

Spring Cloud Gateway blocking requests for route discovery

I'm using Spring Cloud Gateway from spring-cloud-starter-gateway version 2.1.0.RELEASE and I need to understand why Gateway is blocking requests to perform the DiscoveryClientRouteDefinitionLocator process.
Spring Cloud Version: Greenwich.RELEASE.
I have two environments: staging and production.
In production we have a working gateway with the following latency for /actuator/health call:
I was investigating why those spikes occur on a simple health call and I figured out that the gateway sometimes blocks requests (even health or real microservice calls) to perform route discovery for all my microservices.
We use Consul as our discovery server and I tried to test this latency in my staging environment (with far fewer hardware resources for Consul). The impact of this blocking is clear:
After improving the Consul hardware resources we have no more spikes, but the latency is still not perfect (with minor spikes while discovering all routes) for a health call:
I need to ask: why is Spring Cloud Gateway blocking requests even though it has a caching feature? Shouldn't this process run in the background? What am I doing wrong? Is it really an issue with Spring Cloud Gateway?
Thank you.
As discussed here, previous versions of Spring Cloud Gateway used a blocking discovery client.
Using a version newer than 2.1.5.RELEASE will give you a more asynchronous gateway that doesn't make many blocking requests.
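As an illustration, the fix is essentially a dependency bump. A Maven sketch (the release train below is an assumption; any BOM that ships Gateway newer than 2.1.5.RELEASE should do):

    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.springframework.cloud</groupId>
          <artifactId>spring-cloud-dependencies</artifactId>
          <!-- Hoxton.RELEASE ships Spring Cloud Gateway 2.2.x -->
          <version>Hoxton.RELEASE</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
      </dependencies>
    </dependencyManagement>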

SSH to Bluemix from BOSH and capture metrics

Has anyone tried connecting to IBM Bluemix using the bosh CLI? I am seeing performance issues in my requests and was going through this article on Cloud Foundry. I am planning to SSH into the gorouter and monitor the gorouter's CPU utilization.
Can someone recommend any way to capture the following metrics from Bluemix:
CPU utilization
Latency
Requests per second
What do you mean by "connecting to IBM bluemix using bosh-cli"?
If you mean the publicly available IBM Cloud (formerly Bluemix) at https://console.bluemix.net/, it's not possible. The bosh CLI is for maintaining the platform, i.e. Cloud Foundry and potentially other deployments, but not your apps.
If you have a private installation you might check the metrics that the system provides. Info here: https://docs.cloudfoundry.org/running/all_metrics.html
If you want metrics about your app, I could imagine your app providing these metrics itself, or you could put something in place like New Relic monitoring. They have a bunch of application performance monitoring (APM) agents. Info here: https://docs.newrelic.com/docs/agents
HP

JBoss EAP - full-ha profile required for stateless servers?

JBoss EAP 6.2 supports the full and full-ha profiles (amongst others). In our deployment, we use domain mode with the full profile for an app.
Our app primarily exposes RESTful services, which are stateless. There is an administration web portal, but it is OK not to have session replication for it (i.e. if one server goes down, it is acceptable for users to lose their browsing session and log in again). The app does not make use of EJBs.
For deployment, if we have a hardware load balancer that can route requests to nodes in active-active mode, is it OK to just go with the full profile on the nodes and not use the full-ha profile? Or is there a benefit to be gained from the full-ha profile? The former approach simplifies deployment and makes spinning up a new VM with the app relatively easy.
Any inputs/directions/pointers in this regard would be most useful.
Which JBoss profile to use depends on what you need from it. standalone-full-ha provides Infinispan, web-session and HornetQ (JMS) replication, whereas standalone-ha provides no JMS.
You can run active-active clusters with load balancers in front using the standalone profile, as long as you do not replicate data/state using JGroups, Infinispan and so on (see the launch example below).
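For reference, on a standalone server the profile is selected via the server config at startup (paths relative to the EAP install directory):

    # clustering subsystems plus JMS
    bin/standalone.sh --server-config=standalone-full-ha.xml

    # no clustering subsystems
    bin/standalone.sh --server-config=standalone-full.xml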

JMeter throughput drops when hitting Amazon ELB

I am hosting a web application on Amazon's AWS servers. I am currently in the process of load testing the application with JMeter. My main problem seems to be that when I go through an Elastic Load Balancer (ELB) to hit the Amazon servers rather than hitting the servers directly, I seem to hit a cap in my throughput.
If I hit my web application directly, I am able to achieve a throughput of 50 RPS per server.
If I hit my web application via Amazon's ELB, I am only able to achieve a max throughput of 50 RPS in total.
I was wondering if anyone else has experienced similar behavior when load testing via Amazon's ELB using JMeter.
For more context my web application is a REST application which allows users to download content (~150 kb) via HTTP requests.
I am running JMeter with the flag "-Dsun.net.inetaddr.ttl=0" and with 10 threads. I have tried running these tests with multiple clients on different machines.
Thanks for any help in advance.
Load balancers may be tricky to test, as they may have different mechanisms for orchestrating traffic depending on its origin. The most commonly used approach to distinguish the origin of a request and redirect it to the same host that served the previous request is a cookie. You can look into the HTTP Cookie Manager to manipulate your cookies correctly and make sure that you have different ones for each testing thread or thread group (depending on your use case). Another flaky area is the origin host IP. You may need to bind each testing thread to a different IP address in order to hit different servers behind the load balancer. There can also be some issues with DNS in regards to Amazon LBs. Here is a useful guide on how to test Amazon ELBs.
The most probable cause is DNS caching by JMeter. The ELB returns the IPs of additional servers depending on how autoscaling is set up, but JMeter does not use these additional servers. This can be solved by ensuring that JMeter does not cache DNS results...
The ELB is a hostname, not an IP, and can suffer from DNS caching. Make sure you use "-Dsun.net.inetaddr.ttl=0" when starting JMeter.
http://wiki.apache.org/jmeter/JMeterAndAmazon
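For example, a typical non-GUI run with the TTL override (the test plan and results file names are placeholders):

    jmeter -n -t loadtest.jmx -l results.jtl -Dsun.net.inetaddr.ttl=0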
A really late response, and slightly different from the original question, but I hope this can help others, as it took me a while to get it all straight. My original problem was not reduced throughput as a result of the ELB, but the introduction of HTTP 503 errors. Actually, the ELB increased my throughput compared to querying the web application directly, though even with 1-hour tests, the results were sporadic to say the least.
First, the ELB has two-stage load balancing going on. The first stage balances across the ELBs themselves: multiple IP addresses are associated with the hostname AWS provides for the ELB you provision. The second stage is then, of course, across your application instances behind the ELB.
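You can observe that first stage yourself by resolving the ELB hostname (the name and addresses below are made up); it returns several A records:

    $ dig +short my-elb-123456789.us-east-1.elb.amazonaws.com
    203.0.113.10
    203.0.113.24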
Without trying to offend the SO gods, this is a really helpful article.
https://blazemeter.com/blog/dns-cache-manager-right-way-test-load-balanced-apps
The most helpful information in there was to use the DNS Cache Manager module in JMeter. This will query multiple DNS servers, and wipe out your DNS cache.
I implemented that module and then set up Wireshark, filtering on the two IP addresses belonging to the ELB hostname, and sure enough, it was querying both IP addresses, though it clearly favored one over the other.
That didn't make a big difference, at least not over short tests.
The real difference (2-3 times more throughput) came when I tweaked the ELB health settings. I initially had a high error rate; however, after reducing the unhealthy threshold and the interval between health checks, my error rates dropped dramatically.
Additionally, whereas all my other tests had been 60 - 90 minutes in duration, this one was 8 hours. I started out with decent throughput and it then quickly dropped (by about 2/3). After about 20 minutes or more, the throughput then started ticking back up and by the end of the test, it had sustained throughput of about 5 times what I was getting without the ELB (which was similar to what the throughput was when it dropped shortly after beginning this test).

How to deploy Node.js in cloud for high availability using multi-core, reverse-proxy, and SSL

I have posted this to ServerFault, but the Node.js community seems tiny there, so I'm hoping this bring more exposure.
I have a Node.js (0.4.9) application and am researching how to best deploy and maintain it. I want to run it in the cloud (EC2 or RackSpace) with high availability. The app should run on HTTPS. I'll worry about East/West/EU full-failover later.
I have done a lot of reading about keep-alive (Upstart, Forever), multi-core utilities (Fugue, multi-node, Cluster), and proxy/load balancers (node-http-proxy, nginx, Varnish, and Pound). However, I am unsure how to combine the various utilities available to me.
I have this setup in mind and need to iron out some questions and get feedback.
Cluster is the most actively developed and seemingly popular multi-core utility for Node.js, so use that to run 1 node "cluster" per app server on a non-privileged port (say 3000). Q1: Should Forever be used to keep the cluster alive, or is that just redundant?
Use 1 nginx per app server running on port 80, simply reverse proxying to node on port 3000. Q2: Would node-http-proxy be more suitable for this task even though it doesn't gzip or serve static files quickly?
Have a minimum of 2x servers as described above, with an independent server acting as a load balancer across these boxes. Use Pound listening on 443 to terminate HTTPS and pass HTTP to Varnish, which would round-robin load balance across the IPs of the servers above. Q3: Should nginx be used to do both instead? Q4: Should the AWS or Rackspace load balancer be considered instead (the latter doesn't terminate HTTPS)?
General Questions:
Do you see a need for (2) above at all?
Where is the best place to terminate HTTPS?
If WebSockets are needed in the future, what nginx substitutions would you make?
I'd really like to hear how people are setting up current production environments and which combination of tools they prefer. Much appreciated.
It's been several months since I asked this question and not a lot of answers have flowed in. Both Samyak Bhuta and nponeccop had good suggestions, but I wanted to discuss the answers I've found to my questions.
Here is what I've settled on at this point for a production system, but further improvements are always being made. I hope it helps anyone in a similar scenario.
Use Cluster to spawn as many child processes as you desire to handle incoming requests on multi-core virtual or physical machines. This binds to a single port and makes maintenance easier. My rule of thumb is n - 1 Cluster workers. You don't need Forever on this, as Cluster respawns worker processes that die. To have resiliency even at the Cluster parent level, ensure that you use an Upstart script (or equivalent) to daemonize the Node.js application, and use Monit (or equivalent) to watch the PID of the Cluster parent and respawn it if it dies. You can try using the respawn feature of Upstart, but I prefer having Monit watching things, so rather than split responsibilities, I find it's best to let Monit handle the respawn as well. (A minimal sketch of this setup follows after this list.)
Use 1 nginx per app server running on port 80, simply reverse proxying to your Cluster on whatever port you bound to in (1). node-http-proxy can be used, but nginx is more mature, more featureful, and faster at serving static files. Run nginx lean (don't log, don't gzip tiny files) to minimize its overhead.
Have a minimum of 2x servers as described above in a minimum of 2 availability zones, and if in AWS, use an ELB that terminates HTTPS/SSL on port 443 and communicates over HTTP on port 80 to the Node.js app servers. ELBs are simple and, if you desire, make it somewhat easier to auto-scale. You could run multiple nginx instances either sharing an IP or round-robin balanced themselves by your DNS provider, but I found this overkill for now. At that point, you'd remove the nginx instance on each app server.
I have not needed WebSockets so nginx continues to be suitable and I'll revisit this issue when WebSockets come into the picture.
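To make step (1) concrete, here is a minimal sketch of the Cluster setup described above (the port and worker count are placeholders, and the HTTP handler is a stand-in for your app):

    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    if (cluster.isMaster) {
      // Rule of thumb from above: n - 1 workers on an n-core box
      var workers = Math.max(1, os.cpus().length - 1);
      for (var i = 0; i < workers; i++) {
        cluster.fork();
      }
      // The parent respawns workers that die, so Forever isn't needed
      // here; Upstart daemonizes the parent and Monit watches its PID.
      cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, respawning');
        cluster.fork();
      });
    } else {
      // All workers share this one port; nginx reverse proxies to it
      http.createServer(function (req, res) {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('hello from worker ' + process.pid + '\n');
      }).listen(3000);
    }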
Feedback is welcome.
You should not bother with serving static files quickly. If your load is small, Node static file servers will do. If your load is big, it's better to use a CDN (Akamai, Limelight, CoralCDN).
Instead of forever you can use monit.
Instead of nginx you can use HAProxy. It is known to work well with websockets. Consider also proxying flash sockets as they are a good workaround until websocket support is ubiquitous (see socket.io).
HAProxy has some support for HTTPS load balancing, but not termination. You can try to use stunnel for HTTPS termination, but I think it's too slow.
Round-robin (or other statistical) load balancing works pretty well in practice, so there's no need to know about other servers' load in most cases.
Consider also using ZeroMQ or RabbitMQ for communications between nodes.
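For inter-node messaging, a minimal push/pull pair using the classic API of the zeromq npm package might look like this (the transport address is an assumption):

    // producer.js - binds and pushes work items
    var zmq = require('zeromq');
    var push = zmq.socket('push');
    push.bindSync('tcp://127.0.0.1:3500');
    setInterval(function () {
      push.send('work item');
    }, 500);

    // consumer.js - connects and pulls work items
    var zmq = require('zeromq');
    var pull = zmq.socket('pull');
    pull.connect('tcp://127.0.0.1:3500');
    pull.on('message', function (msg) {
      console.log('received: ' + msg.toString());
    });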
This is an excellent thread! Thanks to everyone that contributed useful information.
I've been dealing with the same issues the past few months setting up the infrastructure for our startup.
As people mentioned previously, we wanted a Node environment with multi-core support + web sockets + vhosts
We ended up creating a hybrid between the native cluster module and http-proxy and called it Drone - of course it's open sourced:
https://github.com/makesites/drone
We also released it as an AMI with Monit and Nginx
https://aws.amazon.com/amis/drone-server
I found this thread while researching how to add SSL support to Drone - thanks for recommending ELB, but I wouldn't rely on a proprietary solution for something so crucial.
Instead, I extended the default proxy to handle all the SSL requests. The configuration is minimal while the SSL requests are converted to plain HTTP - but I guess that's preferable when you're passing traffic between ports...
Feel free to look into it and let me know if it fits your needs. All feedback welcomed.
I have seen an AWS load balancer for load balancing and termination + node-http-proxy as a reverse proxy (if you want to run multiple services per box) + cluster.js for multi-core support and process-level failover doing extremely well.
forever.js on top of cluster.js could be a good option if you want to take extreme care in terms of failover, but that's hardly needed.