I have been using Strapi as my backend service, connected to my MongoDB Atlas instance, and it is running and works. However, when I make HTTP requests from my front end, it takes up to 12 seconds to retrieve the response, even though the response is a very small JSON payload (an array of 3 objects with only 5 properties each). However, it seems that when I log into the admin portal and force the Strapi app to run, the requests are instant. I understand this may be an issue with the dynos cycling, but what solutions are there?
The reason you see this is that Heroku dynos basically turn off after 30 minutes of inactivity. So the next time you ping the server/API, it takes some time to start again. The easiest way around it is to have the Heroku service pinged at some interval (e.g. hourly or daily).
You can use a service like EasyCron (http://easycron.com/) to set up a cron job that pings the Heroku service.
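For example, any machine with cron available can keep the dyno awake with a single crontab entry; the URL below is a placeholder for your own app:

# Ping the Heroku app every 25 minutes so the free dyno never reaches
# the 30-minute idle threshold (adjust the URL to your own app)
*/25 * * * * curl -s https://your-app.herokuapp.com/ > /dev/null 2>&1

Keep in mind that an always-awake free dyno also burns through your monthly free dyno hours faster.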
Hope it helps :)
Some context:
I have Strapi deployed successfully on Heroku with a MongoDB backend, and I can add/edit entries. My issue comes when I upload an image using the media library plugin. I'm able to upload an image and have my frontend access it initially, displaying it, etc. After some time, like the next day or an hour or so later, the history of the file is still present, as can be seen with this endpoint:
https://blog-back-end-green.herokuapp.com/upload/files/
However, the URL endpoint to access the media no longer works as it used to, and I get a 404 error when I follow it, e.g.
https://blog-back-end-green.herokuapp.com/uploads/avatarperson_32889bfac5.png
I'm new to Strapi, so any help/guidance is appreciated.
The docs address your question directly:
Like with project updates on Heroku, the file system doesn't support local uploading of files as they will be wiped when Heroku "cycles" the dyno. This type of file system is called ephemeral, which means the file system only lasts until the dyno is restarted (with Heroku this happens any time you redeploy or during their regular restart, which can happen every few hours or every day).

Due to Heroku's filesystem you will need to use an upload provider such as AWS S3, Cloudinary, or Rackspace. You can view the documentation for installing providers here, and you can see a list of providers from both Strapi and the community on npmjs.com.
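For illustration, with the Cloudinary provider you would install it with npm install strapi-provider-upload-cloudinary and point Strapi at it with a config roughly like the sketch below; the file path and environment variable names are assumptions, so check the provider's README for your Strapi version:

// config/plugins.js (Strapi v3.4+; older 3.x releases used
// extensions/upload/config/settings.json instead)
module.exports = ({ env }) => ({
  upload: {
    provider: 'cloudinary',
    providerOptions: {
      cloud_name: env('CLOUDINARY_NAME'),   // set these as Heroku config vars
      api_key: env('CLOUDINARY_KEY'),
      api_secret: env('CLOUDINARY_SECRET'),
    },
  },
});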
When your app runs, it consumes Heroku dyno hours. When your app idles (automatically, after 30 minutes of inactivity), it stops consuming them, and as long as you have dyno hours left your app will remain live and publicly accessible.
Generally, authentication failures return a 401 (Unauthorized) error, but on some platforms a 404 can be returned as well.
Check that your second request has the correct Authorization header.
Check out roles & permissions.
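A quick way to verify this is to repeat the failing request with and without an explicit token (the JWT below is a placeholder); if it only succeeds with the header, it is a permissions issue rather than a missing file:

# Hypothetical check: the same request that returns 404, sent with a valid JWT
curl -H "Authorization: Bearer <your-jwt-token>" \
  https://blog-back-end-green.herokuapp.com/uploads/avatarperson_32889bfac5.png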
I am the only developer (full-stack) in my company, and I have too much other work to spend time automating the deployments right now. In the future we may hire a DevOps engineer for this.
Problem: We have 3 servers behind a load balancer. I don't want to block the 2nd & 3rd servers until the 1st server is updated and then repeat the same for the 2nd & 3rd, because one server might get huge traffic initially and could fail at some specific point before the other servers go live.
                                Server 1
Users ----> Load Balancer ----> Server 2 -----> Database
                                Server 3
Personal opinion: Is there a way we can pull the code by running scripts on the load balancer? I could replace the traditional DigitalOcean load balancer with an Nginx server acting as a reverse proxy.
NOTE: I know there are plenty of other questions on Stack Overflow about the same topic, but none of them solves my problem.
Solutions I know of:
Git hooks - I know a bit about Git hooks, but I don't want to use them, because if I commit to the master branch by mistake it must not get synced to production and create havoc on the live server for live users.
Open multiple server tabs and do it manually (current scenario). Believe me, it's a pain in the ass :)
Any suggestions or pointers to solutions would be really helpful. Thanks in advance.
One solution is to write an Ansible playbook for this. With Ansible, you can specify that it run against one host at a time, and as the last step you can include a verification check that your application responds with a 200 status code, or that queries some endpoint indicating the status of your application. If the check fails, Ansible stops the execution. For example, in your case, server 1 deploys fine but it fails on server 2: the playbook will stop and you will still have servers 1 and 3 running.
I have done this myself. It works fine in environments without continuous deployments.
Here is one example
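What follows is only a minimal sketch of such a playbook; the inventory group, deploy steps, service name, and health-check URL are assumptions you would adapt to your setup:

---
- hosts: app_servers          # assumed inventory group containing your 3 servers
  serial: 1                   # update one server at a time
  tasks:
    - name: Pull the latest code
      git:
        repo: "git@github.com:your-org/your-app.git"   # assumed repository
        dest: /var/www/your-app
        version: master

    - name: Restart the application
      service:
        name: your-app        # assumed service name
        state: restarted
      become: yes

    - name: Wait until the app answers with HTTP 200 before moving to the next host
      uri:
        url: "http://localhost:8080/health"            # assumed health endpoint
        status_code: 200
      register: health
      retries: 10
      delay: 6
      until: health.status == 200

With serial: 1, a failed check aborts the run before the remaining servers are touched, which gives you the behaviour described above.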
I've set up an Apollo Server GraphQL API to get some data from a remote MongoDB database hosted on Atlas. If I run the server on localhost using the Serverless Framework, every request I make to the MongoDB server takes around 8-10 ms, which is good. However, if I make the same request, to the same server, from a Lambda hosted in eu-west-1, the request takes around 90-130 ms! And the worst part is that I need this Lambda to run in us-east-1, where it takes even longer (around 400 ms!).
What could be the cause of this slowdown? I'm not talking about the first request, which takes over a second, but I guess that's because of the Lambda cold start...
I've set up a sample repository so you can check whether this slowdown is related to some wrong configuration:
https://github.com/GimignanoF/SampleApolloLambda
I am trying to configure or set up the production environment of the WhatsApp Business API as described at https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
I have done everything mentioned there, and my Docker containers are running on port 9090, as can be seen in the image.
Still, I can't access it. Whenever I try to call https://localhost:9090, I get a "This site can't be reached" error. The WhatsApp Business API does not have good documentation or tutorials so far, so this site is my last resort.
I had a similar problem, which could be your case: I saw the Docker containers were OK but nothing was working. After a day of searching I found where it happened. My problem was that I had installed MySQL manually (not as a Docker container) on the same instance where Docker is running, and in db.env I just used 127.0.0.1. This was passed literally to the Docker containers, and looking at the wait_on_mysql.sh script, the WhatsApp containers wait until the MySQL IP is reachable before doing anything, printing "MySQL is not up yet - sleeping" every second; of course they would never find any connectivity.
Since my installation is for development, and I am already using that database for other things, my solution was to use 172.17.0.1 (the Docker gateway of the containers) instead, and then add two sets of iptables rules on the host to redirect traffic from the Docker containers' IP to the IP MySQL is bound to on that port (3306, the default in my case). After that, everything works well. I think there are better solutions, but I didn't want to go further with it; evaluate whether this applies to your case.
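For reference, one commonly used recipe for letting containers reach a MySQL bound only to 127.0.0.1 on the host looks roughly like this; docker0, 172.17.0.1, and port 3306 are assumptions based on the default Docker bridge, so treat it as a sketch rather than a drop-in fix:

# Allow DNAT to 127.0.0.1 for traffic arriving on the docker0 bridge
sysctl -w net.ipv4.conf.docker0.route_localnet=1

# Redirect container traffic aimed at the gateway IP (172.17.0.1:3306)
# to the MySQL instance listening on 127.0.0.1:3306 on the host
iptables -t nat -I PREROUTING -i docker0 -p tcp -d 172.17.0.1 --dport 3306 \
  -j DNAT --to-destination 127.0.0.1:3306

# Make sure the redirected packets are accepted
iptables -I INPUT -i docker0 -p tcp -d 127.0.0.1 --dport 3306 -j ACCEPT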
Check the command:
docker-compose logs > debug_output.txt
That gives you insight into what's happening; hope it helps someone.
I think your setup is already complete. You just need to go through the registration process and start sending messages. The containers are up and running, but calling https://localhost:9090 won't return a response, as that is not a specified API endpoint meant to be used directly.
Since you're using the production single-instance setup, the documentation can be found here and seems pretty straightforward: https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
You seem to have completed up to step 7. The next step would be to perform a health check to make sure everything is healthy. The API endpoint for that is https://localhost:9090/v1/health (see https://developers.facebook.com/docs/whatsapp/api/health).
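If it helps, the health check can be exercised from the host roughly like this; the -k flag is only there because the default certificate is self-signed, and the admin credentials are whatever you configured during setup:

# 1) Log in to get an auth token (Basic auth with the admin user)
curl -k -X POST https://localhost:9090/v1/users/login -u admin:<your-password>

# 2) Call the health endpoint with the returned token
curl -k -H "Authorization: Bearer <token-from-step-1>" \
  https://localhost:9090/v1/health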
Has your DB also been set up? I cannot see it in the Docker screenshot.
Also, you have to accept the certificate, as it does not have a certificate issued by a public CA.
I have a Node-RED flow in Bluemix that also uses dashDB nodes. Each time there is dashDB maintenance, or for some other reason, the DB connection gets lost and all writes fail. When I redeploy, everything is fine again. Bluemix only shows logs from the last few hours, hence I am finding it very difficult to debug. Meanwhile, I was thinking of doing an automatic redeploy once I detect this issue, to avoid losing writes.
Can this be done using GET /flows followed by POST /flows in the same node-red app itself?
It would be worth raising this as an issue with the dash-db nodes so the author can help address it: https://github.com/smchamberlin/node-red-nodes-cf-sqldb-dashdb
Yes, you can post the flows back. The full admin HTTP API is documented here: http://nodered.org/docs/api/admin/ - have a look at the 'reload' option on /flows.
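As a rough sketch, assuming the admin API is reachable at http://localhost:1880 and adminAuth is disabled (otherwise add an Authorization header), the GET/POST round trip looks like this:

# Fetch the currently deployed flow configuration
curl -s -o flows.json http://localhost:1880/flows

# Post it back with the "reload" deployment type, which stops and
# restarts the flows without changing them
curl -X POST http://localhost:1880/flows \
  -H "Content-Type: application/json" \
  -H "Node-RED-Deployment-Type: reload" \
  --data @flows.json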