Sails.js + Passport authentication: same query running multiple times - sails.js

I am using Passport.js for authentication in Sails.js and everything works fine, except that when I start the Node server with LOG_QUERIES=true node app.js it prints all the queries being run to the console, and it prints the same queries multiple times during login and during the auth check by id.
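For context, here is a minimal sketch of where those repeated lookups typically originate in a Sails + Passport setup; the file name and model are assumptions, not the asker's actual code:

// config/passport.js - hypothetical sketch, not the asker's actual code
var passport = require('passport');

passport.serializeUser(function (user, done) {
  done(null, user.id);
});

// deserializeUser runs on every authenticated request, so the same
// "find user by id" query is issued once per request (and again in any
// policy that also looks the user up), which shows up repeatedly
// when LOG_QUERIES=true is set.
passport.deserializeUser(function (id, done) {
  User.findOne({ id: id }).exec(function (err, user) {
    done(err, user);
  });
});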

Related

Getting 503 error after every 30 seconds with Akka and Node.js env

We have created an API to export patient details to a CSV file; this export request takes 2.5 minutes to execute.
We are using the following technologies for this app: Scala, Akka and nginx, with React/Node.js as the front end.
When I hit the export link, the request is executed and I can see it in the logs.
But immediately after 30 seconds I get an error in the browser console: GET /export/request 503 (Service Unavailable), with a JavaScript error in the promise block.
After referring to the Akka documentation I increased the idle-timeout setting to 240s.
application.conf
http {
  server {
    request-timeout: 240s
    idle-timeout: 240s
  }
}
And it works in my local/development environment: the /export/ request completes in 2 minutes.
After deploying this change to the TEST environment, the issue is still there: I get a 503 after 30 seconds.
On the TEST environment the application runs inside Docker.
Request flow/application setup:
Internal AWS load balancer => EC2 instance => nginx proxy (listening on :80) => front-end app (React) => backend (Scala and Akka)
I have not found any configuration key that is set to 30s.
Could you please help me with this?
Many thanks
It seems your problem is in an intermediary proxy. You can perform your request with curl and check the Server response header:
curl -v -s -o /dev/null your_hostname/export/request
If it were Akka you'd see a line like this:
< Server: akka-http/10.2.4
Hopefully this technique gives you more insight into your issue.
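If the Server header does not come back as akka-http, the 30-second cutoff is most likely enforced by nginx or by the AWS load balancer rather than by Akka. As a sketch (the location and upstream names are assumptions, not taken from your setup), the relevant nginx proxy timeouts would look like this, and the load balancer's idle timeout would need to be raised separately in AWS:

location /export/ {
    proxy_pass            http://backend;   # hypothetical upstream name
    proxy_connect_timeout 240s;
    proxy_send_timeout    240s;
    proxy_read_timeout    240s;              # how long nginx waits for the backend response
}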

GET requests not working after deploy

Running on a DigitalOcean server with MongoDB Atlas and Node.js, React front end.
This is the problem I get on all the GET requests on the site.
When I check like this, I do get the information from MongoDB Atlas.
And that's what I get from cmd:
[
Seems like you have a few things going on.
The first screenshot says data.error is not defined, so there is no .error property on the data object. You may want to wrap that in a try/catch or, if the call is promise-based, chain a .catch:
.catch((e) => {
  console.log(e)
})
The later part of your post shows an error indicating both apps are running on port 3000. PM2 in cluster mode will take care of running multiple instances of one app on a single port, but if you are using PM2 to manage multiple different applications they will need to be on different ports, with nginx proxying port 80 to the appropriate location; a sketch of that setup follows.
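As a rough sketch of that split (app names, script paths and ports are placeholders, not taken from your post), a PM2 ecosystem file can pin each application to its own port via the PORT environment variable:

// ecosystem.config.js - hypothetical example, adjust names and paths
module.exports = {
  apps: [
    {
      name: 'api',                // first app, e.g. the Express/Mongo backend
      script: './api/server.js',
      env: { PORT: 3000 }
    },
    {
      name: 'site',               // second app, moved off port 3000
      script: './site/server.js',
      env: { PORT: 3001 }
    }
  ]
};

nginx can then listen on port 80 and proxy_pass each location to the matching port.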

ColdFusion REST API not working

Previously I was running ColdFusion 10 configured with Apache on my localhost. I am using a different machine now and am using ColdFusion 11's built-in web server. I have imported settings from an archived ".car" file. My app uses ColdFusion REST services, which were working on my previous machine. I have set up the REST service on this new machine and I am using the same service name, but with a different absolute path to the directory.
For example:
previous directory: D:/foo/bar/api/v2
current directory: C:/foo/bar/baz/api/v2
The REST service is registered and refreshes successfully in the ColdFusion Administrator. But when I try to access the API path on localhost, I get this 500 error:
This page isn’t working
localhost is currently unable to handle this request.
HTTP ERROR 500
Application v2 could not be found. The specific sequence of files included or processed is: ''''

How to check HTTP response code in zabbix?

I have a Zabbix server 2.2 and a few Linux hosts with websites. How can I get a notification from Zabbix if the HTTP(S) response code is not 200?
I've tried these triggers without any success:
{owncloud:web.test.rspcode[Availability of owncloud,owncloud availability].last(,10)}#200
{owncloud:web.test.error[Availability of owncloud].count(10,200)}<1
{owncloud:web.test.error[Availability of owncloud].last(#1,10)}=200
But nothing works. I never got a notification that the code was no longer 200, even when it was 404 because I had renamed ownCloud's index.php to index2.php.
I configured the Application and the Web Scenario as follows:
If you have already configured the host, start at step 1:
1) Select the host via Configuration -> Host groups -> select host (example: server 1)
2) Go to Configuration -> Hosts -> [host created above] -> Applications and click Create Application
3) Now create the Web scenario with the status code check; in my case I check for status code 200. Go to Configuration -> Hosts -> [host created above] -> Web Scenarios and click Create Web Scenario.
Remark: you have to select the application created in step 2.
4) After that, without clicking the Add button, go to the Steps tab and configure the host and parameters for the check, then click Add. In my case I check for a status code 200 response to the HTTP request.
I found the issue. You need to specify the URL, including the file to check, for example like this in your web scenario:
https://owncloud.example.com/index.php
"Note that Zabbix frontend uses JavaScript redirect when logging in, thus first we must log in, and only in further steps we may check for logged-in features. Additionally, the login step must use full URL to index.php file." - https://www.zabbix.com/documentation/2.4/manual/web_monitoring/example
I also used the following expression as a trigger:
{owncloud:web.test.fail[Availability of owncloud].last()}>0
You have to set a trigger by expression, for example:
{host name:web.test.rspcode[Scenario name,Steps name].last()}#200
(# means "not equal", so the trigger fires whenever the last response code differs from 200.)
The question has been answered adequately, but I will provide a much more advanced solution that you can use for all HTTP status codes.
I've created an item that monitors all HTTP status codes of a proxy, graphs them, and then set up several types of triggers to watch the last value and counts in the last N minutes.
The regex I used to extract all the values from an Nginx or Apache access log is:
^(\S+) (\S+) (\S+) \[([\w:\/]+\s[+\-]\d{4})\] \"(\S+)\s?(\S+)?\s?(\S+)?\" (\d{3}|-) (\d+|-)\s?\"?([^\"]*)\"?\s?\"?([^\"]*)\"?\s
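As a sketch of how that regex might be wired up (the log path is an assumption for your host, and \8 refers to the status-code capture group), a Zabbix agent log item could extract just the status code:

log[/var/log/nginx/access.log,"<regex from above>",,,skip,\8]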
I then set many triggers relevant to my particular situation:
101 Switching Protocols
301 Moved Permanently
302 Redirect
304 Not Modified
400 Bad Request
401 Unauthorised
403 Forbidden
404 Not Found
500 Server Error
It's also important that your Zabbix agent has permission to read the log file on the host. You can add the zabbix user to the www-data group using this command:
$ sudo usermod -a -G www-data zabbix
See the tutorial for all the steps in greater detail.

ElasticSearch admin user is unauthorized to access jdbc river plugin

Within Elasticsearch I configured the JDBC river plugin and it worked before. After configuring Shield and assigning an admin user, Elasticsearch is secured and accessible via TransportClient, but when I run the river plugin script I get the following exception:
[pool-3-thread-1] ERROR river.jdbc.RiverPipeline - action [org.xbib.elasticsearch.action.river.jdbc.state.get] is unauthorized for user [ddtuser]
org.elasticsearch.shield.authz.AuthorizationException: action [org.xbib.elasticsearch.action.river.jdbc.state.get] is unauthorized for user [ddtuser]
at org.elasticsearch.shield.authz.InternalAuthorizationService.denial(InternalAuthorizationService.java:247)
By the way, I already modified JDBCFeeder.java to pass shield.user into the settings, but no luck:
Settings clientSettings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", settings.get("elasticsearch.cluster", "elasticsearch"))
        .put("shield.user", "ddtuser:*mypassword*")
        .build();