Getting a timeout error while connecting to Postgres in a Sails application in development mode? - sails.js

I am trying to develop a simple HTTP API using the Sails framework. I have created models, controllers, helpers, and routes to complete the workflow of the API, but when I lift the Sails app in development mode with migrate set to alter, I get a timeout error. After debugging on my own, I came to the conclusion that my Sails app is not connecting to Postgres.
I have installed all the related dependencies (e.g. sails-postgresql, pg, etc.) and I am setting a correct URL in the config.model file. I don't know what I am missing here.
Any suggestions?
Versions: Sails 1.2.4, PostgreSQL 12.4
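For reference, a minimal sketch of how a Postgres datastore is normally wired up in a Sails 1.x app is shown below; in a standard project the connection URL lives in config/datastores.js and the migrate setting in config/models.js. The credentials and database name are placeholders:

// config/datastores.js
module.exports.datastores = {
  default: {
    adapter: 'sails-postgresql',
    // placeholder connection string -- replace user, password and db name
    url: 'postgresql://dbuser:dbpassword@localhost:5432/demosails',
  },
};

// config/models.js
module.exports.models = {
  // auto-migration strategy used while lifting in development
  migrate: 'alter',
};

If Postgres is not reachable at that URL (wrong host, port, or credentials, or the server not accepting connections), the auto-migration step will stall until the hook timeout fires, which matches the error below.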
Edit: This is the error I am getting:
nirajrajput@Nirajs-MacBook-Pro demosails % node app.js
(node:29522) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency
(Use `node --trace-warnings ...` to show where the warning was created)
info: ·• Auto-migrating... (alter)
info: Hold tight, this could take a moment.
error: Failed to lift app: Error: Sails is taking too long to load.
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
Troubleshooting tips:
 -• Were you still reading/responding to an interactive prompt?
   (Whoops, sorry! Please lift again and try to respond a bit more quickly.)
 -• Do you have a lot of stuff in `assets/`? Grunt might still be running.
   (Try increasing the hook timeout. Currently it is 40000.
    e.g. `sails lift --hookTimeout=80000`)
 -• Is `blueprints` a custom or 3rd party hook?
   (*If* `initialize()` is using a callback, make sure it's being called.)
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
at Timeout.tooLong [as _onTimeout] (/Users/nirajrajput/demosails/node_modules/sails/lib/app/private/loadHooks.js:191:21)
at listOnTimeout (internal/timers.js:551:17)
at processTimers (internal/timers.js:494:7)

Related

Haskell Stack Persistent error, cannot find -llibpq

I have set up a project which uses Servant and Persistent with a PostgreSQL backend. Everything works fine when I run stack ghci, including migration, but when I run stack build, I get the following error:
Linking .stack-work\dist\29cc6475\build\ServantAuthTraining-exe\ServantAuthTraining-exe.exe ...
C://Users//Chris//AppData//Local//Programs//stack//x86_64-windows//ghc-8.8.3//mingw//bin/ld.exe: cannot find -llibpq
collect2.exe: error: ld returned 1 exit status
`gcc.exe' failed in phase `Linker'. (Exit code: 1)
-- While building package ServantAuthTraining-0.1.0.0 using:
C:\sr\setup-exe-cache\x86_64-windows\Cabal-simple_Z6RU0evB_3.0.1.0_ghc-8.8.3.exe --builddir=.stack-work\dist\29cc6475 build lib:ServantAuthTraining exe:ServantAuthTraining-exe --ghc-options " -fdiagnostics-color=always"
Process exited with code: ExitFailure 1
I am on Windows, and very new to all of the mentioned technologies.
I can connect to the database via pgAdmin.
You likely need to pass something like --extra-include-dirs=C:\PostgreSQL\8.4\include --extra-lib-dirs=C:\PostgreSQL\8.4\lib (or whatever path matches your installation) to stack so it knows where to find libpq. This can also be configured in your stack.yaml, as described at https://docs.haskellstack.org/en/latest/yaml_configuration/#extra-include-dirsextra-lib-dirs; see the sketch below.
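For example, a stack.yaml entry along these lines should work; the paths are placeholders and must point at wherever your PostgreSQL client headers and libraries actually live:

extra-include-dirs:
- C:\Program Files\PostgreSQL\12\include
extra-lib-dirs:
- C:\Program Files\PostgreSQL\12\lib

After adding these, run stack build again; the linker should then be able to resolve -llibpq.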

weblogic fail to start: Address already in use error

I have a test application that I created to start learning WebLogic with Eclipse.
Yesterday the JSP page was working well when I ran it via Run As / Run on Server; I got the basic page that I created.
But today I get this error message:
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
ERROR: transport error 202: bind failed: Address already in use
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:750]
In the browser I got this:
Error 404--Not Found
From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
10.4.5 404 Not Found
The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.
and another time I got this in the console:
weblogic.application.ModuleException: null
null
at weblogic.servlet.internal.WebAppModule.createModuleException(WebAppModule.java:1824)
at weblogic.servlet.internal.WebAppModule.init(WebAppModule.java:270)
at weblogic.servlet.internal.WebAppModule.init(WebAppModule.java:682)
at weblogic.application.internal.flow.ScopedModuleDriver.init(ScopedModuleDriver.java:162)
at weblogic.application.internal.ExtensibleModuleWrapper.init(ExtensibleModuleWrapper.java:98)
Truncated. see log file for complete stacktrace
Can you explain what I am missing? When I start the server, its status is "started". Thank you; I can add any further information.
The most probable cause is that the server is running in debug mode with the same debug port as another server on the same machine.
If you have multiple domains running in development mode on the same machine, make sure DEBUG_PORT in each domain's setDomainEnv script has a different value.
If you have multiple managed servers in the same domain on the same machine, create a separate startManagedWeblogic script for each managed server and set the debug port in that script.
If you use Node Manager to start your managed servers, make sure the following lines are set in your nodemanager.properties file, otherwise setDomainEnv.sh won't be executed:
StartScriptEnabled=true
StopScriptEnabled=true
This way, if you start the managed server via the Admin Console, setDomainEnv.sh will get called.
Then you can modify the WebLogic domain script setDomainEnv.sh to set the proper DEBUG_PORT value according to the server name being started:
Use your favourite editor to open the setDomainEnv.sh file.
You should already find the following lines:
if [ "${SERVER_NAME}" = "" ] ; then
SERVER_NAME="AdminServer"
export SERVER_NAME
fi
The same way, you can add :
if [ "${SERVER_NAME}" = "ManagedServer1" ] ; then
DEBUG_PORT="8454"
export DEBUG_PORT
fi
if [ "${SERVER_NAME}" = "ManagedServer2" ] ; then
DEBUG_PORT="8455"
export DEBUG_PORT
fi
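As a quick sanity check (assuming a Unix-like machine; 8453 below is just an example port, use whatever DEBUG_PORT your domain is actually configured with), you can verify whether the debug port is already bound before starting the server:

# list sockets bound to the debug port
netstat -an | grep 8453
# or show which process owns it
lsof -i :8453

If either command shows a listener, another server instance is already using that debug port.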

Solaris svcs command shows wrong status

I have freshly installed an application on Solaris 5.10. When checked through ps -ef | grep hyperic | grep agent, the processes are up and running. When I check the status with the svcs hyperic-agent command, the output shows that the agent is in maintenance mode. The application is working fine and I don't have any issues with it. Please help.
There are several reasons that can lead to that behavior:
The starter (the start/exec method of the service) returned a status different from SMF_EXIT_OK (zero). Then you may check the logs:
# svcs -x ssh
...
See: /var/svc/log/network-ssh:default.log
If you check the logs, you may see messages like the following, which mean the starter script failed or is written incorrectly:
[ Aug 11 18:40:30 Method "start" exited with status 96 ]
Another reason for such behavior is that the service faults while it is working (i.e. one of its processes core-dumps or receives a kill signal, or all of its processes exit), as described here: https://blogs.oracle.com/lianep/entry/smf_5_fault_retry_models
The subsystem that actually provides SMF with these monitoring facilities is System Contracts. You may determine the contract ID of an online service with svcs -v (field CTID):
# svcs -vp svc:/network/smtp:sendmail
STATE NSTATE STIME CTID FMRI
online - Apr_14 68 svc:/network/smtp:sendmail
Apr_14 1679 sendmail
Apr_14 1681 sendmail
Then watch events with ctwatch:
# ctwatch 68
CTID EVID CRIT ACK CTTYPE SUMMARY
68 28 crit no process contract empty
Then there are two options to handle that:
There is a real problem with the service, so it eventually faults. In that case, debug the application.
It is normal behavior for the service, so you should edit and re-import your service manifest to make SMF less strict, i.e. configure the ignore_error and duration properties (see the sketch below).
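A minimal sketch of what that could look like for this service, assuming the FMRI is hyperic-agent and that the startd property group does not exist yet (the values are illustrative; see svc.startd(1M) for what ignore_error and duration accept):

# create the startd property group if the service doesn't have one yet
svccfg -s hyperic-agent addpg startd framework
# don't fault the service on core dumps or externally sent kill signals
svccfg -s hyperic-agent setprop startd/ignore_error = astring: "core,signal"
# treat the service as transient so SMF doesn't track its processes via contracts
svccfg -s hyperic-agent setprop startd/duration = astring: transient
# pick up the new configuration and clear the maintenance state
svcadm refresh hyperic-agent
svcadm clear hyperic-agent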

NFS mount points are going off/NFS compound failed for server mashost

We have an application on Solaris. During a specific test case we generate a heap dump, which is written to the server at a specific path. During this test case we get the following error in the trace file:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /ossrc/upgrade/JREheapdumps/java_pid16092.hprof ...
Dump file is incomplete: I/O error
and in /var/adm/messages we could see
Oct 28 13:00:10 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /ossrc/upgrade]NFS server mashost not
responding; still trying
Oct 28 13:02:53 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /usr/local]NFS server mashost not
responding; still trying
Oct 28 13:04:53 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /etc/opt/ericsson]NFS server mashost not
responding; still trying
Can anyone please help explain why we are getting this problem, and can anyone tell us whether an application can cause this impact on mashost?
First things first, check out the NFS service with svcs -- when it crashes, run:
# svcs -x nfs/client
on the client, and
# svcs -x nfs/server
on the server. I would expect one or both to be in a "maintenance" state (you may see that it fails to start properly at all). If it is in maintenance mode, you should see a row marked "Reason:" that says why.
You might instead see "offline" -- in that case, startd will attempt to restart the service multiple times and, if it fails after five attempts or hangs indefinitely, will place it into the "maintenance" state and stop restarting it.
Check the logs in
/var/svc/log/<service-name FMRI>.log
There will be one on your client machine under "network-nfs-client:default" (probably; it may have a name other than 'default' if that has been changed manually), and one on the server under "network-nfs-server:default".
See what you can glean from those.
SMF automatically keeps snapshots of service configurations as backups (the standard ones are named initial, start, running and previous), so you can try reverting to one of those with svccfg:
# svccfg -s nfs/server:default
svc:/network/nfs/server:default> listsnap
svc:/network/nfs/server:default> revert [name_of_snapshot]
svc:/network/nfs/server:default> quit
# svcadm refresh nfs/server:default
# svcadm restart nfs/server:default
Make sure to include the ":default" tag (or, if you saw a different tag in the svcs nfs/server output, include that one); that name identifies an instance of the service, and every running service is an instance.
If the service is failing to start, you might have to look at the XML manifest under /lib/svc/manifest/network/nfs/ -- inside, you'll see its dependencies (and the services that depend on it), then the "exec_method"s, which define how the service starts, stops and restarts.
Instead of reverting to a snapshot, you can also restore the service to its defaults: use svccfg -s <FMRI> delete to clear it, then svcadm refresh <FMRI> and svcadm enable <FMRI>.
If the service is in maintenance state, once you've isolated and fixed the problem, you can manually clear that state by running svcadm clear <FMRI>.
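For example, a typical recovery sequence on the server side might look like this (the FMRI below assumes the default NFS server instance):

# confirm why the service is in maintenance
svcs -x nfs/server
# after fixing the underlying problem, clear the maintenance state
svcadm clear svc:/network/nfs/server:default
# verify that the service comes back online
svcs svc:/network/nfs/server:default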

Derby app created using 'derby new test' not working

I've installed derbyjs using npm install -g derby and then created a test app using derby new test.
Then I started the app using node server.js and got the following output:
info - socket.io started
Starting cluster with 1 workers in undefined mode
`kill -s SIGUSR2 7161` to force cluster reload
Go to: http://localhost:3000/
info - socket.io started
So I tried to request http://localhost:3000/, but the site does not finish loading and I get the following exception:
TEMPLATE ERROR
Error: Model mutation performed after bundling for clientId: 79f4a9f3-25fc-438d-b9f4-a450dccf9566
at Model.errorOnCommit [as _commit] (/web/derby/test/node_modules/derby/node_modules/racer/lib/bundle/bundle.Model.js:64:9)
at /web/derby/test/node_modules/derby/node_modules/racer/lib/txns/txns.Model.js:120:15
at next (/web/derby/test/node_modules/derby/node_modules/racer/lib/middleware.js:7:26)
at /web/derby/test/node_modules/derby/node_modules/racer/lib/txns/txns.Model.js:107:16
at next (/web/derby/test/node_modules/derby/node_modules/racer/lib/middleware.js:7:26)
at /web/derby/test/node_modules/derby/node_modules/racer/lib/txns/txns.Model.js:101:16
at next (/web/derby/test/node_modules/derby/node_modules/racer/lib/middleware.js:7:26)
at /web/derby/test/node_modules/derby/node_modules/racer/lib/txns/txns.Model.js:92:16
at next (/web/derby/test/node_modules/derby/node_modules/racer/lib/middleware.js:7:26)
at Object.run (/web/derby/test/node_modules/derby/node_modules/racer/lib/middleware.js:10:12)
events.js:68
throw arguments[1]; // Unhandled 'error' event
^
Error: Cannot write after end
at Gzip.write (zlib.js:311:31)
at ServerResponse.res.write (/web/derby/test/node_modules/express/node_modules/connect/lib/middleware/compress.js:82:18)
at Object.View._render (/web/derby/test/node_modules/derby/lib/View.server.js:337:9)
at /web/derby/test/node_modules/derby/lib/View.server.js:276:10
at Array.2 (/web/derby/test/node_modules/derby/lib/View.server.js:300:5)
at Object.Promise.resolve (/web/derby/test/node_modules/derby/node_modules/racer/lib/util/Promise.js:21:19)
at /web/derby/test/node_modules/derby/lib/View.server.js:136:17
at /web/derby/test/node_modules/derby/lib/files.js:224:7
at Object.oncomplete (fs.js:308:15)
at process._makeCallback (node.js:248:20)
I have absolutely no idea how I can get it running... Is it a bug, or am I doing something wrong?
Update
I've created a bug report on the derby repository: https://github.com/codeparty/derby/issues/170