The hook `orm` is taking too long to load - Sails.js

I am using two database adapters with Sails: one for MongoDB and a second for MySQL. Whenever I run the command "sails lift" for the first time, it gives this error:
error: Error: The hook `orm` is taking too long to load.
Make sure it is triggering its `initialize()` callback, or else set `sails.config.orm._hookTimeout` to a higher value (currently 20000)
at tooLong [as _onTimeout] (C:\Users\KAMI\AppData\Roaming\npm\node_modules\sails\lib\app\private\loadHooks.js:92:21)
at Timer.listOnTimeout [as ontimeout] (timers.js:110:15
When I rerun sails without any changes, it gives no error. How can I avoid this error every time? This is my first experience with Sails.js, so any help will be appreciated.

I ran into this problem last night because of a slow internet connection between my laptop and the DB server. My solution was to create a new file in the config directory called orm.js (name doesn't really matter).
Then add the following code:
// config/orm.js
module.exports.orm = {
  _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
I also found I had to change my pubsub timeout but that may not be necessary for you.
// config/pubsub.js
module.exports.pubsub = {
  _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
Note: The other answer recommends changing the sails files inside the node_modules folder. This is almost always a bad idea because any npm update could revert your changes.

It is likely best to do this on a per-environment basis. Under the config directory, you will have something like config/env/development.js and config/env/production.js.
Then, inside module.exports of each, add:
module.exports = {
  hookTimeout: 40000
}
Note that there is no need for an underscore in front of the attribute name here.

I realise this is quite an old question, but I had the same problem and was convinced it wasn't my connection.
My solution was to change the migration option for your models; you have a choice of three:
safe - never auto-migrate my database(s). I will do it myself (by hand)
alter - auto-migrate, but attempt to keep my existing data (experimental)
drop - wipe/drop ALL my data and rebuild models every time I lift Sails
Go to config/models.js and in there put:
migrate: 'safe'
or whichever of the options above you want to use.
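For reference, a minimal sketch of the relevant part of config/models.js (only the migrate key comes from this answer; the surrounding structure is the standard Sails models config):
// config/models.js
module.exports.models = {
  migrate: 'safe' // or 'alter' / 'drop', as described above
};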

There are two ways, which we could call:
1- System-wide method (as @arcseldon has said):
Add the hookTimeout key to the project's config/env/development.js or config/env/production.js file. Then almost all the hooks (except a few, such as moduleloader) will retrieve this timeout value and use it for themselves.
2- Hook-specific method (as @davepreston has said):
Create a [module-name].js file in the project's config folder and add a _hookTimeout key to it. This assigns the timeout value only to that specific module. (Be careful about the specific structure required for Sails config files.)

Go to your node_modules folder and browse to \sails\lib\app\private.
In your case you should go to this folder:
C:\Users\KAMI\AppData\Roaming\npm\node_modules\sails\lib\app\private
Then open the file named loadHooks.js and go to the line that says:
var timeoutInterval = (sails.config[hooks[id].configKey || id] && sails.config[hooks[id].configKey || id]._hookTimeout) || sails.config.hookTimeout || 20000;
Change the last value in this line from 20000 to some higher value, save the file, then run your application with "sails lift" as you normally do.
NB: You may need to try a few values higher than 20000 until you reach one that works for you. My application successfully lifted when I changed the value to 50000.

Go to the config/models.js file and uncomment migrate: 'alter'.

When lifting, pass the timeout on the command line:
sails lift hookTimeout=75000

You can also try to add defaults: { timeout: 30000 } to your hook
Reference: https://sailsjs.com/documentation/concepts/extending-sails/hooks/hook-specification/defaults
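For context, a minimal sketch of a custom hook carrying that defaults block (the hook folder name and the no-op initialize are illustrative assumptions, not from the original answer):
// api/hooks/my-hook/index.js
module.exports = function myHook(sails) {
  return {
    defaults: {
      timeout: 30000 // the value suggested above
    },
    initialize: function (done) {
      // do any slow startup work here, then signal completion
      return done();
    }
  };
};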


Stop huge error output from testing-library

I love testing-library and have used it a lot in a React project, and I'm trying to use it in an Angular project now, but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this usually unhelpful (I couldn't find an element; here's the HTML where it isn't), but it gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
// configure is exported by your testing-library entry point (e.g. @testing-library/dom)
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
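A minimal sketch of the setupFilesAfterEnv wiring, assuming the configure({ ... }) call above is saved in jest.setup.js (the filename is an assumption):
// jest.config.js
module.exports = {
  // runs once per test file, after the test framework is installed
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};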
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their way, so if you have your reasons, then fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
  const errorObject = console.error; // store the state of the object
  console.error = jest.fn(); // mock the object
  // code
  // assertion (expect)
  console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all tests, you could use the jest --silent CLI option. Check the docs.
That might even disable the DOM printing done by RTL; I am not sure, as I haven't tried it, but the docs I linked say:
"Prevent tests from printing messages through the console."
If that doesn't work, you almost certainly have everything disabled except the DOM recommendations. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know, just mock it out like option 1 above.
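As a variant of option 1, jest.spyOn can do the same mocking with cleaner restoration; paired with afterEach, console.error is restored even when an assertion throws mid-test:
let errorSpy;

beforeEach(() => {
  // silence console.error for every test in this file
  errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {});
});

afterEach(() => {
  // restore the real console.error even if the test body threw
  errorSpy.mockRestore();
});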
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM().
While prettyDOM() can't be disabled, you can limit the number of printed lines to 0, which gives you just the error message and three dots (...) below it.
Here is an example printout I got while messing around with it:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is pass an environment variable before executing your test suite, so for example with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The call stack can also be removed, as the snippet in the next answer does by setting error.stack to null.
You can change how the message is built by setting the DOM testing library's message-building function with configure. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

Protractor Custom Locator: Not available in production, but working absolutely fine on localhost

I have added a custom locator in Protractor; below is the code:
const customLocaterFunc = function (locater: string, parentElement?: Element, rootSelector?: any) {
  var using = parentElement || (rootSelector && document.querySelector(rootSelector)) || document;
  return using.querySelector("[custom-locater='" + locater + "']");
};

by.addLocator('customLocater', customLocaterFunc);
Then I configured it inside the protractor.conf.js file, in the onPrepare method, like this:
...
onPrepare() {
  require('./path-to-above-file/');
  ...
}
...
When I run my tests on localhost, using browser.get('http://localhost:4200/login'), the custom locator function works absolutely fine. But when I use browser.get('http://11.15.10.111/login'), the same code fails to locate the element.
Please note that the test runs, the browser opens, user input gets provided, and the user gets logged in successfully as well, but the element referred to via this custom locator is not found.
FYI, 11.15.10.111 is the remote machine (a virtual machine) where the application is deployed. So, in short, the custom locator works as expected on localhost but fails in production.
Not an answer, but something you'll want to consider.
I remember adding this custom locator and encountering some problems with it, and realised it's just an attribute name... nothing fancy, so I thought it's actually much faster to write
let elem = $('[custom-locator="locator"]')
which is equivalent to
let elem = element(by.css('[custom-locator="locator"]'))
than
let elem = element(by.customLocator('locator'))
So I gave up on this idea. Maybe you'll want to go this way too.
I was able to find a solution to this problem: I used the data- prefix for the custom attribute in the HTML, and with that I can find the custom attribute on the production build as well.
It is an HTML5 principle to prepend data- to any custom attribute.
Apart from this, another mistake I was making was with the selector's name. In my code the selector name is in camelCase (loginBtn), but in the production build it was replaced with loginbtn (all lowercase), which is why my custom locator was not able to find it on the production build.
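To illustrate the fix (the attribute name and element here are illustrative, not from the original post):
// template: <button data-locater="loginbtn">Log in</button>
// an all-lowercase attribute value survives the production build,
// and the data- prefix is standard HTML5 for custom attributes
let loginButton = $('[data-locater="loginbtn"]');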

SAPUI5 batch submit returns error

I am using the following code, in an attempt to batch upload the changes made on a table:
onConfirmActionPressed: function() {
  var oModel = this.getModel();
  oModel.setUseBatch(true);
  oModel.submitChanges();
}
I am using setProperty() to set the new values, like this:
onSingleSwitchChange: function(oControlEvent) {
  var oModel = this.getView().getModel();
  var rowBindingContext = oControlEvent.getSource().getBindingContext();
  oModel.setProperty(rowBindingContext.sPath + "/Zlspr", "A");
}
When onConfirmActionPressed is executed, I get a server error saying "Commit work during changeset processing not allowed" on SAP R3.
When I upload the lines of the table one by one, it works fine. However, uploading this way is very slow, and in some cases it takes more than 10 minutes for the process to complete.
Am I doing something wrong while batch-submitting? Is there a chance the issue is due to server (R3) misconfiguration?
You need to override these methods:
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_END
Keep track of the errors across all calls to the update methods, and if everything went OK, then in CHANGESET_END perform the commit on the database.
Edit, to clarify:
In your Data Provider Class extension in SAP Gateway you need to find your YOURENTITY_UPDATE_ENTITY method and get rid of any COMMIT WORK statements.
Then you need to redefine the /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN method, which is fired before any batch operation. You could define a class attribute such as a table mt_batch_errors, which would be emptied in this method.
When you post batch changes from UI5 using oModel.submitChanges(), all single changes to entities are directed to the appropriate ..._UPDATE_ENTITY methods. You need to keep track of any possible errors, and if any occur, fill your mt_batch_errors table.
After all entities have been updated, the /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_END method is fired, in which you are able to check the mt_batch_errors table for any errors that occurred during the batch. If there were errors, you should probably ROLLBACK WORK; if not, you are free to COMMIT WORK.
That is just an example of how it could be done; I'm curious about other suggestions.
Good luck!
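On the UI5 side the submit call itself can stay as it is; for completeness, a sketch of attaching handlers to see whether the changeset went through once the gateway is fixed (assuming a v2 ODataModel; the handler bodies are illustrative, not from the original answer):
oModel.submitChanges({
  success: function (oData) {
    // with batching enabled, per-change results arrive in __batchResponses
    console.log("Batch submitted", oData.__batchResponses);
  },
  error: function (oError) {
    // transport-level failure of the whole batch request
    console.error("Batch failed", oError);
  }
});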

Meteor: database is undefined

I'm having some trouble understanding what I believe is trivial, but I can't seem to get my head around it.
I have this publish function in server.js (server only):
Meteor.publish("tikiMainFind", function(){
  return tikiDB.find()
})
In app.js (server + client) I'm declaring this Mongo collection:
tikiDB = new Mongo.Collection("tiki")
Why is it that this doesn't work in client.js?
console.log(tikiDB.find())
//ReferenceError: tikiDB is not defined
Without any idea of how you have your app structured, I agree with David Weldon's answer: check File Load Order to see what order your files are getting loaded in.
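A minimal sketch of a layout that avoids the problem, relying on Meteor's documented rule that files in lib/ directories load before other files (the file names are assumptions; the identifiers mirror the question):
// lib/collections.js -- loads first on both client and server
tikiDB = new Mongo.Collection("tiki");

// server/server.js
Meteor.publish("tikiMainFind", function () {
  return tikiDB.find();
});

// client/client.js
Meteor.subscribe("tikiMainFind");
// note: find() returns a cursor; fetch() turns it into an array of documents
console.log(tikiDB.find().fetch());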

Zend_Session_SaveHandler_Interface and a session_id mystery

I'm trying to set up my own Zend_Session_SaveHandler based on this code:
http://blog.digitalstruct.com/2010/10/24/zend-framework-cache-backend-libmemcached-session-cache/
This works great, except that my session_id behaves mysteriously.
I'm using the Zend_Session_SaveHandler_Cache class as you can find it in the blog above (except that I parked it in my own library, so its name now starts with My_).
In my bootstrap I have:
protected function _initSession()
{
    $session = $this->getPluginResource('session');
    $session->init();
    Zend_Session::getSaveHandler()->setCache( $this->_manager->getCache( 'memcached' ) );
}
The session is set up based on these lines in my .ini file:
resources.cachemanager.memcached.frontend.name = Core
resources.cachemanager.memcached.frontend.options.automatic_serialization = On
resources.cachemanager.memcached.backend.name = Libmemcached
resources.cachemanager.memcached.backend.options.servers.one.host = localhost
resources.cachemanager.memcached.backend.options.servers.one.port = 11213
So far so good. Until somebody tries to log in and Zend_Session::rememberMe() is called. In the comments of Zend_Session one can read:
normally "rememberMe()" represents a security context change, so
should use new session id
This of course is very true, and a new session id is generated. The user's Zend_Auth data, after a successful login, is written into this new session. I can see this because I added some logging functionality to the original class from the blog.
And here is where things go wrong. This new id apparently isn't passed on to Zend_Session, because Zend_Session keeps on reading the old id's session data; in other words, the one without the Zend_Auth instance. Hence, the user can no longer log in.
So the question is: how do I make my saveHandler work with the new id after the regeneration?
Cheers for any help.
OK, I'm blushing here....
I was looking in the wrong place to find this error. My session saveHandler was working just fine (so I can recommend Mike Willbanks' work if you want libmemcached session management).
What did go wrong then? Well, besides switching from file to libmemcached, I also switched from setting up my session in the bootstrap to setting it up in my application.ini. So, instead of putting lines like
session.cookie_domain = mydomain.com
in my application.ini (which were then used in the bootstrap as options to set up my session), I now, properly, wrote
resources.session.cookie_domain = mydomain.com
And this is where things went wrong, because.... I only changed those lines for production; I forgot to change them further down the ini file. In other words, my development environment got the cookie_domain of my production environment, which is wrong, as I use another domain name during development. So, on every page load, my cookie was invalidated and a new session started. Mystery solved...
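For illustration, the shape of the mistake in application.ini (the section names follow the standard Zend Framework layout; the domain names are placeholders):
[production]
resources.session.cookie_domain = mydomain.com

[development : production]
; this line was missing, so development inherited the production domain
; and the session cookie was rejected on every page load
resources.session.cookie_domain = dev.mydomain.local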