Check if a command exists using qmake

I am working on a project which incorporates C code as well as (MASM-like) assembly. I want to be able to compile it on Linux as well as on Windows, so I am using a third-party assembler (jwasm) as follows:
QMAKE_PRE_LINK += jwasm -coff -Fo$$assembly_obj $$PWD/assembly.asm
(Here, assembly_obj holds the directory where I want jwasm to save the output. By the way: when using jwasm it is critical to specify all the parameters first and the input files only at the end, otherwise it will ignore the parameters.)
To make it easier for other people to compile the project, I would like to check whether jwasm is in their PATH and, if not, emit an error() telling them how to fix this. However, I am not sure this is even possible with qmake. I have tried:
exists("jwasm") { # Always false
message("jwasm found!")
}
as well as:
packagesExist(jwasm) { # always true
    message("jwasm found!")
}
I have looked around in the qmake docs, but couldn't find any other alternatives...
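One avenue that might work is qmake's system() test function, which succeeds when the command it runs exits with status 0. A minimal sketch, assuming jwasm -h exits with 0 whenever jwasm is installed (unverified):
# Sketch: system() succeeds when the command exits with status 0.
# Assumption: "jwasm -h" exits with 0 when jwasm is on the PATH.
!system("jwasm -h") {
    error("jwasm was not found in your PATH; please install it before building this project.")
}
Note that system() runs the command while qmake itself runs, so jwasm's help output will appear in the build log unless it is redirected.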

Related

How to check which file is importing this file in Lua?

I've been building a packet-analysis system for Wireshark with Lua scripts.
I'm managing the following three Lua files, where VPS.lua and DSS.lua import some functions from DSS_function.lua via require:
VPS.lua, DSS.lua, DSS_function.lua
The question is: how can DSS_function.lua know which file, VPS.lua or DSS.lua, it is being imported from?
DSS_function.lua has to know this, because it declares the protocol differently depending on which file is importing it.
Thank you.
You can use something along these lines to figure out where a library is required from:
local name = debug.getinfo(3).short_src
if name:find "foo.lua" then
    print("Required from Foo")
elseif name:find "bar.lua" then
    print("Required from Bar")
end
But the problem is that it will only work the first time because Lua caches modules after the first time they are required.
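For example (a sketch, using the module name from the question):
-- Only the first require runs the module body, so the detection
-- code above fires exactly once:
require("DSS_function")  -- runs DSS_function.lua; prints "Required from ..."
require("DSS_function")  -- served from the package.loaded cache; prints nothing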
Setting that aside, what you are trying to build is an abomination and whatever reason you think you have for wanting this is just a sign of a deeper problem with your code structure.
If you use require("name_of_the_Lua_file"), then the return value of the required Lua file goes into...
package.loaded.name_of_the_Lua_file
The next require("name_of_the_Lua_file") looks this name up in package.loaded and, if it is there, loads the module from there.
Only if the Lua file doesn't return anything does package.loaded.name_of_the_Lua_file hold: true
...and not what you expected.
An example of what I mean:
package.loaded.example={test = function() print "Hi - I am a test" end}
require('example').test()
-- Output: Hi - I am a test
So, check what is already required by looking in package.loaded. It could look like this (continuing the example above):
local function isLoaded(name)
    if package.loaded[name] ~= nil then
        print("Already loaded")
        return true
    else
        print("Not loaded - require it first")
        return false
    end
end

print(isLoaded("example"))
-- Output: Already loaded
-- true

Stop huge error output from testing-library

I love testing-library and have used it a lot in a React project, and I'm now trying to use it in an Angular project, but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this usually unhelpful (I couldn't find an element; here's the HTML where it isn't), but it gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution is to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
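For instance, a minimal sketch of the global route (the setup file name here is an assumption):
// jest.config.js
module.exports = {
  // jest.setup.js is a hypothetical file containing the configure() call above
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};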
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their way, so if you have your reasons, fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
  const errorObject = console.error; // store the original object
  console.error = jest.fn(); // mock it
  // code
  // assertion (expect)
  console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all tests, you could use Jest's --silent CLI option. Check the docs.
This might even disable the DOM printing done by RTL; I am not sure, as I haven't tried it, but the docs I linked say:
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM printing itself. In that case, you might look into react-testing-library's source code and find out what those print statements use. Is it a console.log? Is it a console.warn? Once you know, just mock it out like option 1 above.
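For instance, if it turned out to be console.log (an assumption; check the source first), a sketch:
// silence console.log for one test, then restore it
const logSpy = jest.spyOn(console, 'log').mockImplementation(() => {});
// ... run the failing assertion ...
logSpy.mockRestore();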
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM().
While prettyDOM() can't be disabled, you can limit the number of printed lines to 0, which gives you just the error message and three dots (...) below it.
Here is an example printout I messed around with:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is pass in an environment variable before executing your test suite; for example, with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's FR on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The callstack can also be removed, by setting error.stack to null before returning the error.
You can change how the message is built by overriding the DOM testing library's message-building function via configure. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

Protractor Custom Locator: Not available in production, but working absolutely fine on localhost

I have added a custom locator in Protractor; below is the code:
const customLocaterFunc = function (locater: string, parentElement?: Element, rootSelector?: any) {
  var using = parentElement || (rootSelector && document.querySelector(rootSelector)) || document;
  return using.querySelector("[custom-locater='" + locater + "']");
}
by.addLocator('customLocater', customLocaterFunc);
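For context, a locator registered this way is then used via the name passed to by.addLocator (the element name here is hypothetical):
// usage in a spec:
const loginButton = element(by.customLocater('loginBtn'));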
And then I configured it inside the protractor.conf.js file, in the onPrepare method, like this:
...
onPrepare() {
  require('./path-to-above-file/');
  ...
}
...
When I run my tests on localhost, using browser.get('http://localhost:4200/login'), the custom locator function works absolutely fine. But when I use browser.get('http://11.15.10.111/login'), the same code fails to locate the element.
Please note that the test runs, the browser opens, user input is provided, and the user logs in successfully as well; but the element referred to via this custom locator is not found.
FYI, 11.15.10.111 is the remote machine (a virtual machine) where the application is deployed. So, in short, the custom locator works as expected on localhost but fails on production.
Not an answer, but something you'll want to consider.
I remember adding this custom locator myself, encountering some problems with it, and realising it's just an attribute name... nothing fancy. So I thought it's actually much faster to write
let elem = $('[custom-locator="locator"]')
which is equivalent to
let elem = element(by.css('[custom-locator="locator"]'))
than
let elem = element(by.customLocator('locator'))
And I gave up on the idea. So maybe you'll want to go this way too.
I was able to find a solution to this problem: I used the data- prefix for the custom attribute in the HTML. With it, I can find the custom attribute on the production build as well.
It is an HTML5 convention to prepend data- to any custom attribute.
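For illustration, a hypothetical element and the matching lookup:
// with <button data-custom-locater="loginbtn"> in the template, the element
// can be found with a plain CSS attribute selector:
let loginButton = $('[data-custom-locater="loginbtn"]');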
Apart from this, another mistake I was making was with the selector's name: in my code the selector name is in camelCase (loginBtn), but in the production build it was replaced by loginbtn (all lowercase), which is why my custom locator was not able to find it on the production build.

Debugging test cases that are a combination of Robot Framework and Python Selenium

Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites. Python 3.6 and Selenium are supported.
My project is called "Automation" and the test suites are in .robot files.
Test suites contain test cases, which in turn call "Keywords":
Test Cases
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
Those "Keywords" are imported from python library and look like:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
basicVehicleCreation = sideBarPage.createNewVehicle()
basicVehicleCreation.setKennzeichen(registrationno)
basicVehicleCreation.setBeschreibung(description)
TestCaseKeywords.carnumber = basicVehicleCreation.save()
The problem is that when I run the test cases, the log only shows the result of the whole Python function: pass or fail. I can't see at which step it failed, i.e. whether it was the first or the second step of the function.
Is there any plugin or other solution that would show which exact Python call passed or failed? (Of course, a workaround is to wrap every function in its own keyword in the test case, but that is not what I prefer.)
If you need to "step into" a Python-defined keyword, you need to use a Python debugger together with RED.
This can be done with any Python debugger; if you'd like to have everything in one application, PyDev can be used with RED.
Follow the help document below, and if you run into any problems, leave a comment here.
RED Debug with PyDev
If you want to know which statement in the Python-based keyword failed, you simply need to have it throw an appropriate error. Robot won't do this for you, however; from a reporting standpoint, a Python-based keyword is a black box. You will have to explicitly add logging messages and raise useful errors.
For example, the call to sideBarPage.createNewVehicle() should throw an exception such as "unable to create new vehicle". Likewise, the call to basicVehicleCreation.setKennzeichen(registrationno) should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
try:
basicVehicleCreation = sideBarPage.createNewVehicle()
except:
raise Exception("unable to create new vehicle")
try:
basicVehicleCreation.setKennzeichen(registrationno)
except:
raise exception("unable to register new vehicle")
...

Babel plugins run order

TL;DR: Is there a way to specify the order in which Babel plugins are run? How does Babel determine this order? Is there any spec for how this works, apart from diving into the Babel sources?
I'm developing my own Babel plugin. I noticed that when I run it, my plugin runs before other es2015 plugins. For example, given code such as:
const a = () => 1
and a visitor such as:
visitor: {
  ArrowFunctionExpression(path) {
    console.log('ArrowFunction')
  },
  FunctionExpression(path) {
    console.log('Function')
  },
}
my plugin observes ArrowFunction (and not Function). I played with the order in which the plugins are listed in Babel configuration, but that didn't change anything:
plugins: ['path_to_myplugin', 'transform-es2015-arrow-functions'],
plugins: ['transform-es2015-arrow-functions', 'path_to_myplugin'],
OTOH, this looks like the order DOES somehow matter:
https://phabricator.babeljs.io/T6719
---- EDIT ----
I found out that if I write my visitor as follows:
ArrowFunctionExpression: {
  enter(path) {
    console.log('ArrowFunction')
  }
},
FunctionExpression: {
  exit(path) {
    console.log('Function')
  }
},
both functions are called. So it looks like the order of execution is: myplugin_enter -> other_plugin -> myplugin_exit. In other words, myplugin sits before other_plugin in some internal pipeline. The main question stays the same, however: how is the order of plugins in the pipeline determined, and how can it be configured?
The order of plugins is based on the order of things in your .babelrc, with plugins running before presets. Plugins then run in the order they are listed (first to last), while presets run in reverse order (last to first).
The key thing, though, is that the ordering is per AST node. Each plugin does not do a full traversal; Babel does a single traversal, running all plugins in parallel, with each node processed one at a time, running each handler for each plugin.
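A conceptual sketch of that merged traversal (illustrative only, not Babel's actual source; childrenOf stands in for real AST child lookup):
const childrenOf = node => node.children || []; // placeholder for a real AST walk

function visit(node, visitors) {
  for (const v of visitors) v.enter && v.enter(node); // every plugin's enter, in plugin order
  for (const child of childrenOf(node)) visit(child, visitors);
  for (const v of visitors) v.exit && v.exit(node);   // then every plugin's exit
}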
Basically, what @loganfsmyth wrote is correct; there is (probably) no more magic in the plugin ordering itself.
As for my problem specifically, my confusion was caused by how the arrow-function transformation works. Even though the babel-plugin-transform-es2015-arrow-functions plugin mangles the code sooner than my plugin does, it does not remove the original arrow-function AST node, so even the later plugin still sees it.
Lesson learned: when dealing with Babel, don't underestimate the number of debug print statements needed to understand what's happening.