Making Wildfly setup-script output more useful information - jboss

When setting up WildFly with a .cli script (jboss-cli.sh/bat --file=something.cli), there is a lot of output of the form:
{
    "outcome" : "success",
    "result" : {}
}
or similar for failure.
This is unfortunately not very useful for debugging, and using IF (output == success) does not really help, as any echo commands in the IF block are run regardless of the outcome of the conditional. See: https://issues.jboss.org/browse/WFLY-3662
Furthermore, WildFly does not allow empty IF blocks.
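For illustration, the kind of block in question looks roughly like this (the resource address and message are made up):

if (outcome == success) of /subsystem=datasources/data-source=ExampleDS:read-resource
    echo Datasource ExampleDS already exists, skipping
end-if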
Is there any other useful way of filtering out the existing outcome and instead outputting something more useful for tracing and debugging?

Related

Firebase Error in Retrieving - Syntax Error

I stumbled on something odd and was hoping for some input. I have a dynamic child-naming system and want to retrieve data. When I build the child name dynamically it does not retrieve the data, but when I write the name out manually it does. I have debugged via print statements and other forms of observation, and it seems to be working. Any clue?
Assume the variable test = "test" (to make it easier). Could it be the way the statements are written? The working one uses queryOrdered, whereas the other uses observe.
Working Type:
let refPosts = Database.database().reference().child("postsof"+"\(test)").queryOrdered(byChild: "username").queryEqual(toValue: "\(result)")
Does Not Work:
Database.database().reference().child("postsof"+"\(test)").observe(DataEventType.value)
Works Manually:
Database.database().reference().child("postsoftest").observe(DataEventType.value)
Edit:
Print: all print statements show "postsoftest". This confirms that for the second example, "postsof"+"\(test)" is being read correctly in the debugger as postsoftest; however, it does not retrieve the data like the other two examples.
JSON:
"postsoftest": {
"autoID": {
"name": "jack"
"link": "..."
}
},
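For reference, the observe calls above appear to omit the snapshot closure for brevity; a full observer typically looks something like the sketch below (the print is just a placeholder):

import FirebaseDatabase

// Sketch only: same dynamic path as above, with the snapshot handler spelled out
let ref = Database.database().reference().child("postsof" + "\(test)")
ref.observe(DataEventType.value) { snapshot in
    print(snapshot.value ?? "no data")
}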

Stop huge error output from testing-library

I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now - but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this not usually helpful ("I couldn't find an element, here's the HTML where it isn't"), but it also gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
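For instance, wiring it up globally might look like this (the file name and path are illustrative):

// jest.config.js
module.exports = {
  // jest.setup.js contains the configure({ getElementError: ... }) call shown above
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};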
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their own way, so if you have your reasons, fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
  const errorObject = console.error; // store the state of the object
  console.error = jest.fn(); // mock the object
  // code
  // assertion (expect)
  console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all the tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing done by RTL; I am not sure, as I haven't tried it, but if you look at the docs I linked, it says:
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM printing. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know, just mock it out like option 1 above.
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM();
While prettyDOM() can't be disabled, you can limit the number of printed lines to 0, and that would just give you the error message and three dots (...) below the message.
Here is an example printout I got while messing around with it:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is to pass in an environment variable before executing your test suite, so for example with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The callstack can also be removed
You can change how the message is built by setting the DOM Testing Library's message-building function via configure. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

Invoke-Pester to only run a single Assert/It block

I am writing unit tests for my PowerShell modules, with a file for each module and Describe blocks for each function. Context blocks organize the tests around the behavior I am trying to test, with some arrange code, and my It blocks contain minimal arrange/act code and an assert.
I can limit my tests to only run a single test file by using Invoke-Pester "Path/To/Module"
I can also limit based on the Describe blocks by using Invoke-Pester "Path/To/Module" -TestName @("RunThisDescribe","AndRunThisDescribe")
As I am adding a new assertion (via a new It block) to an existing file/Describe/Context, I want to test/debug my new assertion alone, without the rest of the assertions of a given Describe/Context running (but with any mocks/variables that I set at the Describe/Context scope still available).
I have been commenting out my existing assertions while I am developing the new one, then removing the block comments and running them all after I am finished with the new test. This works, but is clunky.
Is there a way to run Invoke-Pester to only execute a given list of Its?
Is there a better workflow for developing/debugging new tests other than either letting all of them run or commenting them out?
I know, this question is pretty old, but it deserves an update:
Starting with Pester version 5, you can have a -Tag on everything: Describe, Context, It
This makes it really easy to run specific assertions and nothing else.
You can even exclude specific code with -ExcludeTag.
Please see https://github.com/pester/Pester#tags for details.
Also check out the breaking changes if you plan to migrate from version 4 to 5!
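A minimal sketch of what that enables (the test names and tag are made up; the Invoke-Pester parameter names assume Pester 5's simple interface):

Describe 'Get-Widget' {
    It 'returns an existing widget' {
        $true | Should -Be $true
    }
    It 'throws for a missing widget' -Tag 'New' {
        $true | Should -Be $true
    }
}

# Run only the newly tagged assertion...
Invoke-Pester -Path ./Get-Widget.Tests.ps1 -TagFilter 'New'
# ...or everything except it
Invoke-Pester -Path ./Get-Widget.Tests.ps1 -ExcludeTagFilter 'New'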
It doesn't look like there's any way to specify tests to run by the name of the It block.
You could split your new test into a new Describe block and then run it via the -TestName parameter as you described, or give it a -Tag and then specify that tag via Invoke-Pester; however, that doesn't seem to work for a nested Describe, it has to be at the top level.
I assume this wouldn't work for you because your Mocks and Variables would be in the other describe.
VSCode with the PowerShell extension installed allows you to run individual Describe blocks via a "Run Tests" link at the top of the Describe, and this does work for nested blocks. However, I'm not sure whether this would result in the Mocks/Variables from the parent Describe block being invoked (my guess would be not).
Example of nested Describe, which can be individually run within VSCode:
Describe 'My-Tests' {
    It 'Does something' {
        $true | Should -Be $true
    }
    Describe 'NewTest' {
        It 'Does something new' {
            $true | Should -Be $true
        }
    }
}
It's a shame you can't currently put Tags on Context blocks for filtering certain sets of tests in or out. That was requested as a feature two years ago, but it seems it is not simple to implement.
To add to Tofuburger's answer, and based on Pester 5.3.1, you can also manipulate tests programmatically, in your test scripts, based on tags.
Describe 'Colour' -Tag 'Epistemology' {
    BeforeAll {
        $ParentBlockTags = $____Pester.CurrentBlock.Tag
        if ($ParentBlockTags -eq 'Epistemology')
        {
            Set-ItResult -Inconclusive
        }
    }
    BeforeEach {
        $ItTags = $____Pester.CurrentTest.Tag
        if ($ItTags -eq 'HSL')
        {
            Set-ItResult -Skipped -Because 'Not implemented'
        }
    }
    It 'Saturates' -Tag 'HSL' {
        1 | Should -Be 2
    }
    It 'Greens' -Tag 'RGB' {
        1 | Should -Be 3
    }
}

Better custom error handling for powershell

So I have a PowerShell script that integrates with several external third-party EXE utilities. Each one returns its own kind of errors, and some also write non-error output to stderr (yes, badly designed, I know; I didn't write these utilities). What I'm currently doing is parsing the output of each utility and doing some keyword matching. This approach does work, but I feel that as I keep using these scripts and utilities I'll have to add more and more exceptions for what actually counts as an error. So I need to create something that is expandable, possibly a kind of structure I can keep in an external file such as a module.
I was thinking of leveraging the features of a custom PSObject to get this done, but I am struggling with the details. Currently my parsing routine for each utility is:
foreach($errtype in @('error','fail','exception'))
{
    if($JobOut -match $errtype){ $Status = 'Failure' }
    elseif($JobOut -match 'Warning'){ $Status = 'Warning' }
    else { $Status = 'Success' }
}
This looks pretty straightforward until I run into some utility whose output contains one of the keywords in $errtype without it actually being an error. So now I have to add some exceptions to the logic:
foreach($errtype in @('error','fail','exception'))
{
    if($JobOut -match 'error' -and -not ($JobOut -match 'Error Log')){ $Status = 'Failure' }
    elseif($JobOut -match $errtype){ $Status = 'Failure' }
    elseif($JobOut -match 'Warning'){ $Status = 'Warning' }
    else { $Status = 'Success' }
}
So, as you can see, this method has the potential to get out of control quickly, and I would rather not start editing core code to add a new error rule every time I come across a new error.
Is there a way to create a structure of errors for each utility that contains the logic for what counts as an error? Something that would be easy to add new rules to?
Any help with this is really appreciated.
I would think a switch would do nicely here.
It's very basic, but it can be modified very easily and is highly expandable, and I like that you can have an action based on the input to the switch, which could be used for logging or remediation.
Create a function that allows you to easily provide input to the switch, maintain that function with all your error codes, words, etc., and then simply use the function where required.
TechNet Tips on Switches
TechNet Tips on Functions
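Put together, a sketch of that approach might look like the following (the function name, patterns and status categories are all illustrative, not a drop-in implementation):

# Keep this in its own module/script file so new rules only ever get added in one place
function Get-JobStatus {
    param([string]$JobOut)

    switch -Regex ($JobOut) {
        'Error Log'             { return 'Success' }   # known false positive for one utility
        'error|fail|exception'  { return 'Failure' }
        'warning'               { return 'Warning' }
        default                 { return 'Success' }
    }
}

# Usage:
$Status = Get-JobStatus -JobOut $JobOut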

How to silence DataMapper in Sinatra

So, a simple little question. Every time I perform some transaction with DataMapper inside one of my "get" or "post" blocks, I get output looking something like this...
core.local - - [19/Sep/2012:09:04:54 CEST] "GET /eval_workpiece/state?id=24 HTTP/1.1" 200 4
- -> /eval_workpiece/state?id=24
It's a little too verbose for my liking. Can I turn this feedback off?
This isn’t DataMapper logging; this is the logging done by the WEBrick server, which logs all requests using these two formats by default.
(Note this isn’t Rack logging either, although the Rack::CommonLogger uses the same (or at least very similar) format).
The simplest way to stop this would be to switch to another server that doesn’t add its own logging, such as Thin.
If you want to continue using WEBrick, you’ll need to find a way to pass options to it from your Sinatra app. The currently released Sinatra gem (1.3.3) doesn’t offer an easy way to do this, but the current master allows you to set the :server_settings setting, which Sinatra will then pass on. So in the future you should be able to do this:
set :server_settings, {:AccessLog => []}
in order to silence WEBrick.
For the time being you can add something like this to the end of your app file (I’m assuming you’re launching your app with something like ruby my_app_file.rb):
disable :run
Sinatra::Application.run! do |server|
  server.config[:AccessLog] = []
end
To cut off all logging:
DataMapper.logger = nil
To change verbosity:
DataMapper.logger.set_log(logger, :warn) # where logger is Sinatra's logging object
Other levels are :fatal => 7, :error => 6, :warn => 4, :info => 3, :debug => 0 (http://rubydoc.info/gems/dm-core/1.1.0/DataMapper/Logger)
If you're running with ActiveSupport, you can use the Kernel extension:
quietly { perform_a_noisy_task }
This temporarily binds STDOUT and STDERR to /dev/null for the duration of the block. Since you may not want to suppress all output, theoretically you can do:
with_warnings(:warn) { # or :fatal or :error or :info or :debug
  perform_a_noisy_task
}
and appropriate messages will be suppressed. (NB: I say 'theoretically' because using with_warnings gave me a seemingly unrelated error in my Padrino/DataMapper environment. YMMV.)