Protractor - Why should I implement waits or sleeps in test scripts?

I have read that "Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting."
But I still had to add waits or sleeps to my test scripts to make them all pass.
Can anyone help me understand this waiting behavior?
Read at: http://www.protractortest.org/#/
Automatic Waiting:
You no longer need to add waits and sleeps to your test. Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting for your test and webpage to sync.

Right, I find this description as confusing as you do. I think it describes an ideal world with no network delays, timeouts, animations, or layout issues.
The description originates from the following:
Protractor runs an extra command before performing any action on the
browser to ensure that the application being tested has stabilized.
This extra command is an async script which asks Angular to respond when the application is done with all timeouts and asynchronous requests, and ready for the test to resume.
Now, what does that "application is ready" statement mean? It basically means that there are no pending requests, promises, or "macro tasks" inside the running Angular application (source for angular testability).
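To illustrate, that extra command boils down to something roughly like this (a simplified sketch, not Protractor's exact client-side script), assuming an Angular (2+) app, which exposes window.getAllAngularTestabilities():
// Ask Angular's Testability service(s) to call back once there are no
// pending timeouts or HTTP requests, then resume the test.
await browser.executeAsyncScript(function (callback) {
  var testabilities = window.getAllAngularTestabilities();
  var remaining = testabilities.length;
  if (remaining === 0) {
    return callback();
  }
  testabilities.forEach(function (testability) {
    testability.whenStable(function () {
      remaining -= 1;
      if (remaining === 0) {
        callback();
      }
    });
  });
});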
From what I understand, this covers most timing and waiting issues, but if there is pending JS code executed outside of Angular, or if there are pending animations or other UI-related changes, your test stability may still suffer - for instance, an element might not be visible or clickable yet, an input may not be enabled yet, etc.
And this does not always make the feedback from end-to-end tests stable and helpful - for example, in our project we often find ourselves adding browser.wait()s here and there to tackle occasionally failing tests. Here is a set of things that helped us to tackle this flakiness:
Protractor flakiness
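On that note, an explicit browser.wait() on a concrete condition is usually far more reliable than a fixed sleep(). A minimal sketch (the locator, timeout and message are just examples):
const EC = protractor.ExpectedConditions;
const submitButton = element(by.css('#submit')); // example locator

// Wait up to 5 seconds for the button to become clickable before
// interacting with it, and give the failure a readable message.
await browser.wait(EC.elementToBeClickable(submitButton), 5000,
  'Submit button never became clickable');
await submitButton.click();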

Related

How to easily debug Flutter end-to-end tests, with features such as time traveling, action logs, and screenshots?

When doing end-to-end testing for Flutter, I find it very inconvenient to debug the tests. For example, for an e2e test that taps, drags, and asserts a ton of things, when it fails I cannot easily tell what actually caused the failure. It may be caused by misbehavior that happened 10 steps earlier.
Thus, I would like the well-known time-traveling functionality for Flutter tests (or action logs, or screenshots for every step). In other words, with a button tap I can see "what did the UI look like when that button was tapped 50 steps ago?" Then I can go through the history and easily spot what went wrong.
Is it possible to implement it? Can I integrate it into integration_test-based tests or do I have to create a brand new framework?
Here it goes: https://github.com/fzyzcjy/flutter_convenient_test - Write and debug tests easily, with full action history, time travel, screenshots, rapid re-execution, video records, interactivity, isolation and more. (With a video demo showing GUI: https://github.com/fzyzcjy/flutter_convenient_test#-quick-demo)
The implementation can be seen in the code. In short, when actions like "tap" or "expect widget exists" are detected, log entries are created and screenshots are automatically generated. Later, they can be displayed in a nice GUI.
It is compatible with integration_test, since we still make use of that framework and only add automatic logging and screenshotting to it.
(Disclaimer: This is a Q&A-style question, so that people who need this can know such a library already exists and there is no need to reinvent the wheel; I am the author of the open-source library.)

Is there a way to process queued webhooks in ADO?

We have a service hook created for one of our projects in ADO. It was working fine until last weekend, when a few webhooks suddenly started getting queued, and I am not sure how to force them to be processed. Can someone tell me if there is a way to force those items to be processed?
Thanks,
Venu
I am afraid that you cannot get what you want here.
Once queued, the service hooks will not be picked up and processed again.
While the main process (for example, a work item update) is running, you cannot forcefully intervene in, or remove, the content that is already queued.
There is a similar issue discussing this situation.
The waiting service hooks also depend on memory, because they actually run in memory; if there is occasional memory loss or other trouble during execution, there is no guarantee that all service hooks will be executed as expected.
Alternatively, you could interrupt the current process and reduce the number of service hooks for it, but that is not a good solution.
The best approach would be a built-in function for handling the queued service hooks, but currently there is no such function. We therefore recommend submitting a suggestion ticket to the team to ask them to add that feature.

How to make libFuzzer run without stopping, similar to AFL?

I have been fuzzing with both AFL and libFuzzer. One of the distinct differences I have come across is that AFL, once started, runs continuously unless it is manually stopped by the developer.
libFuzzer, on the other hand, stops the fuzzing process when a bug is identified. I know that it allows parallel fuzzing through the -jobs=N option; however, those processes still stop when a bug is identified.
Is there any reason behind this behavior?
Also, is there any option that allows libFuzzer to run continuously until the developer stops the fuzzing process?
This question is old, but I also needed to run libFuzzer without stopping.
It can be accomplished with the flag -fork=<number of jobs> combined with -ignore_crashes=1.
Be aware that Ctrl+C no longer works: it is treated as a crash and just spawns a new job. I think this is a bug, though; see here.
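For reference, an invocation along these lines keeps fuzzing across crashes (the binary name, job count and corpus directory are placeholders):
# Fork mode with 4 worker processes; crashes are written to crash-* files,
# but the parent process keeps fuzzing instead of exiting.
./my_fuzzer -fork=4 -ignore_crashes=1 corpus/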

How to persist and replay NestJS CQRS event and saga across restart?

I am making an application which will need to use NestJS' CQRS module, as the requirements naturally lend themselves to that pattern.
Updates to the application logic are expected to be frequent and to happen during busy hours (that's just how my management works...), so the application needs to be able to restart gracefully. However, this means that events started just before the shutdown may not finish, and even if they do, some sagas may not trigger because some of their events happened before the restart... I'd like to ensure that doesn't happen.
I'm aware of NestJS' OnApplicationShutdown and OnApplicationBootstrap hooks, which exist exactly for this purpose, but what I'm not sure about is what I should do there. How can I capture all events that still have unfinished handlers and sagas? Then, after a restart, how can I make the event bus aware of the events monitored by sagas, without re-executing the already executed handlers?
I guess the second part could be worked around with a random ID per event/handler combo that is looked up in a log: if present, the handler is skipped; if not, it is executed and then added to the log... But even with such a workaround, I don't see how I could do the first part. There will be a lot of events, and sagas (by definition) execute commands, meaning they have side effects... Even if all commands can be made idempotent, the sheer quantity of events and the frequent restarts mean that restarting from the very first command is a no-go.
I've seen this package but I'm not sure if it solves this particular use case, or if it's really just logging the events, and pretty much nothing more.
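To make the idempotency workaround I described above concrete, here is a rough sketch of what I have in mind; UserRegisteredEvent, ProcessedEventStore and its backing storage are hypothetical names, not part of @nestjs/cqrs:
import { EventsHandler, IEventHandler } from '@nestjs/cqrs';

// Hypothetical event carrying a unique id assigned when it is published.
export class UserRegisteredEvent {
  constructor(
    public readonly eventId: string,
    public readonly userId: string,
  ) {}
}

// Hypothetical persistent log of which event/handler pairs have completed
// (backed by a database table, Redis set, etc.); an abstract class so it
// can be used as a Nest injection token.
export abstract class ProcessedEventStore {
  abstract has(eventId: string, handlerName: string): Promise<boolean>;
  abstract add(eventId: string, handlerName: string): Promise<void>;
}

@EventsHandler(UserRegisteredEvent)
export class SendWelcomeEmailHandler implements IEventHandler<UserRegisteredEvent> {
  constructor(private readonly processed: ProcessedEventStore) {}

  async handle(event: UserRegisteredEvent): Promise<void> {
    // Skip events this handler already completed before a restart.
    if (await this.processed.has(event.eventId, SendWelcomeEmailHandler.name)) {
      return;
    }
    // ... perform the side effect (e.g. send the welcome email) ...
    await this.processed.add(event.eventId, SendWelcomeEmailHandler.name);
  }
}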

How should I test functionality accessible only during the lifetime of an async method using @testing-library/react?

I have a React component which allows users to submit potentially long-running queries to a remote service. While the query is running, the component shows a cancel button. I want to test that this button shows up when expected, that its click handler cancels the previous API request, and so on.
Since the button is only present while the async API call is active, the tests I wrote for this purpose make their assertions about the button in the mock implementation of the async API itself. They're not super elegant but I confirmed that they do go red as I expect when I remove parts of the production code.
On upgrading @testing-library/react from 8.0.1 to 9.3.2, although the tests still pass, I now get the following warning several times:
console.error node_modules/@testing-library/react/dist/act-compat.js:52
Warning: You seem to have overlapping act() calls, this is not supported. Be sure to await previous act() calls before making a new one.
I have reproduced the issue in the following CodeSandbox (be sure to select the "Tests" tab on the right-hand side and then view the "Console" messages at the bottom right).
The final comment on this GitHub issue says that I shouldn't need to worry about act() as long as I'm using the React Testing Library helpers and functions (which I am). What am I missing?
Raising an issue (here) against react-testing-library helped me reach the conclusion that the best approach is simply not to worry about it too much. The library's author was kind enough to propose a simpler test pattern: just make assertions directly after the action that causes the component to enter the transient state you're trying to test. For example:
fireEvent.click(getByText("Submit")); // <-- action which triggers an async API call
expect(getByText("Fetching answers")).toBeInTheDocument();
Previously, I had the same expect statement in my mock implementation of the API call triggered by the click. My reasoning was that this was the only way I could be certain that the assertions would run while the component was in the transient state I was trying to test.
AFAICT, these tests are not strictly correct with respect to asynchronous actions, because the promise returned by the mock implementation could resolve before the expectation is checked. In practice, though, I have observed that the tests rewritten to the simpler approach:
do not provoke the warning about overlapping act() calls in the OP
can be made to fail as expected if I change the non-test code to break the behaviour under test
have not so far shown any intermittent failures
are far easier to read and understand
This question never attracted much attention, but I hope recording the answer I eventually arrived at might help others in the future.
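For completeness, here is a minimal, self-contained test written in that simpler style; QueryForm, fetchAnswers and the visible texts are hypothetical stand-ins for your own component and API module, and toBeInTheDocument comes from @testing-library/jest-dom as in the snippet above:
import React from "react";
import { render, fireEvent, waitForElementToBeRemoved } from "@testing-library/react";
import { QueryForm } from "./QueryForm"; // hypothetical component under test
import * as api from "./api"; // hypothetical module wrapping the remote service

test("shows a cancel button while the query is in flight", async () => {
  // Keep the request pending until the test decides to resolve it.
  let resolveQuery;
  jest
    .spyOn(api, "fetchAnswers")
    .mockReturnValue(new Promise((resolve) => (resolveQuery = resolve)));

  const { getByText, queryByText } = render(<QueryForm />);

  // Assert on the transient state directly after the triggering action.
  fireEvent.click(getByText("Submit"));
  expect(getByText("Cancel")).toBeInTheDocument();

  // Let the request finish and wait for the transient UI to disappear,
  // so the test does not exit with async work still pending.
  resolveQuery([]);
  await waitForElementToBeRemoved(() => queryByText("Cancel"));
});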