I am working on an Eclipse plug-in which provides property auto-completions with an ICompletionProposalComputer contributed via the org.eclipse.wst.sse.ui.completionProposal extension point.
I'd like to create automated tests for the functionality but have no idea where to start. How can I write automated tests for my proposal computer?
Some time ago a colleague and I had a similar problem while implementing an IContentAssistProcessor for a SourceViewer-based editor in a console view.
We started with an integration test that simulated a Ctrl+Space keystroke within the console editor and expected a shell with a table holding the proposal(s) to show up.
Here is such a test case: ConsoleContentAssistPDETest. It uses a ConsoleBot that encapsulates the keystroke simulation and a custom AssertJ assertion (ConsoleAssert) that hides the details of waiting for the shell to open, finding the table, etc.
With such a test in place we were able to implement a walking skeleton. We developed individual parts of the content proposal code test-driven with unit tests.
Instead of writing your own bot, you may also look into SWTBot, which provides an API for writing UI/functional tests.
I ended up writing a simple SWTBot test. Once I have the editor open, it's pretty simple to get a list of autocompletions:
SWTBotEclipseEditor editor = bot.editorByTitle("index.html").toTextEditor();
editor.insertText("<html>\n<div ></div>\n</html>");
// place the caret inside the div tag (line 1, column 5)
editor.navigateTo(1, 5);
List<String> proposals = editor.getAutoCompleteProposals("");
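From there, a plain JUnit assertion (org.junit.Assert.assertTrue) on the returned list completes the test; the expected proposal text below is only an illustration and depends on what your computer actually offers:

// hypothetical expectation: the computer proposes a 'class' attribute here
assertTrue("expected a 'class' attribute proposal", proposals.contains("class"));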
I want to implement end-to-end testing for my VS Code extension. We implemented a command in package.json and integrated run tasks that take input parameters such as pickString, promptString, etc. How do I write an integration test case that picks UI elements in VS Code?
Are there any sample UI integration tests for VS Code extensions?
I was facing the same problem with an extension I am working on.
You should check the VS Code docs on testing extensions, as they describe a supported way to run integration tests inside a VS Code instance.
Another thing I have been working on is a way to run tests using Cypress.io, which enabled me to write more functional test cases. I am still exploring best practices for that, but if you are interested, here is my blog post: https://juanmanuelalloron.com/2020/05/05/testing-vscode-extensions-with-cypress-and-code-server/
and some boilerplate code that I have on github: https://github.com/juanallo/vscode-e2e-cypress-boilerplate
I am trying to write end-to-end integration tests for a VS Code extension. I couldn't find any UI integration tests. Can you please provide links, if any exist?
I recommend using extensions/vscode-api-tests/src/singlefolder-tests/editor.test.ts in the vscode sources as a starting point for integration tests. If that particular test isn't quite what you want, there are a bunch of tests adjacent to it that might be.
See also this answer I gave to a related question about using the API from within tests.
I'm analyzing the Apache Ant source code for my research. When a test suite runs, test cases are executed and Eclipse shows the results in its JUnit view. My goal is to get the names of the executed test cases, just as Eclipse does. If I can see the source code where Eclipse handles this, I think I can get the name list. Therefore, I'd like to know the source code location where Eclipse handles this, or an easier way to achieve the goal... I tried using the JUnit task in an Ant build script to generate the test report, so that I could get the test list by parsing the text/XML report. I got the report, with some warnings stating that duplicate classes were detected (because I'm testing Ant with Ant). However, the report showed each test method name without its full class name...
Well, there is a short workaround for what you want to achieve (a specific, deterministic test run order): the @FixMethodOrder annotation on your test class. That will make your tests run in a specified order in every environment (or at least it should :-)).
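For example, a minimal JUnit 4 sketch (the class and method names are made up):

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

// Run the test methods in lexicographic order of their names,
// which makes the run order deterministic in every environment.
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class OrderedTest {

    @Test
    public void step1_prepareFixture() { /* ... */ }

    @Test
    public void step2_exerciseFeature() { /* ... */ }
}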
If you want to do it the hard way (analyze the source code of the Eclipse JUnit runner), I would advise installing the PDE tools and the sources of the plug-ins first, and also setting Include all plug-ins from target in the Java search settings under the Plug-in Development options (so you can find the plug-in types easily with Ctrl+Shift+T/R).
Then you have the Plug-in Spy, which is a gem of a feature: press Alt+Shift+F2 and click on any element in the UI. You then get all the info about where the given component is defined (alternatively, you can press Alt+Shift+F1 to display data about the selected component).
We use Gradle as our build tool and use the idea plugin to generate the project/module files. The process for a new developer on the project looks like this:
pull from source control.
run 'gradle idea'.
open IDEA and be able to develop without any further setup.
This all works nicely, but generally only gets exercised when a new developer joins or someone gets a new machine. I would really like to automate the testing of this more frequently in the same way we automate our unit/integration tests as part of our continuous integration process.
Does anyone know if this is possible, and whether there are any libraries for doing this kind of thing?
You can also substitute eclipse for idea, as we have a similar process for those who prefer using Eclipse.
The second step (with or without step one) is easy to smoke test (just execute the task as part of a CI build); the third one less so. However, if you are following best practices and regenerate the IDEA files rather than committing them to source control, developers will likely perform both steps more or less regularly (e.g. every time a dependency changes).
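One way to script that smoke test, assuming Gradle's TestKit is on the test classpath, might look like this (the project directory and the expectation of a fresh checkout are assumptions):

import static org.junit.Assert.assertEquals;

import java.io.File;

import org.gradle.testkit.runner.BuildResult;
import org.gradle.testkit.runner.GradleRunner;
import org.gradle.testkit.runner.TaskOutcome;
import org.junit.Test;

public class IdeaTaskSmokeTest {

    // Smoke test for step 2: the 'idea' task should run without errors.
    @Test
    public void ideaTaskSucceeds() {
        BuildResult result = GradleRunner.create()
                .withProjectDir(new File("."))  // assumption: the checked-out project root
                .withArguments("idea")
                .build();                       // build() throws if the task fails

        // On a fresh checkout the task should have actually executed.
        assertEquals(TaskOutcome.SUCCESS, result.task(":idea").getOutcome());
    }
}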
As Peter noted, the real challenge is step #3. The first two are solved by your SCM plugin and the Gradle task. You could try automating the last step by doing something like this:
identify the proper command-line option, on your platform, that opens a specified IntelliJ project
find a simple, good-enough scenario that can validate that the generated project works as it should, e.g. a clean followed by a build. Make sure you can reproduce these steps using keyboard shortcuts only. Validation could be done by checking either the produced artifacts or the test result reports, etc.
use an external library, like java.awt.Robot, to script starting IntelliJ and replaying your keyboard shortcuts (see the sketch after this list). Use a dynamic language with a built-in console instead of pure Java for that; it will speed up your scripting a lot...
Another idea would be to include a daemon plugin in IntelliJ that accepts commands from an external CLI. Otherwise, contact the IntelliJ team; they may have something to ease your work here.
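As a sketch of the Robot idea, something like the following could drive a keyboard-only scenario once the IDE window has focus (that Ctrl+F9 triggers "Build Project" is an assumption about your keymap):

import java.awt.Robot;
import java.awt.event.KeyEvent;

public class IdeKeystrokeDriver {

    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        robot.setAutoDelay(250); // pause between events so the IDE can keep up

        // Assumption: Ctrl+F9 is bound to "Build Project" in the default
        // IntelliJ keymap; adjust to whatever your scenario needs.
        robot.keyPress(KeyEvent.VK_CONTROL);
        robot.keyPress(KeyEvent.VK_F9);
        robot.keyRelease(KeyEvent.VK_F9);
        robot.keyRelease(KeyEvent.VK_CONTROL);
    }
}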
Notes:
beware of false negatives: any failure could be caused by external issues, like project instability. Try to make sure you only build from a validated working project...
beware of false positives: any assumption or unchecked result code could hide issues. Make sure you properly clean the workspace and installation, to have a repeatable state and a standard scenario matching first use.
Final thoughts: while interesting from a theoretical angle, this automation exercise may not bring all the desired results, i.e. the validation of the platform. Still, it's an interesting learning experience and could serve as material for a nice short talk, especially if you find out interesting stuff. Make it a beer challenge with your team when you have a few idle hours, and see who can implement a working solution fastest ;) Good luck!
I'm working on a plugin which augments the existing JDT (Java) editor
using aspects.
Now, Eclipse text editors that derive from AbstractTextEditor are
organized in clear components, following an MVC architecture. Those
components are then accessed through precise paths,
e.g. reconciliation. You can find one example of a custom reconciler and
the assumptions it can (and does) use on the behavior of the editor
here.
I'd like to write headless unit tests against those assumptions, that
would check that my weaving through aspects has not broken anything along
the way. For example, in the case of reconciliation, I would like to
open an editor, input some incorrect content (with respect to some
reconciliation strategy), wait for a while, and check that Problems are
indeed reported.
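For illustration, here is a sketch of what such a check might look like as a JUnit plug-in test. The file fixture, the timeout, and the assumption that the strategy reports problems as resource markers (rather than only via the annotation model) are all mine:

import static org.junit.Assert.assertTrue;

import org.eclipse.core.resources.IFile;
import org.eclipse.core.resources.IMarker;
import org.eclipse.core.resources.IResource;
import org.eclipse.jface.text.IDocument;
import org.eclipse.swt.widgets.Display;
import org.eclipse.ui.IWorkbenchPage;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.ide.IDE;
import org.eclipse.ui.texteditor.ITextEditor;
import org.junit.Test;

public class ReconcilerAssumptionTest {

    // Assumes the test runs in the UI thread (the PDE JUnit default).
    @Test
    public void reconcilerStillReportsProblems() throws Exception {
        IFile file = createFixtureFile();

        IWorkbenchPage page = PlatformUI.getWorkbench()
                .getActiveWorkbenchWindow().getActivePage();
        ITextEditor editor = (ITextEditor) IDE.openEditor(page, file);

        IDocument document = editor.getDocumentProvider()
                .getDocument(editor.getEditorInput());
        document.set("content the reconciling strategy should reject");

        // Pump UI events until the (delayed) reconciler has had time to run.
        long deadline = System.currentTimeMillis() + 5000;
        Display display = Display.getCurrent();
        while (System.currentTimeMillis() < deadline) {
            if (!display.readAndDispatch()) {
                Thread.sleep(50);
            }
        }

        IMarker[] problems = file.findMarkers(IMarker.PROBLEM, true, IResource.DEPTH_ZERO);
        assertTrue("expected the reconciler to report problems", problems.length > 0);
    }

    // Hypothetical helper: creates a project and a file whose content type
    // is handled by the editor under test.
    private IFile createFixtureFile() throws Exception {
        throw new UnsupportedOperationException("fixture setup omitted in this sketch");
    }
}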
Note that which problems are reported, or how they would be signaled to
the user in a UI component doesn't concern me: I want
to test that my swapping a SourceViewer for a custom one through aspects
doesn't break editor logic, not my specific reconciliation strategy.
(In fact, I'd
probably mock it for that test. Moreover, the UI testing, being
presumably not runnable in a headless fashion, is beyond the scope of my
question.)
It seems this should be easy to do if the appropriate structures
existed. Do they? Are there any test frameworks or mocks in sync with Eclipse's
architectural assumptions that would let me do what I have in mind? Those would have to reproduce the workflow behavior of the existing Eclipse editor. Surely this would be among Eclipse's own unit tests, right? ... though
I can't seem to find anything of the sort. Any ideas?
I asked that same question on the eclipse core platform mailing list, and got a great answer from Dani Megert with pointers to Eclipse's own test framework. The JUnit plug-in tests are released for download as part of the extended SDK, and browsing the git source shows there are already tests against the model and tests interacting with some of the editor components.