I'm currently using Postman and the Advanced REST Client application to test my REST endpoints. These tools are great and make testing very convenient. However, I am entering these calls manually, and I have a number of endpoints with prerequisite calls that need to be made first to handle their dependencies.
This is not a big deal, but if there is a way I can chain these calls to run in a certain flow, waiting for each prerequisite to complete before running the next, I could harness this to craft a fully automated API testing suite, which would give more flexibility than entering them manually.
Postman lets you do this using the Jetpacks upgrade. You can then use the recently released Newman tool to run the collections through the command line.
I am using REST Assured to automate tests for my project. In the same project, I want to do performance testing of the API. I want to know how I can achieve this.
If you have an existing set of tests and want to run them in a multithreaded manner, the options are:
use an ExecutorService to run them in parallel (see the sketch after this list)
"wrap" them into methods with JMH annotations
use a load testing tool capable of running JUnit tests (or whatever your xUnit framework is), like the JUnit Request sampler of Apache JMeter
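As a minimal sketch of the ExecutorService option (the endpoint URL, thread count, and iteration count here are placeholders, not from the original question):

```java
import io.restassured.RestAssured;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelRestAssuredRunner {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical endpoint; point this at your API under test
        String endpoint = "https://example.com/api/health";

        // Fire 100 requests across 10 worker threads
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 100; i++) {
            pool.submit(() ->
                    RestAssured.get(endpoint).then().statusCode(200));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}
```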
However, the above approaches will only allow you to kick off your tests in parallel; you won't be able to collect many metrics, such as:
number of active threads
number of hits per second
response time
HTTP-protocol-based metrics like response code, connect time, and latency
So it makes sense to consider converting your REST Assured tests into "real" tests driven by a "normal" load testing tool. The majority of load testing tools provide record-and-replay capability by exposing an HTTP proxy: if you run your REST Assured tests via this proxy, the load testing tool will capture them and convert them into the corresponding HTTP requests.
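For illustration, a sketch of pointing a REST Assured call at such a recording proxy (the host, port, and endpoint are assumptions; JMeter's HTTP(S) Test Script Recorder, for example, defaults to port 8888):

```java
import org.junit.Test;
import static io.restassured.RestAssured.given;

public class RecordedViaProxyTest {
    @Test
    public void getUsersThroughRecordingProxy() {
        given()
            // Route traffic through the load testing tool's recording proxy
            .proxy("localhost", 8888)
        .when()
            .get("https://example.com/api/users")
        .then()
            .statusCode(200);
    }
}
```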
I have a web API (REST) project that is written in .NET, and I have written a few webtests (.webtest) that test those APIs.
While those tests run fine locally from Visual Studio, I want to integrate them into my VSTS (Azure DevOps) build pipeline so as to identify any breaking changes to those APIs.
I am not able to find any task in the build pipeline that can run the webtests as part of the build, though I do see an option for running unit tests.
So I wanted to check what I am missing here.
You might want to find an alternative approach, as this link implies the .webtest format has been deprecated.
Visual Studio web performance test (.webtest file) is tied to the load test functionality and is deprecated. Some customers have used .webtest for other purposes such as running API tests, even though it was not designed for that purpose. Many API testing alternatives are available in the market. SOAP UI is a free, open source alternative to consider, and is also available as a commercial option with additional capabilities.
You could try using a command line task to run MSTest with arguments:
Add a Run Command Line step/task to execute the MSTest command
Add a Publish Test Results step/task
On the other hand, you can do this in a unit test too: just send the request and check the response (see this related thread).
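As a rough sketch of that idea, here is a plain unit test that sends a request and asserts on the response. It is written in Java for illustration only; in this .NET project the equivalent would use HttpClient from an MSTest/xUnit test, and the URL is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ApiSmokeTest {
    @Test
    public void healthEndpointReturns200() throws Exception {
        // Hypothetical endpoint; replace with your API under test
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/health"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```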
Also, as Matt mentioned, Visual Studio web performance tests (.webtest files) are tied to the load test functionality, which is deprecated. You could take a look at this blog: Cloud-based load testing service end of life.
Currently, I'm creating a project that incorporates the MEAN stack, Docker, and Travis CI. I'm using Travis CI to automate builds for unit testing, integration testing, etc. I'm using Docker to help create a test environment. I've already successfully created unit tests thanks to resources via Medium. However, I haven't found many resources on writing integration tests for a MEAN application. I want to create tests to see if I get expected values in the Angular application when it connects to the REST API endpoints from Express, and the Express application is connected to a MongoDB server. Does anyone have any resources or advice on how to write these tests, and to execute them in a Dockerized test environment?
Having done something similar myself, just a piece of advice.
Test the services independently: e2e tests for the API server, UI tests for the frontend web app. If the Selenium tests run alright against the webpage/app, and the API endpoint works on the local machine, then everything looks to be working. There is nothing magic in Docker: your local configs should reflect what you're trying to test. Avoid overcomplicating things, and write the testing yourself.
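For example, an independent e2e check against the API server might look like the sketch below (shown with Java and REST Assured for consistency with the earlier examples; in a MEAN project you would more likely use supertest or Jest, and the port and endpoint here are assumptions):

```java
import org.junit.Test;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

public class ExpressApiE2eTest {
    // Assumes the Express container publishes the API on localhost:3000
    // and a hypothetical /api/items endpoint is backed by MongoDB
    private static final String BASE_URI = "http://localhost:3000";

    @Test
    public void itemsEndpointReturnsData() {
        given()
            .baseUri(BASE_URI)
        .when()
            .get("/api/items")
        .then()
            .statusCode(200)
            .body(notNullValue());
    }
}
```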
Tools often take more time to learn than the actual thing you're trying to accomplish would take if you did it yourself. Document it adequately so the consumer of the container can replicate it with minimal effort.
It's actually pretty hard, good luck.
What is the best way to achieve DevOps with XPages?
Multiple developers working as a team
On-premises servers [Dev, QA, Prod]; can we replicate to Bluemix?
Source control
Automated testing of the UI / application
Unit testing business logic with a testing framework
Automated deployment
IDE/Tools
Domino Designer; are there other ways?
Note: Views are used when the data is in an NSF; otherwise the data is in the cloud or SQL. No forms or other classic Notes design elements.
What are your approaches to this?
This is a high-level overview of the topics required to attempt what you're describing. I'm breezing past lots of details, so please search them out; I've tried to reference what I'm currently aware of in the way of supporting documentation, blog posts, etc. from others. If anyone has anything good to add, I'm happy to add it in.
There are several components involved with what you're describing, generally amounting to:
scm workflow
building the app (NSF)
deploying the built app to a Domino server
Everything else, such as release workflow through a QA/QC environment, is secondary to the primary steps above. I'll outline what I'm currently doing, attempting to highlight where I'm working on improving the process.
1. SCM Workflow
This can be incredibly opinionated and will depend a lot on how your team does/wants to use source control with your deployment / release process. Below I'll touch on performing tests, conceptually, during/around the build step.
I've switched from a fairly generic scm server implementation to a GitLab instance. Even running a CE instance is pretty fantastic with their CI runner capabilities. Previously, I had a Jenkins CI instance performing about the same tasks, but had to bake more "workflow" into the Jenkins task, whereas now most of that logic is in a unified script, referenced from a config file (.gitlab-ci.yml). This is similar to how a Travis CI or other similar CI config file works.
This config calls some additional helper work, but ultimately revolves around an adapted version of Egor Margineanu's PowerShell script which invokes the headless DDE build task.
2. Building an NSF from Source
I've blogged about my general build process, with my previous Jenkins CI implementation. I followed the blogging of Cameron Gregor and Martin Pradny for this. Ultimately, you need to:
configure a Windows environment with Domino Designer
set up Domino Designer to import from ODP (disable export), ensuring Build Automatically is enabled
the notes.ini will need a flag of DESIGNER_AUTO_ENABLED=true
the Jenkins CI or GitLab CI runner (or other) will need to run as the logged in user, not a Windows service; this allows it to invoke the "headless dde" command correctly, since it runs in the background as opposed to a true headless invocation
ensure that Domino Designer can start without prompting for a user's password
My blog post covers additional topics, such as flagging the build as a success or failure by scanning the output logs for failure markers. It also touches on how to submit the code to a SonarQube instance.
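As a small illustration of the log-scan idea (the log path and the failure marker string here are hypothetical; the actual wording depends on your Designer build output):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class BuildLogScanner {
    public static void main(String[] args) throws IOException {
        // Path and marker are assumptions; the headless build log
        // location and failure wording vary by setup
        String logPath = args.length > 0 ? args[0] : "headless-build.log";
        try (Stream<String> lines = Files.lines(Paths.get(logPath))) {
            boolean failed = lines.anyMatch(l -> l.contains("marked as failed"));
            // A non-zero exit code tells the CI runner the build failed
            System.exit(failed ? 1 : 0);
        }
    }
}
```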
Ref: IBM Notes/Domino App Dev Wiki page on headless designer
Testing
Any additional testing or other workflow considerations (e.g.- QA/QC approval) should go around the build phase, depending on how you set up your SCM workflow. A lot of the implementation will revolve around the specifics of your setup. A general idea is to allow/prevent deployment based on the outcome of the build + test phase.
Bluemix Concerns
IBM Bluemix, the only PaaS that runs IBM XPages applications, will require some additional consideration, such as:
their Git deploy process will only accept a pre-built NSF
the NSF must be signed by the account owner's Bluemix ID
Ref:
- IBM XPages on Bluemix
- Bluemix Docs: Building XPages apps for the Bluemix Runtime
3. Deploy
To Bluemix
If you're looking to deploy an XPages app to run on Bluemix, you would want to either ensure your headless build runs with the Bluemix ID, or at least that the NSF is signed with it, and then deploy it for a production push either via a Git connection or the cf/bluemix command line utility. Bluemix's receive hooks handle all the rest of the deployment concerns, such as starting/stopping the server instance.
To On-Premise Server
A user ID with appropriate credentials needs to perform the work of either doing a design replace/refresh, or stopping a dev/test/staging server, performing the file copy of the .nsf, then starting it back up. I've heard rumors of Cameron Gregor making use of a plugin to Domino Designer to perform the operations needed for OSGi plugin development, which sounds pretty useful. As most of my Domino application development is almost purely NSF based, I'm focusing more on an approach of deploying to a staging/dev/test server, on which I can then run a design task to do the needed refresh/replace; this is closer to the "normal" Domino way of doing things.
Summary
Again, there are a lot of moving pieces involved here, some of which get rather opinionated rather quickly. For example, I'm currently virtualizing my build machine, so I can spool up a couple of virtual machines of it, allowing for more than one build at a time. If there are major gaps in the process, let me know and I'll fill in what I can.
I am in the process of integrating automated GUI testing with my build system. My GUI application is developed in GWT, and I use Hudson as my automated build system. I would like to perform sanity tests of my application. As I understand it, the entire test setup will have the following steps.
Build and deploy the application to a predefined application server. In my case, this means creating and installing the application in an Android emulator.
Start/Launch the application.
Perform pre-defined user actions (UI test cases) and validate them.
Somehow include validations for different browsers; I am really not sure how I can do this.
Generate report of test cases performed.
I am not posting the details of the application, as I think they will not make any difference to the approach. Can somebody guide me from past experience on whether this is possible and, if so, to what extent? Also, which UI automation tool (preferably open source) can fit easily here?
We use TeamCity as the build server for a GWT application. We just use it as a build server with two tasks: compile the sources into JavaScript, and deploy the WAR file to a Tomcat application server. Although I haven't set it up manually yet, I believe it's possible to add a third task for UI testing using Selenium (which we used for testing another JSF web application).
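If it helps, a minimal Selenium smoke test for such a setup might look like this sketch (the URL and element ID are assumptions; point them at the deployed GWT app on your Tomcat server):

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

public class GwtSmokeUiTest {
    private WebDriver driver;

    @Before
    public void setUp() {
        driver = new FirefoxDriver();
    }

    @Test
    public void appLoadsAndShowsMainPanel() {
        // Hypothetical URL and locator, for illustration only
        driver.get("http://localhost:8080/myapp/");
        assertTrue(driver.findElement(By.id("mainPanel")).isDisplayed());
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```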
A fairly good example of Selenium automated testing is RichFaces. If you download its source code package, it includes hundreds of UI-testing cases written with Selenium.