Unit testing a component which has a ContextProvider, using React Testing Library or Jest

I have the following component, where MyConfigProvider is a Context Provider. In an RTL unit test I want to verify whether envConfig is set correctly. I cannot set a data-testid on MyConfigProvider. Is there a way to test this scenario?
return (
  <MyConfigProvider value={envConfig}>
    <CustomComponent />
  </MyConfigProvider>
)


Pytest + Appium test framework

I'm very new to automation development, and currently starting to write an Appium + pytest based Android app testing framework.
I managed to run tests on a connected device using this code, which seems to use unittest:
import unittest
from appium import webdriver

class demo(unittest.TestCase):
    reportDirectory = 'reports'
    reportFormat = 'xml'
    dc = {}
    driver = None
    # testName = 'test_setup_tmotg_demo'

    def setUp(self):
        self.dc['reportDirectory'] = self.reportDirectory
        self.dc['reportFormat'] = self.reportFormat
        # self.dc['testName'] = self.testName
        self.dc['udid'] = 'RF8MA2GW1ZF'
        self.dc['appPackage'] = 'com.tg17.ud.internal'
        self.dc['appActivity'] = 'com.tg17.ud.ui.splash.SplashActivity'
        self.dc['platformName'] = 'android'
        self.dc['noReset'] = 'true'
        self.driver = webdriver.Remote('http://localhost:4723/wd/hub', self.dc)

    # def test_function1(self):
    #     code
    # def test_function2(self):
    #     code
    # def test_function3(self):
    #     code
    # etc...

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()
As you can see, all the functions are currently within the 'demo' class.
The intention is to create several test cases for each part of the app (for example: registration, main screen, premium subscription, etc.). That could sum up to hundreds of test cases eventually.
It seems to me that simply continuing to list them all in this same class would be messy and would give me very limited control. However, I haven't found any other way to arrange my tests while keeping the device connected via Appium.
The question is what would be the right way to organize the project so that I can:
Set up the device with appium server
Run all the test suites in sequential order (registration, main screen, subscription, etc...).
Perform the cleaning... export results, disconnect device, etc.
I hope I described the issue clearly enough. Would be happy to elaborate if needed.
Well, you have a lot of questions here, so it might be good to split them up into separate threads. But first of all, you can learn a lot about how Appium works by checking out the documentation here, and for the unittest framework here.
All Appium cares about is the capabilities file (or variable). So you can either populate it manually or write some helper function to do that for you. Here is a list of what can be used.
You can create as many test classes (or suites) as you want and add them together in any order you wish. This helps to break things up into manageable chunks. (See the example below.)
You will have to create some helper methods here as well, since Appium itself will not do much cleaning. You can use the adb command in the shell for managing Android devices; see the sketch after the example below.
import unittest
from unittest import TestCase

# Create a Base class for common methods
class BaseTest(unittest.TestCase):
    # setUpClass will only be run once per class, not before every test
    @classmethod
    def setUpClass(cls) -> None:
        # Init your driver and read the capabilities here
        pass

    @classmethod
    def tearDownClass(cls) -> None:
        # Do cleanup, close the driver, ...
        pass

# Use the BaseTest class from before
# You can then duplicate this class for other suites of tests
class TestLogin(BaseTest):
    @classmethod
    def setUpClass(cls) -> None:
        super(TestLogin, cls).setUpClass()
        # Do things here that are needed only once (like logging in)

    def setUp(self) -> None:
        # This is executed before every test
        pass

    def testOne(self):
        # Write your tests here
        pass

    def testTwo(self):
        # Write your tests here
        pass

    def tearDown(self) -> None:
        # This is executed after every test
        pass

if __name__ == '__main__':
    # Load the tests from the suite class we created
    test_cases = unittest.defaultTestLoader.loadTestsFromTestCase(TestLogin)
    # If you want to add more (TestSomethingElse would be another suite class like TestLogin)
    test_cases.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestSomethingElse))
    # Run the actual tests
    unittest.TextTestRunner().run(test_cases)
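For the cleanup step mentioned above, a thin wrapper around adb is usually enough. A minimal sketch, assuming the adb binary is on PATH (the helper names and the choice of pm clear are mine, not part of Appium):
import subprocess

def adb(serial, *args):
    # Run one adb command against a specific device and return its output
    result = subprocess.run(['adb', '-s', serial] + list(args),
                            capture_output=True, text=True, check=True)
    return result.stdout

def cleanup_device(serial, app_package):
    # Clear the app's data so the next suite starts from a known state
    adb(serial, 'shell', 'pm', 'clear', app_package)

# Example usage with the values from the question:
# cleanup_device('RF8MA2GW1ZF', 'com.tg17.ud.internal')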

How to handle test case failures in Katalon Studio

In my test case I decide whether I want to toggle my radio button. I check if my radio button does not have the checked attribute as follows:
TestObject srcCurrOpModeRadioBtn = guiUtils.createControl('config-src-op-mode-currentradio')
if (WebUI.verifyElementNotHasAttribute(srcCurrOpModeRadioBtn, 'checked', 5)) {
    TestObject srcCurrOpModeToggle = guiUtils.createControl('config-src-op-mode-current')
    WebUI.click(srcCurrOpModeToggle)
    WebUI.delay(1)
}
This works fine when my object does not have the checked attribute, but when my object is already checked (i.e. in the state that I want it to be in), my test case fails. How do I make it carry on with the rest of the test instead of failing?
In simpler words: I have a toggle that switches between two modes, let's call them mode1 and mode2. This specific test tests mode1's preferences, so before testing those preferences I have to toggle to mode1. My logic works when the toggle is at mode2 at the start of the test, but it fails when I am already on mode1. I know that it fails because when mode1 is selected, its radio button already has the checked attribute, so the if statement fails. But I don't want my test case to fail because of this; I want to go on and test mode1's preferences.
WebUI.verifyElementNotHasAttribute() takes another, optional parameter: flow control.
You need to pass FailureHandling.OPTIONAL so the test continues execution even if the verification fails. Then you can write the "else" part with the corresponding logic, as sketched below.
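A minimal sketch of that pattern, reusing the object IDs from the question (the import path is Katalon's standard one):
import com.kms.katalon.core.model.FailureHandling

TestObject srcCurrOpModeRadioBtn = guiUtils.createControl('config-src-op-mode-currentradio')

// With FailureHandling.OPTIONAL a failed verification is logged as a warning
// instead of failing the test, and the returned boolean drives the branch.
if (WebUI.verifyElementNotHasAttribute(srcCurrOpModeRadioBtn, 'checked', 5, FailureHandling.OPTIONAL)) {
    // Not checked yet: toggle over to mode1 first
    TestObject srcCurrOpModeToggle = guiUtils.createControl('config-src-op-mode-current')
    WebUI.click(srcCurrOpModeToggle)
    WebUI.delay(1)
} else {
    // Already on mode1: nothing to toggle, continue with the preference tests
}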

How can we make Cypress scripts easily maintainable, like the POM in other tools such as Selenium

This is just a general clarification about building a framework using cypress.io.
In Cypress, can we write a test framework like the page object model in Selenium?
That model makes it easy to maintain tests.
For example, if the ID or class of a particular element that is used across multiple tests/files changes with a new version of the application, in Cypress it is hard to go into multiple test files/tests and change the ID, right?
Can we follow the same page object model concept, declaring all elements as variables in each page and using the variable names in tests/functions?
Also, can we reuse these variables across different test .js files?
If yes, can you please give a sample?
Thanks
I have seen only a few people using the POM concept while creating an automation framework with Cypress. Whether it is advisable to follow the POM model depends on your tools and architecture; according to the Cypress team it is not recommended, though that may be a debatable topic. Read this: https://www.cypress.io/blog/2019/01/03/stop-using-page-objects-and-start-using-app-actions/#
We can declare the variables in the cypress.env.json file or the cypress.json file like below:
{
  "weight": "85",
  "height": "180",
  "age": "35"
}
Then, if you want to use them in a test spec, create a new variable and read the value like below in the test spec.
const t_weight = Cypress.env('weight');
const t_height = Cypress.env('height');
Now you can use the variables in the respective textbox inputs of your pages:
cy.get('#someweighttextfieldID').type(t_weight);
cy.get('#someheighttextfieldID').type(t_height);
or read the value directly:
cy.get('#someweighttextfieldID').type(Cypress.env('weight'));
example:
/* declare variables in the 'test-spec.js' file */
const t_weight = Cypress.env('weight');
const t_height = Cypress.env('height');

// Cypress test - assume the test below exercises some action and types the variables into text boxes
describe('Cypress test to receive variable', function () {
    it('Cypress test to receive variable', function () {
        cy.visit('/')
        cy.get('#someweighttextfieldID').type(t_weight);
        cy.get('#someheighttextfieldID').type(t_height);
        // or read the variable straight away
        cy.get('#someweighttextfieldID').type(Cypress.env('weight'));
    })
});
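To reuse element locators (rather than data) across spec files, one lightweight alternative to a full POM is a plain module that every spec imports. A sketch, where the file path and selector values are made up for illustration:
// cypress/support/selectors.js - the single place to update when an ID changes
export const weightField = '#someweighttextfieldID';
export const heightField = '#someheighttextfieldID';

// in any test-spec.js:
// import { weightField, heightField } from '../support/selectors';
// cy.get(weightField).type(Cypress.env('weight'));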

Debugging test cases that combine Robot Framework and Python Selenium

Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites. It supports Python 3.6 and Selenium.
My project is called "Automation" and the test suites are in .robot files.
The test suites contain test cases, whose steps are "Keywords":
*** Test Cases ***
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
Those "Keywords" are imported from python library and look like:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
basicVehicleCreation = sideBarPage.createNewVehicle()
basicVehicleCreation.setKennzeichen(registrationno)
basicVehicleCreation.setBeschreibung(description)
TestCaseKeywords.carnumber = basicVehicleCreation.save()
The problem is that when I run the test cases, the log only shows the result of this whole Python function: pass or fail. I can't see at which step it failed, whether at the first or the second step of the function.
Is there any plugin or other solution that would show me exactly which Python call passed or failed? (Of course, a workaround is to wrap every function in its own keyword in the test case, but that is not what I prefer.)
If you need to "step into" a Python-defined keyword, you need to use a Python debugger together with RED.
This can be done with any Python debugger; if you like to have everything in one application, PyDev can be used with RED.
Follow the help document below, and if you run into any problems leave a comment here.
RED Debug with PyDev
If you want to know which statement in the Python-based keyword failed, you simply need to have it throw an appropriate error. Robot won't do this for you, however; from a reporting standpoint, a Python-based keyword is a black box. You will have to explicitly add logging messages and raise useful errors.
For example, the call to sideBarPage.createNewVehicle() should throw an exception such as "unable to create new vehicle". Likewise, the call to basicVehicleCreation.setKennzeichen(registrationno) should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self, registrationno, description):
    headerPage = HeaderPage(TestCaseKeywords.driver)
    sideBarPage = headerPage.selectDaten()
    try:
        basicVehicleCreation = sideBarPage.createNewVehicle()
    except:
        raise Exception("unable to create new vehicle")
    try:
        basicVehicleCreation.setKennzeichen(registrationno)
    except:
        raise Exception("unable to register new vehicle")
    ...

Issue unit testing Sitecore and Glass with NUnit

We've been testing our Sitecore code with the codeflood test runner but wanted to do more to automate our tests in local and CI builds. I've been following the solution laid out by Mike Edwards on how to use NUnit to run Sitecore tests:
http://www.experimentsincode.com/?p=232
Later, Dan Solovay posted some thoughts on how to improve that:
http://www.dansolovay.com/2013/01/sitecore-nunit-testing-simplified.html
So far this works great in a Visual Studio build. Config is copied from the Sitecore website to the test project, and NUnit can execute tests that retrieve items from Sitecore, all without a context.
My problem - we make use of Glass Mapper for things like this:
Database database = global::Sitecore.Configuration.Factory.GetDatabase("master");
ISitecoreService SitecoreService = new SitecoreService(database);

var catalogItem = database.GetItem([guid to our item]);
Assert.IsNotNull(catalogItem);

var catalog = SitecoreService.CreateType<ProductCatalog>(catalogItem, true, true);
Assert.NotNull(catalog);
// Assert other things on our ProductCatalog class...
The problem seems to be that Glass Mapper's SitecoreService constructor needs a context, and if it doesn't get one it uses "Default". Since we're executing in NUnit there is no context, and the creation of the SitecoreService fails.
I doubt there is a clear-cut answer that fixes this, but I'd be interested in anyone's thoughts.
Maybe the use of Glass Mapper in the test just isn't possible without the Sitecore context. On the other hand, I am by no means a Glass expert; maybe there is a different way to go about mapping my class in the test?
The Glass SitecoreService and SitecoreContext both have interfaces; your tests should mock these interfaces using a mocking framework like NSubstitute or Moq. For example, using NSubstitute:
var product = new ProductCatalog();
product.Title = "Hello world";

ISitecoreService service = Substitute.For<ISitecoreService>();
service.GetItem<ProductCatalog>([Guid]).Returns(product);

var result = service.GetItem<ProductCatalog>([Guid]);
Assert.AreEqual("Hello world", result.Title);
Your test above seems to be testing whether Glass returns an item, rather than testing the business logic of your application. You should avoid these sorts of tests.