How do I read environment variables in Postman tests?

I'm using the packaged app version of Postman to write tests against my REST API. I'm trying to manage state between consecutive tests. To facilitate this, the Postman object exposed to the JavaScript test runtime has methods for setting variables, but none for reading them.
postman.setEnvironmentVariable("key", value );
Now, I can read this value in the next call via the {{key}} syntax that pulls values in from the current environment. But this doesn't work in the tests; it only works when building the request.
So, is there a way to read these values from the tests?

According to the docs here you can use
environment["foo"] OR environment.foo
globals["bar"] OR globals.bar
to access them.
i.e.:
postman.setEnvironmentVariable("foo", "bar");
tests["environment var foo = bar"] = environment.foo === "bar";
postman.setGlobalVariable("foobar", "1");
tests["global var foobar = true"] = globals.foobar == true;
postman.setGlobalVariable("bar", "0");
tests["global var bar = false"] = globals.bar == false;

Postman has since updated its sandbox and added a pm.* API. The older syntax for reading variables in test scripts still works, but according to the docs:
Once a variable has been set, use the pm.variables.get() method or, alternatively, use the pm.environment.get() or pm.globals.get() method depending on the appropriate scope to fetch the variable. The method requires the variable name as a parameter to retrieve the stored value in a script.
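For example, a minimal test script using the newer API (mirroring the older examples above) might look like:
pm.environment.set("foo", "bar");
pm.test("environment var foo = bar", function () {
    pm.expect(pm.environment.get("foo")).to.eql("bar");
});
// pm.variables.get() resolves a variable across scopes (local, data, environment, collection, global)
pm.test("foo also resolves via pm.variables", function () {
    pm.expect(pm.variables.get("foo")).to.eql("bar");
});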


Stop huge error output from testing-library

I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now - but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this usually unhelpful (I couldn't find an element; here's the HTML where it isn't), but it also gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
// configure comes from @testing-library/dom
configure({
    getElementError: (message: string, container) => {
        const error = new Error(message);
        error.name = 'TestingLibraryElementError';
        error.stack = null;
        return error;
    },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
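For example, a Jest configuration that registers a global setup file might look like this (the file names are placeholders, not part of the original answer):
// jest.config.js
module.exports = {
    // runs once per test file, after the test framework has been installed
    setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};
// jest.setup.js would then contain the configure({ getElementError: ... }) call shown above.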
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their reasons, so if you have yours, fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
    const errorObject = console.error; // store the state of the object
    console.error = jest.fn(); // mock the object
    // code
    // assertion (expect)
    console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all the tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing that is done by RTL. I am not sure, as I haven't tried this, but if you look at the docs I linked, it says:
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM printout. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know, just mock it out like option 1 above (a sketch follows below).
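For instance, if it turned out to be console.warn, a Jest spy could silence it per test file; this is just a sketch under that assumption, since I haven't checked which console method the library actually uses:
let warnSpy: jest.SpyInstance;
beforeEach(() => {
    // replace console.warn with a no-op for the duration of each test
    warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {});
});
afterEach(() => {
    // restore the original implementation so other tests are unaffected
    warnSpy.mockRestore();
});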
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM();
While prettyDOM() can't be disabled, you can limit the number of printed lines to 0, which gives you just the error message and three dots ... below it.
Here is an example printout I messed around with:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is to pass in an environment variable before executing your test suite, so for example with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
    const error = new Error(
        [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
    )
    error.name = 'TestingLibraryElementError'
    return error
},
The callstack can also be removed
You can change how the message is built by setting the DOM Testing Library message-building function via configure. In my Angular project I added this to test.js:
configure({
    getElementError: (message: string, container) => {
        const error = new Error(message);
        error.name = 'TestingLibraryElementError';
        error.stack = null;
        return error;
    },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

Protractor Custom Locator: Not available in production, but working absolutely fine on localhost

I have added a custom locator in Protractor; below is the code:
const customLocaterFunc = function (locater: string, parentElement?: Element, rootSelector?: any) {
    var using = parentElement || (rootSelector && document.querySelector(rootSelector)) || document;
    return using.querySelector("[custom-locater='" + locater + "']");
};
by.addLocator('customLocater', customLocaterFunc);
And then I have configured it inside the protractor.conf.js file, in the onPrepare method, like this:
...
onPrepare() {
    require('./path-to-above-file/');
    ...
}
...
When I run my tests on the localhost, using browser.get('http://localhost:4200/login'), the custom locator function works absolutely fine. But when I use browser.get('http://11.15.10.111/login'), the same code fails to locate the element.
Please note that the test runs, the browser opens, the user input gets provided, and the user logs in successfully, but the element referred to via this custom locator is not found.
FYI, 11.15.10.111 is the remote machine (a virtual machine) where the application is deployed. So, in short, the custom locator works as expected on localhost but fails on production.
Not an answer, but something you'll want to consider.
I remember adding this custom locator, encountering some problems with it, and realising it's just an attribute name... nothing fancy, so I thought it's actually much faster to write
let elem = $('[custom-locator="locator"]')
which is equivalent to
let elem = element(by.css('[custom-locator="locator"]'))
than
let elem = element(by.customLocator('locator'))
And I gave up on this idea. So maybe you'll want to go this way too
I was able to find a solution to this problem: I used the data- prefix for the custom attribute in the HTML, which lets me find that custom attribute on the production build as well.
It is an HTML5 convention to prefix any custom attribute with data-.
Apart from this, another mistake I was making was with the selector's name. In my code the attribute value is in camelCase (loginBtn), but in the production build it was replaced with loginbtn (all lowercase), which is why my custom locator was not able to find it on the production build.
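For example (the attribute name and value below are hypothetical), with a data- attribute in the template the element can be located with a plain CSS selector, so no custom locator is needed:
// template (hypothetical): <button data-locator="loginbtn">Login</button>
// note the all-lowercase value, matching what the production build emits
let loginButton = element(by.css('[data-locator="loginbtn"]'));
// or the equivalent shorthand:
let loginButtonShort = $('[data-locator="loginbtn"]');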

How to access secrets from Jenkins in a .scala file?

I have a secret text binding variable SECRET_TKN in the Jenkins job configuration. I want to access this variable in a .scala file. How do I access it generically in my code?
I have tried the following, but it doesn't seem to work correctly:
val token = sys.env("${SECRET_TKN}")
println("value = " + token)
The console output shows the value as Some(***), causing the API calls to fail; I believe the Some wrapper is coming along with the actual fetched value.
I also tried sys.env("${?STG_SERVICE_TKN}"), but no luck.
sys.env is keyed by variable name, so this should work:
val token = sys.env("SECRET_TKN")
Your interpolated version would only work if SECRET_TKN were itself a Scala value holding the variable name, for example:
val SECRET_TKN = "SECRET_TKN"
val token = sys.env(s"${SECRET_TKN}")
It's better practice to use sys.env.get("mySecret"), which gives you an Option[String] rather than throwing an exception if the variable is missing.

TYPO3: repository->findAll() not working

I am building an extension with a backend module. When I call the findAll() method it returns a "QueryResult" object.
I tried to retrieve objects with findByUid() and it does work.
I set the storage pid in the typoscript:
plugin.tx_hwforms.persistence.storagePid = 112
I can also see it in the typoscript object browser.
I also added this to my repository class:
public function initializeObject()
{
    $defaultQuerySettings = $this->objectManager->get(\TYPO3\CMS\Extbase\Persistence\Generic\Typo3QuerySettings::class);
    $defaultQuerySettings->setRespectStoragePage(false);
    $this->setDefaultQuerySettings($defaultQuerySettings);
}
so that the storage pid is ignored ...
It's still not working; findAll() doesn't return an array of entities as it should.
A repository must return a QueryResult from the findAll() method. Only methods which return a single object (findOneByXYZ()) return anything else.
All of the following operations will cause a QueryResult to load the actual results it contains. Until you perform one of these, no results are loaded and debugging the QueryResult will yield no information except for the original Query.
$queryResult->toArray();
$queryResult->offsetGet($offset); and $queryResult[$offset];
$queryResult->offsetExists($offset);
$queryResult->offsetSet($offset, $value); and $queryResult[$offset] = $value; (but be aware that doing this yourself with a QueryResult is illogical).
$queryResult->offsetUnset($offset); and unset($queryResult[$offset]); (again, illogical to use this yourself)
$queryResult->current(), ->key(), ->next(), ->prev(), ->rewind() and ->valid() which can all be called directly or will be called if you begin iterating the QueryResult.
Note that ->getFirst() and ->count() do not cause the original query to fire and will not fill results if they are not already filled. Instead, they will perform an optimised query.
Summa summarum: when you get a QueryResult you must trigger it somehow, which normally happens when you begin to render the result set. It is not a prefilled array; it is a dynamically filled Iterator.
This should work. There must be an issue with your storage page: in findAll() Extbase checks the storage, but in findByXXX() it ignores the storage.
$objectManager = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance('TYPO3\\CMS\\Extbase\\Object\\ObjectManager');
$querySettings = $objectManager->get('TYPO3\\CMS\\Extbase\\Persistence\\Generic\\Typo3QuerySettings');
$querySettings->setRespectStoragePage(FALSE);
$this->cityRepository->setDefaultQuerySettings($querySettings);
$cities = $this->cityRepository->findAll();
Additionally, use the TypoScript module configuration, like
module.tx_hwforms.persistence.storagePid = 112
Ensure your TypoScript is loaded in the root. For BE modules I prefer to use
EXT:hwforms/ext_typoscript_setup.txt
where you write your module and Extbase configuration.
Try to debug like below and check that the findAll() method is present for this repository:
\TYPO3\CMS\Extbase\Utility\DebuggerUtility::var_dump(get_class_methods($this->yourRepositoryName)); exit();
After adding all your changes, you need to uninstall and reinstall the extension once.
I would inspect the generated query itself. Configure the following option in the install tool:
$GLOBALS["TYPO3_CONF_VARS"]["sqlDebug"]
Note: don't do this in a production environment!
Explanation for sqlDebug:
Setting it to "0" will avoid any information being print on screen.
Setting it to "1" will show errors only.
Setting it to "2" will print all queries on screen.
So in Production you want to keep it at "0", in development environments you should set it to "1", if you want to know why some result is empty, configure it to "2".
I would guess that some enableFields configuration causes your problem.
If you retrieve an object by findByUid() you will get a result because enableFields are ignored. In every other case enableFields are applied, and that may cause your empty result.

Disable logging on FileConfigurationSourceChanged - LogEnabledFilter

I want Administrators to enable/disable logging at runtime by changing the enabled property of the LogEnabledFilter in the config.
There are several threads on SO that explain workarounds, but I want it this way.
I tried to change the Logging Enabled Filter like this:
private static void FileConfigurationSourceChanged(object sender, ConfigurationSourceChangedEventArgs e)
{
    var fcs = sender as FileConfigurationSource;
    System.Diagnostics.Debug.WriteLine("----------- FileConfigurationSourceChanged called --------");

    LoggingSettings currentLogSettings = e.ConfigurationSource.GetSection("loggingConfiguration") as LoggingSettings;
    var fdtl = currentLogSettings.TraceListeners.Where(tld => tld is FormattedDatabaseTraceListenerData).FirstOrDefault();
    var currentLogFileFilter = currentLogSettings.LogFilters.Where(lfd => { return lfd.Name == "Logging Enabled Filter"; }).FirstOrDefault();
    var filterNewValue = (bool)currentLogFileFilter.ElementInformation.Properties["enabled"].Value;

    var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
    runtimeFilter.Enabled = filterNewValue;

    var test = Logger.Writer.IsLoggingEnabled();
}
But test always reveals the initially loaded config value; it does not change.
I thought that when changing the value in the config, the changes would be propagated automatically to the runtime configuration. But this isn't the case!
Setting it programmatically, as shown in the code above, doesn't work either.
It's time to rebuild Enterprise Library or shut it down.
You are right that the code you posted does not work. That code is using a config file (FileConfigurationSource) as the method to configure Enterprise Library.
Let's dig a bit deeper and see if programmatic configuration will work.
We will use the Fluent API since it is the preferred method for programmatic configuration:
var builder = new ConfigurationSourceBuilder();
builder.ConfigureLogging()
    .WithOptions
        .DoNotRevertImpersonation()
    .FilterEnableOrDisable("EnableOrDisable").Enable()
    .LogToCategoryNamed("General")
        .WithOptions.SetAsDefaultCategory()
        .SendTo.FlatFile("FlatFile")
            .ToFile(@"fluent.log");

var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
var defaultWriter = new LogWriterFactory(configSource).Create();

defaultWriter.Write("Test1", "General");

var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;

defaultWriter.Write("Test2", "General");
If you try this code the filter will not be updated -- so another failure.
Let's try to use the "old school" programmatic configuration by using the classes directly:
var flatFileTraceListener = new FlatFileTraceListener(
    @"program.log",
    "----------------------------------------",
    "----------------------------------------"
);
LogEnabledFilter enabledFilter = new LogEnabledFilter("Logging Enabled Filter", true);

// Build Configuration
var config = new LoggingConfiguration();
config.AddLogSource("General", SourceLevels.All, true)
      .AddTraceListener(flatFileTraceListener);
config.Filters.Add(enabledFilter);

LogWriter defaultWriter = new LogWriter(config);

defaultWriter.Write("Test1", "General");

var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;

defaultWriter.Write("Test2", "General");
Success! The second ("Test2") message was not logged.
So, what is going on here? If we instantiate the filter ourselves and add it to the configuration, it works; but when relying on the Enterprise Library configuration, the filter value is not updated.
This leads to a hypothesis: when using Enterprise Library configuration, new filter instances are being returned each time, which is why changing the value has no effect on the internal instance being used by Enterprise Library.
If we dig into the Enterprise Library code we (eventually) hit on the LoggingSettings class and its BuildLogWriter method. This is used to create the LogWriter. Here's where the filters are created:
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter());
So this line is using the configured LogFilterData and calling the BuildFilter method to instantiate the applicable filter. In this case, the BuildFilter method of the configuration class LogEnabledFilterData returns an instance of LogEnabledFilter:
return new LogEnabledFilter(this.Name, this.Enabled);
The issue with this code is that this.LogFilters.Select returns a lazily evaluated enumeration that creates the filters, and this enumeration is passed into the LogWriter to be used for all filter manipulation. Every time the filters are referenced, the enumeration is evaluated and a new filter instance is created! This confirms the original hypothesis.
To make it explicit: every time LogWriter.Write() is called, a new LogEnabledFilter is created based on the original configuration. When the filters are queried by calling GetFilter(), a new LogEnabledFilter is created based on the original configuration. Any changes to the object returned by GetFilter() have no effect on the internal configuration, since it's a new object instance; internally, Enterprise Library will create yet another instance on the next Write() call anyway.
Not only is this just plain wrong, it is also inefficient to create new objects on every call to Write(), which could be invoked many times.
An easy fix for this issue is to evaluate the LogFilters enumeration by calling ToList():
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter()).ToList();
This evaluates the enumeration only once, ensuring that only one filter instance is created. Then the GetFilter() and update-the-filter-value approach posted in the question will work.
Update:
Randy Levy provided a fix in his answer above.
Implement the fix and recompile the enterprise library.
Here is the answer from Randy Levy:
Yes, you can disable logging by setting the LogEnabledFilter. The main way to do this would be to manually edit the configuration file -- this is the main intention of that functionality (the developer's guide references administrators tweaking this setting). Other similar approaches to setting the filter are to programmatically modify the original file-based configuration (which is essentially a reconfiguration of the block), or to reconfigure the block programmatically (e.g. using the fluent interface). None of the programmatic approaches are what I would call simple. – Randy Levy
If you try to get the filter and disable it, I don't think it has any effect without a reconfiguration. So the following code still ends up logging: var enabledFilter = logWriter.GetFilter<LogEnabledFilter>(); enabledFilter.Enabled = false; logWriter.Write("TEST"); One non-EntLib approach would be just to manage the enable/disable yourself with a bool property and a helper class. But I think the priority approach is a pretty straightforward alternative.
Conclusion:
In your custom Logger class, implement an IsLoggingEnabled property and change/check it at runtime.
This won't work:
var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
runtimeFilter.Enabled = false/true;