I was trying to test my code and got this warning when running pytest:

def test_login():
    print("Login to application")

def test_checkout():
    print("Checkout")

def test_logout():
    print("Logout From application")
warnings summary
....\anaconda3\lib\site-packages\pyreadline\py3k_compat.py:19
C:\Users\hp\anaconda3\lib\site-packages\pyreadline\py3k_compat.py:19: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
    return isinstance(x, collections.Callable)

As stated in the pytest docs on warnings:
By default pytest will display DeprecationWarning and PendingDeprecationWarning warnings from user code and third-party libraries, as recommended by PEP-0565. This helps users keep their code modern and avoid breakages when deprecated warnings are effectively removed.
In your case, it seems to come from a third-party library (pyreadline) that is importing collections in a way that will no longer be supported in Python 3.10. So you may upgrade the library if a newer version fixes this import, or choose to disable or filter this warning in pytest. For the latter, the pytest docs describe a set of gradual options to do so.
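For example, if you choose to filter it rather than fix the dependency, a minimal pytest.ini along these lines should work (a sketch; the message pattern is simply the start of the warning text shown above):

# pytest.ini
[pytest]
filterwarnings =
    ignore:Using or importing the ABCs:DeprecationWarning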

Related

Pytest pluggy._manager.PluginValidationError due to the pytest-json-report plugin

I use pytest together with the pytest-json-report plugin. I have the pytest_json_modifyreport hook in the conftest.py file.
When I run the command pytest --json-report, it is OK. But when I run the simple pytest command, it yields the following: pluggy._manager.PluginValidationError: unknown hook 'pytest_json_modifyreport' in plugin.
Is it possible to get rid of that error without commenting out the hook?
According to the official documentation:
A note on hooks
If you're using a pytest_json_* hook although the plugin is not installed or not active (not using --json-report), pytest doesn't recognize it and may fail with an internal error like this:
INTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>
You can avoid this by declaring the hook implementation optional:
import pytest

@pytest.hookimpl(optionalhook=True)
def pytest_json_runtest_metadata(item, call):
    ...
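Applied to the hook from the question, the conftest.py would look roughly like this (the hook body here is only an illustration of modifying the report dict):

import pytest

# Mark the hook optional so a plain `pytest` run (without --json-report)
# does not fail plugin validation while the plugin's hooks are inactive.
@pytest.hookimpl(optionalhook=True)
def pytest_json_modifyreport(json_report):
    # illustrative only: drop the environment section from the JSON report
    json_report.pop("environment", None)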

How to debug unit tests while developing a package in Julia

Say I develop a package with a limited set of dependencies (for example, LinearAlgebra).
In the unit-testing part, I might need additional dependencies (for instance, CSV to load a file). I can configure that in the Project.toml; all good.
Now, from there and in VS Code, how can I debug the unit tests? I tried running "runtests.jl" in the debugger; however, it unsurprisingly complains that the CSV package is unavailable.
I could add the CSV package (as a temporary solution), but I would prefer that the debugger run with the unit-testing configuration; how can I achieve that?
As requested, here is how it can be reproduced (it is not quite minimal, but I used a commonly used package instead, as it gives confidence that the package itself is not the problem). We will use DataFrames and try to execute the debugger for its unit tests.
Make a local version of DataFrames for the purpose of developing a feature in it. I execute dev DataFrames in a new REPL.
Select the correct environment (in .julia/dev/DataFrames) through the VS Code user interface.
Execute the "proper" unit testing by executing test DataFrames at the pkg prompt. Everything should go smoothly.
Try to execute the tests directly (open runtests.jl and use the "Run" button in VS Code). I see some errors of the type:
LoadError: ArgumentError: Package CategoricalArrays not found in current path:
- Run `import Pkg; Pkg.add("CategoricalArrays")` to install the CategoricalArrays package.
which is consistent with CategoricalArrays being present in the [extras] section of the Project.toml but not present in the [deps].
Finally, instead of the "Run" command, execute "Run and Debug". I encounter similar errors; here is the first one:
Test Summary: | Pass Total
merge | 19 19
PASSED: index.jl
FAILED: dataframe.jl
LoadError: ArgumentError: Package DataStructures not found in current path:
- Run `import Pkg; Pkg.add("DataStructures")` to install the DataStructures package.
So I can't debug the code after the part requiring the extras packages.
After all that I delete this package with the command free DataFrames at the pkg prompt.
I see the same behavior in my package.
I'm not certain I understand your question, but I think you might be looking for the TestEnv package. It allows you to activate a temporary environment containing the [extras] dependencies. The Discourse announcement contains a good description of the use cases.
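A minimal sketch of how that looks in the REPL (assuming TestEnv is installed in your base environment and the package's own environment is active):

using TestEnv
TestEnv.activate()          # temporary environment that also includes the [extras] test dependencies
include("test/runtests.jl") # test-only packages such as CSV now resolve; the debugger can step through this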
Your runtests.jl file should contain all the necessary imports to run the tests.
Hence you are expected to have in your runtests.jl file lines such as:
using YourPackageName
using CSV
# the lines with tests now go here.
This is standard Julia package layout. For an example, have a look at any mature Julia package, such as DataFrames.jl (https://github.com/JuliaData/DataFrames.jl/blob/main/test/runtests.jl).

How to fix '[WARNING] The callable Microsoft.Quantum.Canon.InverseMod has been deprecated.' warning in Q#?

In the IntegerFactorization Q# sample in the Microsoft/Quantum repository, there is no InverseMod function. But when I compile and run the code, it produces a number of warnings: "The callable Microsoft.Quantum.Canon.InverseMod has been deprecated in favor of Microsoft.Quantum.Math.InverseModI.". How can I fix it?
The word InverseMod does not appear anywhere in the Shor.qs file.
I expect the warning to disappear. Please help me.
This was caused by the use of the deprecated function InverseMod in arithmetic libraries used by the IntegerFactorization project.
The project has dependencies on several NuGet packages, among them Microsoft.Quantum.Standard, which provides standard library functions, including modular arithmetic. This package used the deprecated function in its version 0.6.1905.301, which caused this runtime warning. If you check the source code of the package in the Microsoft/QuantumLibraries repository, you'll notice that this was fixed two days ago, so with the next release of the NuGet package this warning will disappear.
Edit: This should be fixed in release 0.7.1905.3109. The samples repository has been updated to use the new release; if you get the latest version of the repository and try running the project again the warning should go away.
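If a local copy of the sample still pins the older package, bumping the reference in the project file should be enough; roughly (version number taken from the release mentioned above, exact project-file contents may differ):

<!-- in the sample's .csproj: use the release with the deprecated call removed -->
<PackageReference Include="Microsoft.Quantum.Standard" Version="0.7.1905.3109" />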

my coffeescript file compiles but mocha gives an error

I have a project that uses "coffee-script": "^1.7.1" in its package.json.
The code has this line in it:
[{id: id, name: name}, ...] = result.rows
This compiles fine using coffeescript version 1.7.1
The problem is that I am trying to use mocha for unit tests and it gives me an error on this line:
Parse error on line xyz: Unexpected '...'
Apparently mocha uses an older coffeescript. Is there a way to make it work without adjusting the source for mocha?
EDIT:
my Gruntfile.coffee:
'use strict'
module.exports = ->
  @initConfig
    cafemocha:
      src: ['test/*.coffee']
      options:
        reporter: 'spec'
        ui: 'bdd'
    coffee:
      compile:
        files:
          'lib/mylib.js': ['src/*.coffee']
  @loadNpmTasks 'grunt-cafe-mocha'
  @loadNpmTasks 'grunt-contrib-coffee'
  @registerTask 'default', ['coffee', 'cafemocha']
I added mocha.opts to the test directory:
--require coffee-script/register
--compilers coffee:coffee-script/register
--reporter spec
--ui bdd
but still, when I run grunt, it gives me the same error. I am new to this environment and find it too complicated; please help.
Starting from version 1.7.x, the CoffeeScript compiler must be explicitly registered (see the change log for version 1.7.0).
So the problem is that the CoffeeScript compiler is not registered when you run your mocha tests, and Node.js treats all your .coffee files as .js files.
The best possible solution is to specify --compilers option for your mocha tests:
--compilers coffee:coffee-script/register
If you don't want to include it in every mocha call, you can set it up using a mocha.opts file.
Here are some useful links:
issue about it on github
reference in mocha docs
the reason behind this breaking change in CoffeeScript engine
Update
Looks like your issue is much deeper than I thought.
First, grunt-cafe-mocha doesn't respect mocha.opts because it runs tests by requiring mocha as a dependency, instead of calling the mocha test runner.
So, it would've been enough to add require('coffee-script/register') to the top of your Gruntfile, if not for this old grunt issue.
In short, grunt uses coffee-script 1.3.x, forcing all its tasks to use the same version of coffee. I had the same problem with grunt-contrib-connect, being unable to use the latest coffee-script in my Express app.
So, the only help I can offer is a small grunt task I wrote to solve a similar problem in one of my projects. It runs mocha in a separate child process, thus completely isolating it from grunt.
N.B. I had a thought about releasing this task to npm, but considered it too minor.
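For reference, a minimal sketch of such a task (an assumed reconstruction, not the author's published code):

module.exports = (grunt) ->
  grunt.registerTask 'mocha-child', 'Run mocha in a child process', ->
    done = @async()
    {spawn} = require 'child_process'
    # spawn the project-local mocha binary so it registers the project's own
    # coffee-script version instead of the older one bundled with grunt
    args = ['--compilers', 'coffee:coffee-script/register', '--reporter', 'spec', 'test/']
    child = spawn 'node_modules/.bin/mocha', args, stdio: 'inherit'
    child.on 'close', (code) -> done(code is 0)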

Play 2.0 - access running (Fake)Application from scala console

I'm just getting started with the Play Framework 2.0 (using current trunk 2.1-SNAPSHOT, Scala) and I'm finding it very useful to experiment with the Scala API in the play console.
For some things, however, for example stuff that depends on play.libs.WS API, I'm getting the There is no started application error. Fair enough, but I can't figure out how to set up a fake one up to use from the console, or whether this is even possible.
It seems that play.api.test._ isn't even accessible from the console. Any suggestions?
Update: Thanks to @charroch, I needed to run play test:console, so I can now do:
import play.api.test.Helpers.running
import play.api.test.FakeApplication
val res = running(FakeApplication()) {
  MyWebservice.someFunction()
}
Try test:console to start the console with the test API on the classpath.
You need to have running(FakeApplication) {...} in your test as per:
http://www.playframework.org/documentation/2.0/ScalaTest