Making a member package-protected using wildcards - Scala

I have the following situation:
package com.my.organisation.common.gateway

object Singleton {
  val entrypoint = Entrypoint()
}
I would like to make entrypoint available to classes within com.my.organisation.common.gateway but also within com.my.organisation.other.gateway and also other future com.my.organisation.*.gateway packages.
Is there a way of doing this in Scala? protected[com.my.organisation.*.gateway] val does not compile, but that's the behaviour I am aiming for.

Try this:
package com.my.organisation.common.gateway

object Singleton {
  private[organisation] val entrypoint = Entrypoint()
}
See also: https://alvinalexander.com/scala/how-to-control-scala-method-scope-object-private-package/
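Note that private[organisation] opens access to everything under com.my.organisation, not only the gateway subpackages; Scala's access qualifiers must name a single enclosing package or class, so a wildcard scope like com.my.organisation.*.gateway cannot be expressed. A minimal sketch of how the qualifier behaves from another subpackage (names follow the question):

package com.my.organisation.other.gateway

import com.my.organisation.common.gateway.Singleton

object Consumer {
  // Compiles: other.gateway sits inside com.my.organisation
  val e = Singleton.entrypoint
}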

I suggest using a package object declared in the most common path of the two packages. It can then be shared with all sub-packages, because package objects act as convenient containers shared across an entire package.
This assumes the code for Entrypoint is defined somewhere in the packages mentioned earlier.
// in file com/my/organisation/package.scala
package com.my

package object organisation {
  val entrypoint: Entrypoint = Entrypoint()
}
Then you can import it with import com.my.organisation._ wherever you need it.
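With chained package clauses, the member is even in scope without the import (a small sketch; the consuming object is an assumption):

package com.my.organisation
package other.gateway

object Consumer {
  // entrypoint resolves via the com.my.organisation package object
  val e = entrypoint
}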

Pass configuration object to pytest.main()

I'm wrapping pytest in a Python program that does some setup and builds the argument list to invoke pytest.main:
arg_list = [...]  # build arg_list
pytest.main(args=arg_list)
I also need to pass a configuration object from this wrapper to the tests run by pytest. I was thinking of creating a fixture called conf and referencing it in the test functions:
@pytest.fixture
def conf(request):
    # Obtain configuration object
    ...

def test_mytest(conf):
    # use configuration
    ...
However, I haven't found a way to pass an arbitrary object to fixtures (only options from the pytest arguments list).
Maybe using a hook? Or a plugin injected or initialized from the wrapper?
You can either create a module that is shared between your wrapper and your tests, or serialize the object first.
Pickle the object and load it before tests
This solution keeps your wrapper and tests mostly independent. You could still execute the tests directly and pass the configuration object from the command line if you want to reproduce the test output for a certain object.
It does not work for all objects, because not all objects can be pickled. See "What can be pickled and unpickled?" for more details. This solution respects the scope of the fixture, because the object is reloaded from disk when the fixture is created.
Add a command line option for the path of the pickled file in conftest.py
import pickle

import pytest

def pytest_addoption(parser):
    parser.addoption("--cfg-obj", help="path to the pickled configuration object")

@pytest.fixture
def conf(request):
    path = request.config.getoption("--cfg-obj")
    with open(path, 'rb') as fp:
        return pickle.load(fp)
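Since the object is re-read from disk each time the fixture is created, you can also widen the fixture scope if unpickling per test is too slow. A variation on the conftest.py above (not part of the original answer):

import pickle

import pytest

@pytest.fixture(scope="session")
def conf(request):
    # Load the pickled configuration once for the whole test session
    path = request.config.getoption("--cfg-obj")
    with open(path, 'rb') as fp:
        return pickle.load(fp)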
Pickle the object in wrapper.py and save it in a temporary file.
import pickle
import tempfile

import pytest

config_obj = {"answer": 42}

# Write the pickled object to a temporary file that survives this block
with tempfile.NamedTemporaryFile(delete=False) as fp:
    pickle.dump(config_obj, fp)

args_list = ["tests.py", "--cfg-obj", fp.name]
pytest.main(args=args_list)
Use the conf fixture in tests.py
def test_something(conf):
    assert conf == {'answer': 42}
Share the object between the wrapper and the tests
This solution does not seem very "clean" to me, because the tests can't be executed without the wrapper anymore (unless you add a fallback if the object is not set), but it has the advantage that the wrapper and the tests access the same object. This will work for arbitrary objects. It also introduces a possible dependency between your tests if you modify the state of the object, because the scope parameter of the fixture decorator has no effect here (it always loads the same object).
Create a shared.py module which is imported by the tests and the wrapper. It provides a setter and getter for the shared object.
_cfg_obj = None

def set_config_obj(obj):
    global _cfg_obj
    _cfg_obj = obj

def get_config_obj():
    return _cfg_obj
Set the shared object in wrapper.py
import pytest
from shared import set_config_obj
set_config_obj({"answer": 42})
args_list = ["tests.py"]
pytest.main(args=args_list)
Load the shared object in your conf fixture
import pytest

from shared import get_config_obj

@pytest.fixture
def conf():
    return get_config_obj()

def test_something(conf):
    assert conf == {"answer": 42}
Note that the shared.py module does not have to be outside your tests directory. If you turn the tests directory into a package by adding __init__.py files and put the shared object module there, you can import the tests package from your wrapper and set it with tests.set_config_obj(...), as sketched below.
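A hedged sketch of that layout (module and file names are assumptions):

# tests/__init__.py
from .shared import set_config_obj, get_config_obj

# wrapper.py
import pytest
import tests

tests.set_config_obj({"answer": 42})
pytest.main(args=["tests"])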

PowerShell 5 class: Import module needed for type

I have written a PowerShell 5 class:
class ConnectionManager
{
    # Public Properties
    static [String] $SiteUrl
    static [System.Management.Automation.PSCredential] $Credentials
    static [SharePointPnP.PowerShell.Commands.Base.SPOnlineConnection] $Connection
    ...
}
The type "SharePointPnP.PowerShell.Commands.Base.SPOnlineConnection" is from a custom (installed module), named "SharePointPnPPowerShell2016"
My class is inside another module/file, called "connection.manager.psm1".
When I load this module to make use of this class, I get the following error:
> Import-Module connection.manager.psm1
At connection.manager.psm1:6 char:11
+ static [SharePointPnP.PowerShell.Commands.Base.SPOnlineConnection] ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Unable to find type
[SharePointPnP.PowerShell.Commands.Base.SPOnlineConnection].
When I manually load the (PnP) module in the PowerShell session before loading my module, it is loaded correctly and I can use it.
But I don't want to have to manually load the other module every time before using mine. I tried to import the PnP module directly inside my module by adding:
Import-Module "SharePointPnPPowerShell2016"
at the beginning, before the class declaration, but it changes nothing; the error "Unable to find type" still appears.
Any ideas how to do this correctly?
I think you can fix this problem by using a module manifest.
A manifest has RequiredModules and RequiredAssemblies sections you could use. These handle loading the requirements (as long as they are installed) when you load your custom module, which holds the class.
If you have declared a class in a module, you cannot use it if you Import-Module; that only brings in cmdlets, functions, and aliases. Instead, your script should load the module with using module; that brings in the class as well as the other exported items.
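For example, at the very top of the calling script (path assumed):

using module .\connection.manager.psm1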
(I actually misread the problem; this does not work to solve the querent's specific problem - but for a class that does not use classes from other modules, this will allow importing of classes from modules. The querent has found a known issue in PowerShell; see the comments for further information.)

Postgres PL/JAVA: java.lang.ClassNotFoundException error after loading JAR file in database

I am getting a java.lang.ClassNotFoundException error inside Postgres when running a function that calls a JAR file I have loaded. I have installed and configured PL/Java (including the delivered examples) in my database and can run the examples successfully. I am now attempting to load/install my first JAR, but I am doing something wrong.
My host controls the OS version: CentOS 6.8. Postgres is version 8.4.
I am attempting to install my own very simple java class, which is a derivative of the delivered example Parameters.addOne class. All my code is in /tmp. Here are the steps I've followed:
Doug.java:
package com.msmetric;

import java.math.BigDecimal;
import java.sql.Date;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Time;
import java.sql.Timestamp;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.TimeZone;
import java.util.logging.Logger;

public class Doug {
    public static int addOne(int value) {
        return value + 1;
    }
}
Compiling Doug.java with 'javac Doug.java' succeeds.
Creating the JAR file with the Doug.class file in it using 'jar -cvf Doug.jar Doug.class' also works fine.
Now I load the JAR file into Postgres (public schema), change the classpath, create the function that calls the JAR, then attempt to run it at the psql prompt.
Run sqlj.install_jar from psql:
select sqlj.install_jar('file:/tmp/Doug.jar','Doug',false);
Set the classpath inside Postgres (from psql prompt postgres=#):
select sqlj.set_classpath('public','Doug');
Create the function that calls the JAR. This create function code is taken directly from the examples.ddr file that came with PL/JAVA. I simply changed org.postgres to com.msmetric.
create or replace function addone(int) returns int as 'com.msmetric.Doug.addOne(java.lang.Integer)' language java;
Now with the JAR loaded and function created, I attempt to run it. This function should simply add 1 to the number provided.
select addone(3);
Results:
ERROR: java.lang.ClassNotFoundException: com.msmetric.Doug
Thoughts?
I'm very sorry I didn't see your question sooner. Underneath all the exotic details (PostgreSQL, PL/Java, schemas, classpaths...), there's just a bit of basic Java going on here: if a jar file contains a class Doug.class in package com.msmetric, its path within the jar has to reflect that: it has to be com/msmetric/Doug.class. Otherwise, it won't be found.
You can set up that whole structure step by step:
javac Doug.java
mkdir com
mkdir com/msmetric
mv Doug.class com/msmetric/
jar -cvf Doug.jar com/msmetric/Doug.class
Or, you can let javac do more of the work for you:
mkdir classes
javac -d classes Doug.java
jar -cvf Doug.jar -C classes .
When you give javac a -d directory option, instead of just writing class files next to their .java sources, it will put them all in their proper places under the directory you named, and then you can just tell jar to change into that directory and slurp them all up (don't overlook the . at the end of that jar command).
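Either way, you can confirm the entry path with jar tf (a quick sanity check, not part of the original steps):

$ jar tf Doug.jar
META-INF/
META-INF/MANIFEST.MF
com/msmetric/Doug.class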
Once you fix that, if you retry your original steps, you'll see that you now get a different error:
ERROR: Unable to find static method com.msmetric.Doug.addOne with signature (Ljava/lang/Integer;)I
That happens because you declared the function in Doug.java with int addOne(int value) (that is, taking a primitive int argument), but you declared it in SQL with returns int as 'com.msmetric.Doug.addOne(java.lang.Integer)' taking an Integer object.
Once you correct that:
create or replace function addone(int) returns int as 'com.msmetric.Doug.addOne(int)' language java;
you'll be able to see:
# select addone(3);
addone
--------
4
(1 row)
If you happen to see this belated answer, may I ask what version of PL/Java you are using? That's one detail you didn't mention. If it is older than 1.5.0, newer releases have features that can help you out. For one, you can just annotate that function:
@Function
public static int addOne(int value) {
    return value + 1;
}
and have javac spit out not only the Doug.class file but also a pljava.ddr file with your SQL function declaration already written correctly (no mixing up argument types!). There is a way to include that .ddr file into the jar you create so that you can just call sqlj.install_jar with the last parameter true so it runs the commands in the .ddr and your functions are ready to use. There's a Hello, world example in the docs that shows more of how it's done.
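With the .ddr packaged into the jar, the install step from earlier becomes (same path as above, last parameter now true so the deployment commands run):

select sqlj.install_jar('file:/tmp/Doug.jar', 'Doug', true);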
Cheers,
-Chap

pytest implementing a logfile per test method

I would like to create a separate log file for each test method, and I would like to do this in the conftest.py file and pass the logfile instance to the test method. This way, whenever I log something in a test method, it would go to a separate log file and be very easy to analyse.
I tried the following.
Inside the conftest.py file I added this:
logs_dir = pkg_resources.resource_filename("test_results", "logs")

def pytest_runtest_setup(item):
    test_method_name = item.name
    testpath = item.parent.name.strip('.py')
    path = '%s/%s' % (logs_dir, testpath)
    if not os.path.exists(path):
        os.makedirs(path)
    # make_logger takes care of creating the logfile and returns the Python logging object
    log = logger.make_logger(test_method_name, path)
The problem here is that pytest_runtest_setup does not have the ability to return anything to the test method. At least, I am not aware of it.
So I thought of creating a fixture method inside the conftest.py file with scope="function" and calling this fixture from the test methods. But the fixture method does not know about the pytest Item object. In the case of pytest_runtest_setup, it receives the item parameter, and using that we are able to find out the test method name and test method path.
Please help!
I found this solution by researching further upon webh's answer. I tried to use pytest-logger, but its file structure is very rigid and it was not really useful for me. I found this code working without any plugin. It is based on set_log_path, which is an experimental feature.
Pytest 6.1.1 and Python 3.8.4
# conftest.py
# Required modules
import pytest
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    config = item.config
    logging_plugin = config.pluginmanager.get_plugin("logging-plugin")
    filename = Path('pytest-logs', item._request.node.name + ".log")
    logging_plugin.set_log_path(str(filename))
    yield
Notice that the use of Path can be substituted with os.path.join. Moreover, different tests can be set up in different folders, and a record of all tests run historically can be kept by using a timestamp in the filename. One could use the following filename, for example:
# conftest.py
# Required modules
import pytest
import datetime
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    ...
    filename = Path(
        'pytest-logs',
        item._request.node.name,
        f"{datetime.datetime.now().strftime('%Y%m%dT%H%M%S')}.log"
    )
    ...
Additionally, if one would like to modify the log format, one can change it in pytest configuration file as described in the documentation.
# pytest.ini
[pytest]
log_file_level = INFO
log_file_format = %(name)s [%(levelname)s]: %(message)s
My first Stack Overflow answer!
I found the answer I was looking for.
I was able to achieve it using a function-scoped fixture like this:
@pytest.fixture(scope="function")
def log(request):
    test_path = request.node.parent.name.strip(".py")
    test_name = request.node.name
    node_id = request.node.nodeid
    log_file_path = '%s/%s' % (logs_dir, test_path)
    if not os.path.exists(log_file_path):
        os.makedirs(log_file_path)
    logger_obj = logger.make_logger(test_name, log_file_path, node_id)
    yield logger_obj
    # Teardown: close and detach handlers so the logfile is released
    # (iterate over a copy, since removeHandler mutates the list)
    for handler in list(logger_obj.handlers):
        handler.close()
        logger_obj.removeHandler(handler)
In newer pytest versions this can be achieved with set_log_path.
import os

import pytest

@pytest.fixture(autouse=True)
def manage_logs(request):
    """Set log file name same as test name"""
    request.config.pluginmanager.get_plugin("logging-plugin")\
        .set_log_path(os.path.join('log', request.node.name + '.log'))

Why is my object not a member of package <root> if it's in a separate source file?

I'm having a problem accessing an object defined in the root package. If I have all my code in one file, it works fine, but when I split it across two files, I can't get it past the compiler.
This works fine:
All in one file called packages.scala:
object Foo {
  val name = "Brian"
}
package somepackage {
  object Test extends App {
    println(Foo.name)
  }
}
Witness:
$ scalac packages.scala
$ scala -cp . somepackage.Test
Brian
But if I split the code across two files:
packages.scala
object Foo {
  val name = "Brian"
}
packages2.scala
package somepackage {
  object Test extends App {
    println(Foo.name)
  }
}
it all fails:
$ scalac packages.scala packages2.scala
packages2.scala:3: error: not found: value Foo
So I try to make the reference to Foo absolute:
...
println(_root_.Foo.name)
...
But that doesn't work either:
$ scalac packages.scala packages2.scala
packages2.scala:3: error: object Foo is not a member of package <root>
If Foo is not a member of the root package, where on earth is it?
I think this is the relevant part in the spec:
Top-level definitions outside a packaging are assumed to be injected into a special empty package. That package cannot be named and therefore cannot be imported. However, members of the empty package are visible to each other without qualification.
Source Scala Reference §9.2 Packages.
But don’t ask me why it works if you have the following in packages2.scala:
object Dummy

package somepackage {
  object Test extends App {
    println(Foo.name)
  }
}
Foo is a member of the root package, but you can't refer to it. It's a generic thing with JVM languages (see How to access java-classes in the default-package? for Groovy, and What's the syntax to import a class in a default package in Java?). It's the same for Scala.
From the Java answer:
You can't import classes from the default package. You should avoid
using the default package except for very small example programs.
From the Java language specification:
It is a compile time error to import a type from the unnamed package.
The reason it works in a single file is that everything is available to the compiler at once, and the compiler copes with it. I suspect this is to allow scripting.
Moral of the story: don't use the default package if you're not scripting.
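Applied to the two-file example above, giving Foo a named package (the name common is an assumption) makes everything compile:

// packages.scala
package common

object Foo {
  val name = "Brian"
}

// packages2.scala
package somepackage {
  object Test extends App {
    println(common.Foo.name)
  }
}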
Ran into this when trying to import the main App entrypoint into a test. Putting package scala at the top of the entrypoint definition seems to have made the object globally available. This may be an evil hack, but it works.
E.g.
/src/main/scala/EntryPoint.scala
package scala

object EntryPoint extends App {
  val s = "Foo"
}
/src/test/scala/integration/EntryPointSuite.scala
package integration

import org.scalatest.FlatSpec

class EntryPointSuite extends FlatSpec {
  "EntryPoint" should "have property s" in {
    EntryPoint.main(Array.empty)
    assert(EntryPoint.s == "Foo")
  }
}
}