Trouble connecting to an SSL-enabled Mongo cluster from a Spark application

I'm trying to connect to an SSL-enabled Mongo cluster from a Spark application using a self-signed certificate, and I'm getting the following error:
Exception in monitor thread while connecting to server CLUSTER_NAME
com.mongodb.MongoSocketWriteException: Exception sending message
at com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:525)
at com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:413)
at com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:269)
at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:253)
at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)
at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:106)
at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:63)
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:127)
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No name matching CLUSTER_NAME found
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
My read config URI looks something like this:
val uri: String = "mongodb://" + URLEncoder.encode(Login, "UTF-8") + ":" + URLEncoder.encode(Password, "UTF-8") + "@" + cluster + ":27017/" + database + "." + collection + "?authSource=" + (if (authenticationDatabase != "") authenticationDatabase else "admin") + (if (replicaset == null) "" else "&replicaSet=" + replicaset) + "&ssl=true"
I want to trust the self-signed cert with something like:
import java.security.cert.X509Certificate
import javax.net.ssl.X509TrustManager

class TrustAllX509TrustManager extends X509TrustManager {
  override def getAcceptedIssuers = new Array[X509Certificate](0)
  override def checkClientTrusted(certs: Array[X509Certificate], authType: String): Unit = {}
  override def checkServerTrusted(certs: Array[X509Certificate], authType: String): Unit = {}
}
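For reference, I'd expect to wire it in roughly like the sketch below (assuming the driver falls back to the JVM-default SSLContext; note this disables certificate validation entirely, so it is only for testing):
import java.security.SecureRandom
import javax.net.ssl.{SSLContext, TrustManager}

// Sketch: make a trust-everything SSLContext the JVM default (testing only).
val trustAllContext = SSLContext.getInstance("TLS")
trustAllContext.init(null, Array[TrustManager](new TrustAllX509TrustManager), new SecureRandom())
SSLContext.setDefault(trustAllContext)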
The versions of the environment I'm using:
Spark: 2.2.0
Mongo: 3.4
Any help will be appreciated.
Thanks!

This is the same as making any other SSL connection: import your certificate into a keystore and point the JVM at that keystore with the code below.
System.setProperty("javax.net.ssl.trustStore", "keystoreFilefullpath")
System.setProperty("javax.net.ssl.trustStorePassword", "password")
Once these properties are set, the SSL connection should work. If you are submitting from Spark, the keystore file must be shipped to the driver/executors using the --files option.
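A rough sketch of that approach inside the Spark job itself (the file name, password, and spark-submit invocation below are illustrative, not taken from the question):
// Ship the truststore with the job, e.g.:
//   spark-submit --files /local/path/truststore.jks ... your-app.jar
// Then resolve the distributed copy and point the JVM at it:
import org.apache.spark.SparkFiles

val trustStorePath = SparkFiles.get("truststore.jks")
System.setProperty("javax.net.ssl.trustStore", trustStorePath)
System.setProperty("javax.net.ssl.trustStorePassword", "password")
// For the executors, the same properties can instead be passed via
// spark.executor.extraJavaOptions=-Djavax.net.ssl.trustStore=truststore.jks ...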

Related

java.lang.NoClassDefFoundError: org/apache/flink/streaming/api/scala/StreamExecutionEnvironment

package com.knoldus

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object SocketWindowWordCount {

  def main(args: Array[String]): Unit = {

    var hostname: String = "localhost"
    var port: Int = 9000

    try {
      val params = ParameterTool.fromArgs(args)
      hostname = if (params.has("hostname")) params.get("hostname") else "localhost"
      port = params.getInt("port")
    } catch {
      case e: Exception => {
        System.err.println("No port specified. Please run 'SocketWindowWordCount " +
          "--hostname <hostname> --port <port>', where hostname (localhost by default) and port " +
          "is the address of the text server")
        System.err.println("To start a simple text server, run 'netcat -l <port>' " +
          "and type the input text into the command line")
        return
      }
    }

    // get the execution environment
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

    // get input data by connecting to the socket
    val text: DataStream[String] = env.socketTextStream(hostname, port, '\n')

    // parse the data, group it, window it, and aggregate the counts
    val windowCounts = text
      .flatMap { w => w.split("\\s") }
      .map { w => WordWithCount(w, 1) }
      .keyBy("word")
      .timeWindow(Time.seconds(5))
      .sum("count")

    // print the results with a single thread, rather than in parallel
    windowCounts.print().setParallelism(1)

    env.execute("Socket Window WordCount")
  }

  /** Data type for words with count */
  case class WordWithCount(word: String, count: Long)
}
When running this code in IntelliJ, I get this error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/streaming/api/scala/StreamExecutionEnvironment$
at com.knoldus.SocketWindowWordCount$.main(SocketWindowWordCount.scala:43)
at com.knoldus.SocketWindowWordCount.main(SocketWindowWordCount.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.streaming.api.scala.StreamExecutionEnvironment$
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 2 more
Select the menu item "Run" => "Edit Configurations...", then in the "Build and run" section select "Modify options" => Java => "Add dependencies with 'Provided' scope to classpath" in your local run configuration.
This way you don't have to remove the <scope>provided</scope>.
I resolved this by removing the <scope>provided</scope> which was present in the maven import.
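If the project were built with sbt instead of Maven, the equivalent of dropping the provided scope would be to declare the Flink dependencies at compile scope (a sketch; the version number is only an example):
// build.sbt (sketch) - keep Flink on the compile classpath so IntelliJ can run the job directly
val flinkVersion = "1.12.0"

libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala"           % flinkVersion,
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion
)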

javax.security.sasl.SaslException: Authentication failed: the server presented no authentication mechanisms in Wildfly 10.1

I am new to EJBs, and I am trying to perform remote invocations on stateless and stateful beans that I have deployed on a WildFly 10.1 pod in my project on the new OpenShift 3 (Origin). The code that I am using to initialize the client context looks like:
Properties clientProperties = new Properties();
clientProperties.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "false");
clientProperties.put("remote.connections", "default");
clientProperties.put("remote.connection.default.host", "localhost");
clientProperties.put("remote.connection.default.port", "8080");
clientProperties.put("remote.connection.default.username", "****");
clientProperties.put("remote.connection.default.password", "****"); clientProperties.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS", "false");
clientProperties.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT", "false");
EJBClientContext.setSelector(new ConfigBasedEJBClientContextSelector(new
PropertiesBasedEJBClientConfiguration(clientProperties)));
Properties contextProperties = new Properties();
contextProperties.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
contextProperties.put(Context.SECURITY_PRINCIPAL, "****"); //username
contextProperties.put(Context.SECURITY_CREDENTIALS, "****"); //password
Context context = new InitialContext(contextProperties);
String appName = "CloudEAR";
String moduleName = "CloudEjb";
String distinctName = "";
String beanName = "Calculator";
String qualifiedRemoteView = "cloudEJB.view.CalculatorRemote";
String lookupString = "ejb:" + appName + "/" + moduleName + "/" + distinctName + "/" + beanName + "!" + qualifiedRemoteView;
Calculator calculator = (CalculatorRemote) context.lookup(lookupString);
int sum = calculator.sum(10, 10);
And the error message that I get is:
WARN: Could not register a EJB receiver for connection to localhost:8080
javax.security.sasl.SaslException: Authentication failed: the server presented no authentication mechanisms
at org.jboss.remoting3.remote.ClientConnectionOpenListener$Capabilities.handleEvent(ClientConnectionOpenListener.java:378)
at org.jboss.remoting3.remote.ClientConnectionOpenListener$Capabilities.handleEvent(ClientConnectionOpenListener.java:240)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.xnio.channels.TranslatingSuspendableChannel.handleReadable(TranslatingSuspendableChannel.java:198)
at org.xnio.channels.TranslatingSuspendableChannel$1.handleEvent(TranslatingSuspendableChannel.java:112)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.xnio.ChannelListeners$DelegatingChannelListener.handleEvent(ChannelListeners.java:1092)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:567)
at ...asynchronous invocation...(Unknown Source)
at org.jboss.remoting3.EndpointImpl.doConnect(EndpointImpl.java:272)
at org.jboss.remoting3.EndpointImpl.connect(EndpointImpl.java:388)
Initially I tried using the "jboss-ejb-client.properties" file, but that wasn't even able to make the remote connection. Now I am manually creating and configuring the EJBClientContext, which at least connects successfully to the remote server, but the invocation fails because of authentication failures.
I remember that we used to solve this issue by removing the "security realm" argument in the "standalone.xml" file in older versions of OpenShift; however, I am not able to find that file in the new version anymore. I have been looking at concepts such as secrets, volumes, etc., but I really don't have a clear understanding of how this works. When I create a new secret and try to associate it with my pod, the new deployment procedure fails. I would really appreciate any help.

No FileSystem for scheme: cos

I'm trying to connect to IBM Cloud Object Storage from IBM Data Science Experience:
access_key = 'XXX'
secret_key = 'XXX'
bucket = 'mybucket'
host = 'lon.ibmselect.objstor.com'
service = 'mycos'
sqlCxt = SQLContext(sc)
hconf = sc._jsc.hadoopConfiguration()
hconf.set('fs.cos.myCos.access.key', access_key)
hconf.set('fs.cos.myCos.endpoint', 'http://' + host)
hconf.set('fs.cos.myCos.secret.key', secret_key)
hconf.set('fs.cos.service.v2.signer.type', 'false')
obj = 'mydata.tsv.gz'
rdd = sc.textFile('cos://{0}.{1}/{2}'.format(bucket, service, obj))
print(rdd.count())
This returns:
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.io.IOException: No FileSystem for scheme: cos
I'm guessing I need to use the 'cos' scheme based on the stocator docs. However, the error suggests stocator isn't available or is an old version?
Any ideas?
Update 1:
I have also tried the following:
sqlCxt = SQLContext(sc)
hconf = sc._jsc.hadoopConfiguration()
hconf.set('fs.cos.impl', 'com.ibm.stocator.fs.ObjectStoreFileSystem')
hconf.set('fs.stocator.scheme.list', 'cos')
hconf.set('fs.stocator.cos.impl', 'com.ibm.stocator.fs.cos.COSAPIClient')
hconf.set('fs.stocator.cos.scheme', 'cos')
hconf.set('fs.cos.mycos.access.key', access_key)
hconf.set('fs.cos.mycos.endpoint', 'http://' + host)
hconf.set('fs.cos.mycos.secret.key', secret_key)
hconf.set('fs.cos.service.v2.signer.type', 'false')
service = 'mycos'
obj = 'mydata.tsv.gz'
rdd = sc.textFile('cos://{0}.{1}/{2}'.format(bucket, service, obj))
print(rdd.count())
However, this time the response was:
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.io.IOException: No object store for: cos
at com.ibm.stocator.fs.ObjectStoreVisitor.getStoreClient(ObjectStoreVisitor.java:121)
...
Caused by: java.lang.ClassNotFoundException: com.ibm.stocator.fs.cos.COSAPIClient
The latest version of Stocator (v1.0.9), which supports the fs.cos scheme, is not yet deployed on Spark as a Service (it will be soon). Please use the Stocator "s3d" scheme to connect to your COS.
Example:
endpoint = 'endpointXXX'
access_key = 'XXX'
secret_key = 'XXX'
prefix = "fs.s3d.service"
hconf = sc._jsc.hadoopConfiguration()
hconf.set(prefix + ".endpoint", endpoint)
hconf.set(prefix + ".access.key", access_key)
hconf.set(prefix + ".secret.key", secret_key)
bucket = 'mybucket'
obj = 'mydata.tsv.gz'
rdd = sc.textFile('s3d://{0}.service/{1}'.format(bucket, obj))
rdd.count()
Alternatively, you can use ibmos2spark. The lib is already installed on our service. Example:
import ibmos2spark
credentials = {
'endpoint': 'endpointXXXX',
'access_key': 'XXXX',
'secret_key': 'XXXX'
}
configuration_name = 'os_configs' # any string you want
cos = ibmos2spark.CloudObjectStorage(sc, credentials, configuration_name)
bucket = 'mybucket'
obj = 'mydata.tsv.gz'
rdd = sc.textFile(cos.url(obj, bucket))
rdd.count()
Stocator is on the classpath for Spark 2.0 and 2.1 kernels, but the cos scheme is not configured. You can access the config by executing the following in a Python notebook:
!cat $SPARK_CONF_DIR/core-site.xml
Look for the property fs.stocator.scheme.list. What I currently see is:
<property>
<name>fs.stocator.scheme.list</name>
<value>swift2d,swift,s3d</value>
</property>
I recommend that you raise a feature request against DSX to support the cos scheme.
It looks like the cos driver is not properly initialized. Try this configuration:
hconf.set('fs.cos.impl', 'com.ibm.stocator.fs.ObjectStoreFileSystem')
hconf.set('fs.stocator.scheme.list', 'cos')
hconf.set('fs.stocator.cos.impl', 'com.ibm.stocator.fs.cos.COSAPIClient')
hconf.set('fs.stocator.cos.scheme', 'cos')
hconf.set('fs.cos.mycos.access.key', access_key)
hconf.set('fs.cos.mycos.endpoint', 'http://' + host)
hconf.set('fs.cos.mycos.secret.key', secret_key)
hconf.set('fs.cos.service.v2.signer.type', 'false')
UPDATE 1:
You also need to ensure the Stocator classes are on the classpath. You can use the packages system by executing pyspark in the following way:
./bin/pyspark --packages com.ibm.stocator:stocator:1.0.24
This works with both the swift2d and cos schemes.
UPDATE 2:
Just follow the Stocator documentation (https://github.com/CODAIT/stocator). It contains all the details on how to install it, which branch to use, etc.
I found the same issue, and to solve it I just changed environments:
Within IBM Watson Studio, if you start a Jupyter notebook in an environment without a pre-configured Spark cluster, then you get that error. Installing PySpark is not enough.
Instead, if you start a notebook with a Spark cluster available, you will be just fine.
You have to set .config("spark.hadoop.fs.stocator.scheme.list", "cos") along with some other fs.cos... configurations.
Here's an end-to-end code example that works (tested with pyspark==2.3.2 and Python 3.7.3):
from pyspark.sql import SparkSession
stocator_jar = '/path/to/stocator-1.1.2-SNAPSHOT-IBM-SDK.jar'
cos_instance_name = '<myCosInstanceName>'
bucket_name = '<bucketName>'
s3_region = '<region>'
cos_iam_api_key = '*******'
iam_service_id = 'crn:v1:bluemix:public:iam-identity::<****************>'
spark_builder = (
    SparkSession
    .builder
    .appName('test_app'))
spark_builder.config('spark.driver.extraClassPath', stocator_jar)
spark_builder.config('spark.executor.extraClassPath', stocator_jar)
spark_builder.config(f"fs.cos.{cos_instance_name}.iam.api.key", cos_iam_api_key)
spark_builder.config(f"fs.cos.{cos_instance_name}.endpoint", f"s3.{s3_region}.cloud-object-storage.appdomain.cloud")
spark_builder.config(f"fs.cos.{cos_instance_name}.iam.service.id", iam_servicce_id)
spark_builder.config("spark.hadoop.fs.stocator.scheme.list", "cos")
spark_builder.config("spark.hadoop.fs.cos.impl", "com.ibm.stocator.fs.ObjectStoreFileSystem")
spark_builder.config("fs.stocator.cos.impl", "com.ibm.stocator.fs.cos.COSAPIClient")
spark_builder.config("fs.stocator.cos.scheme", "cos")
spark_sess = spark_builder.getOrCreate()
dataset = spark_sess.range(1, 10)
dataset = dataset.withColumnRenamed('id', 'user_idx')
dataset.repartition(1).write.csv(
    f'cos://{bucket_name}.{cos_instance_name}/test.csv',
    mode='overwrite',
    header=True)
spark_sess.stop()
print('done!')

AWS: InvalidSignature exception while adding record

An InvalidSignatureException occurs when trying to add a user record using the Kinesis Producer Library.
AWS_JAVA_SDK_VERSION=1.10.26
AWS_KINESIS_PRODUCER_VERSION=0.10.1
ERROR:
PutRecords failed: {"__type":"InvalidSignatureException","message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method.
SCALA KINESIS PRODUCER CODE
private val configuration: KinesisProducerConfiguration = new KinesisProducerConfiguration
val credentialsProvider: AWSCredentialsProvider = AwsUtil.getAwsCredentials(config.awsAccessKey, config.awsSecretKey)
configuration.setCredentialsProvider(credentialsProvider)
configuration.setRecordMaxBufferedTime(config.timeLimit)
configuration.setAggregationMaxCount(1)
configuration.setRegion(config.streamRegion)
configuration.setMetricsLevel("none")
private val kinesisProducer = new KinesisProducer(configuration)
kinesisProducer.addUserRecord(streamName, key, eventBytes)
The above code is not working. However, I can add records to the Kinesis stream through the AWS CLI from the terminal, and via the KinesisClient code specified below.
private def createKinesisClient = {
val accessKey = config.awsAccessKey
val secretKey = config.awsSecretKey
val credentialsProvider: AWSCredentialsProvider = AwsUtil.getAwsCredentials(accessKey, secretKey)
val client = new AmazonKinesisClient(credentialsProvider)
client.setEndpoint(config.streamEndpoint)
client
}
This happens because your VM/PC/server clock might be skewed.
If you're running ubuntu, try updating your system time:
sudo ntpdate ntp.ubuntu.com
If you are using docker-machine on a Mac, you can resolve it with this command:
docker-machine ssh default 'sudo ntpclient -s -h pool.ntp.org'

AKKA remote (with SSL) can't find keystore/truststore files on classpath

I'm trying to configure an Akka remote SSL connection to use my keystore and truststore files, and I want it to be able to find them on the classpath.
I tried to set application.conf to:
...
remote.netty.ssl = {
enable = on
key-store = "keystore"
key-store-password = "passwd"
trust-store = "truststore"
trust-store-password = "passwd"
protocol = "TLSv1"
random-number-generator = "AES128CounterSecureRNG"
enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
}
...
This works fine if the keystore and truststore files are in the current directory. In my application these files get packaged into WAR and JAR archives, so I'd like to read them from the classpath.
I tried to use getResource("keystore") in application.conf as described here, without any luck: Config reads it literally as a string.
I also tried to parse the config from a String and force it to read the value:
val conf: Config = ConfigFactory parseString (s"""
...
"${getClass.getClassLoader.getResource("keystore").getPath}"
...""")
In this case it finds the proper path on the classpath, file://some_dir/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore, which is exactly where it's located (inside the jar). However, the underlying Netty SSL transport can't open the file given this path, and I get:
Oct 03, 2013 1:02:48 PM org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink
WARNING: Failed to initialize an accepted socket.
45a13eb9-6cb1-46a7-a789-e48da9997f0fakka.remote.RemoteTransportException: Server SSL connection could not be established because key store could not be loaded
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:113)
at akka.remote.netty.NettySSLSupport$.initializeServerSSL(NettySSLSupport.scala:130)
at akka.remote.netty.NettySSLSupport$.apply(NettySSLSupport.scala:27)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$.defaultStack(NettyRemoteSupport.scala:74)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:277)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:242)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.FileNotFoundException: file:/some_dir/server/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:97)
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:118)
... 10 more
I wonder if there is any way to configure this in Akka without implementing a custom SSL transport. Maybe I should configure Netty in code?
Obviously I can hardcode the path or read it from an environment variable, but I would prefer a more flexible classpath solution.
I decided to look at akka.remote.netty.NettySSLSupport, where the exception is thrown from, and here is the code:
def initializeServerSSL(settings: NettySettings, log: LoggingAdapter): SslHandler = {
  log.debug("Server SSL is enabled, initialising ...")

  def constructServerContext(settings: NettySettings, log: LoggingAdapter, keyStorePath: String, keyStorePassword: String, protocol: String): Option[SSLContext] =
    try {
      val rng = initializeCustomSecureRandom(settings.SSLRandomNumberGenerator, settings.SSLRandomSource, log)
      val factory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
      factory.init({
        val keyStore = KeyStore.getInstance(KeyStore.getDefaultType)
        val fin = new FileInputStream(keyStorePath)
        try keyStore.load(fin, keyStorePassword.toCharArray) finally fin.close()
        keyStore
      }, keyStorePassword.toCharArray)
      Option(SSLContext.getInstance(protocol)) map { ctx ⇒ ctx.init(factory.getKeyManagers, null, rng); ctx }
    } catch {
      case e: FileNotFoundException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because key store could not be loaded", e)
      case e: IOException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because: " + e.getMessage, e)
      case e: GeneralSecurityException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because SSL context could not be constructed", e)
    }
It looks like it must be a plain filename (String) because that's what FileInputStream takes.
Any suggestions would be welcome!
I also got stuck on a similar issue and was getting similar errors. In my case I was trying to hit an HTTPS server with self-signed certificates using akka-http; with the following code I was able to get through:
val trustStoreConfig = TrustStoreConfig(None, Some("/etc/Project/keystore/my.cer")).withStoreType("PEM")
val trustManagerConfig = TrustManagerConfig().withTrustStoreConfigs(List(trustStoreConfig))
val badSslConfig = AkkaSSLConfig().mapSettings(s => s.withLoose(s.loose
  .withAcceptAnyCertificate(true)
  .withDisableHostnameVerification(true)
).withTrustManagerConfig(trustManagerConfig))
val badCtx = Http().createClientHttpsContext(badSslConfig)
Http().superPool[RequestTracker](badCtx)(httpMat)
At the time of writing this question there was no way to do it AFAIK. I'm closing this question but I welcome updates if new versions provide such functionality or if there are other ways to do that.
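One workaround, since the transport ultimately opens the keystore with a FileInputStream, is to copy the classpath resources into temporary files at startup and point the Akka config at those paths. A minimal sketch, assuming the resources are packaged as "keystore" and "truststore" inside the jar:
import java.io.File
import java.nio.file.{Files, StandardCopyOption}
import com.typesafe.config.ConfigFactory

// Copy a classpath resource to a temp file so it can be opened with FileInputStream.
def extractResource(name: String): String = {
  val in = getClass.getClassLoader.getResourceAsStream(name)
  require(in != null, s"resource $name not found on classpath")
  val tmp = File.createTempFile(name, ".jks")
  tmp.deleteOnExit()
  try Files.copy(in, tmp.toPath, StandardCopyOption.REPLACE_EXISTING) finally in.close()
  tmp.getAbsolutePath
}

val conf = ConfigFactory.parseString(
  s"""
     |akka.remote.netty.ssl.key-store = "${extractResource("keystore")}"
     |akka.remote.netty.ssl.trust-store = "${extractResource("truststore")}"
   """.stripMargin).withFallback(ConfigFactory.load())

// The resulting config can then be passed to ActorSystem("mySystem", conf).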