I am trying to create a verticle using a config.json file, and the behaviour I see does not match what I expect from the docs. I will describe the steps I've taken as best I can, but I have tried many variations of my verticle's startup steps, so I may not be 100% accurate. This is with Vert.x 3.7.0.
First, I have successfully used my config to launch my verticle when I include the config file in the expected location, conf/config.json:
{
  "database" : {
    "port" : 5432,
    "host" : "127.0.0.1",
    "name" : "linked",
    "user" : "postgres",
    "passwd" : "postgres",
    "connectionPoolSize" : 5
  },
  "chatListener" : {
    "port" : 8080,
    "host" : "localhost"
  }
}
and use the launcher to pass the config to start the verticle (pseudocode):
public static void main(String[] args){
    // preprocessing
    Launcher.executeCommand("run", "MyVerticle");
    ...
and
public static void main(String[] args){
    // preprocessing
    Launcher.executeCommand("run", "MyVerticle", "-conf", "conf/config.json");
    ...
both work correctly. My config is loaded and I can pull the data from config() inside my verticle:
JsonObject chatDbOpts = new JsonObject().put( "config", config().getJsonObject( "database" ) );
....
But when I pass the launcher a file reference that is not in the default location,
$ java -jar vert.jar -conf /path/to/config.json
it ignores my config and uses the built-in config, which is empty. Yet the debug output from the Vert.x config loader indicates it is using the default location:
conf/config.json
which it doesn't actually do either, because my config file is there yet its contents never arrive. So the config loader isn't loading from the default location when a different config is specified on the CLI.
So I changed the code to load the config in main and validated that the JSON file can be found and read. I then passed the file reference to the launcher but got the same behaviour, so I switched to using a DeploymentOptions object with deployVerticle.
Output from my preprocessing steps of loading the config and converting it to a JsonObject:
Command line arguments: [-conf, d:/cygwin64/home/rcoe/conf/config.json]
Launching application with the following config:
{
  "database" : {
    "port" : 5432,
    "host" : "127.0.0.1",
    "name" : "linked",
    "user" : "postgres",
    "passwd" : "postgres",
    "connectionPoolSize" : 5
  },
  "chatListener" : {
    "port" : 8080,
    "host" : "localhost"
  }
}
This JsonObject is used to create a DeploymentOptions reference:
DeploymentOptions options = new DeploymentOptions(jsonCfg);
Vertx.vertx().deployVerticle( getClass().getName(), options );
Didn't work.
So then I tried creating an empty DeploymentOptions reference and setting the config:
DeploymentOptions options = new DeploymentOptions();
Map<String,Object> m = new HashMap<>();
m.put("config", jsonObject);
JsonObject cfg = new JsonObject(m);
options.setConfig( cfg );
Vertx.vertx().deployVerticle( getClass().getName(), options );
which also fails to pass my desired config; instead, it uses the config from the default location.
Here's the output from the verticle's startup. It is using the conf/config.json file:
Config file path: conf\config.json, format:json
-Dio.netty.buffer.checkAccessible: true
-Dio.netty.buffer.checkBounds: true
Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#552c2b11
Config options:
{
  "port" : 5432,
  "host" : "not-a-real-host",
  "name" : "linked",
  "user" : "postgres",
  "passwd" : "postgres",
  "connectionPoolSize" : 5
}
versus the config that is given to the DeploymentOptions reference:
Launching application with the following config:
{
  "database" : {
    "port" : 5432,
    "host" : "127.0.0.1",
    "name" : "linked",
    "user" : "postgres",
    "passwd" : "postgres",
    "connectionPoolSize" : 5
  },
...
Anyway, I hope these steps make sense and show that I've tried a variety of methods to load a custom config. I have seen my config get passed into the Vert.x code responsible for invoking verticles, but by the time my verticle's start() method is called, my config is gone.
Thanks.
As usual, authoring a question leads to a better view of the problem. The solution, as I understand it, is to always create a map with a key called "config" whose value is the JsonObject you want to pass.
To deploy:
private void launch( final JsonObject jsonObject )
{
    DeploymentOptions options = new DeploymentOptions();
    Map<String, Object> m = new HashMap<>();
    m.put( "config", jsonObject );
    JsonObject cfg = new JsonObject( m );
    options.setConfig( cfg );
    Vertx.vertx().deployVerticle( MainVerticle.class.getName(), options );
}
@Override
public void start( final Future<Void> startFuture )
{
    ConfigRetriever cfgRetriever = ConfigRetriever.create( vertx.getDelegate() );
    cfgRetriever.getConfig( ar -> {
        try {
            if( ar.succeeded() ) {
                JsonObject config = ar.result();
                JsonObject cfg = config.getJsonObject( "config" );
                JsonObject chatDbOpts = cfg.getJsonObject( "database" );
                LOGGER.debug( "Launching ChatDbServiceVerticle with the following config:\n{}",
                    chatDbOpts.encodePrettily() );
                JsonObject chatHttpOpts = cfg.getJsonObject( "chatListener" );
                LOGGER.debug( "Launching HttpServerVerticle with the following config:\n{}",
                    chatHttpOpts.encodePrettily() );
                ...
produces the output:
Launching ChatDbServiceVerticle with the following config:
{
  "port" : 5432,
  "host" : "127.0.0.1",
  "name" : "linked",
  "user" : "postgres",
  "passwd" : "postgres",
  "connectionPoolSize" : 5
}
Launching HttpServerVerticle with the following config:
{
  "port" : 8080,
  "host" : "localhost"
}
But this raises the question: what is the point of the DeploymentOptions(JsonObject) constructor if config() ignores any object that can't be retrieved under that specific "config" key? It took stepping through the debugger to find this; there's no hint of this requirement in the docs: https://vertx.io/blog/vert-x-application-configuration/.
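For anyone hitting the same wall: my understanding is that the JsonObject passed to the DeploymentOptions(JsonObject) constructor describes the whole DeploymentOptions object (instances, worker, etc.), so the verticle's own configuration has to sit under a top-level "config" key. A sketch of the expected shape, reusing the fields from the config above:

```json
{
  "config" : {
    "database" : {
      "port" : 5432,
      "host" : "127.0.0.1"
    },
    "chatListener" : {
      "port" : 8080,
      "host" : "localhost"
    }
  }
}
```

With that shape, config() inside the verticle returns the inner object directly, and no manual "config" unwrapping is needed.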
I am new to Snowpark, recently released by Snowflake. I am using IntelliJ to build UDFs (user-defined functions), but I am struggling to connect to Snowflake through a proxy from IntelliJ. Below are a few things I have already tried:
putting the proxy settings in IntelliJ (under Preferences)
adding the proxy settings before building the session:
System.setProperty("https.useProxy", "true")
System.setProperty("http.proxyHost", "xxxxxxx")
System.setProperty("http.proxyPort", "443")
System.setProperty("no_proxy", "snowflakecomputing.com")
Below is my code:
val configs = Map(
  "URL" -> "xxxxx.snowflakecomputing.com:443",
  "USER" -> "xxx",
  "PASSWORD" -> "xxxx",
  "ROLE" -> "ROLE_xxxxx",
  "WAREHOUSE" -> "xxxx",
  "DB" -> "xxxx",
  "SCHEMA" -> "xxxx"
)
val session = Session.builder.configs(configs).create
Snowpark uses the JDBC driver to connect to Snowflake, so the proxy properties from the JDBC connector can be used here as well.
In your Map add:
"proxyHost" -> "proxyHost Value"
"proxyPort" -> "proxyPort Value"
More information here
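Putting that together with the builder code from the question, the map would look something like this (the proxy host and port values are placeholders to replace with your own):

```scala
val configs = Map(
  "URL" -> "xxxxx.snowflakecomputing.com:443",
  "USER" -> "xxx",
  "PASSWORD" -> "xxxx",
  "ROLE" -> "ROLE_xxxxx",
  "WAREHOUSE" -> "xxxx",
  "DB" -> "xxxx",
  "SCHEMA" -> "xxxx",
  "proxyHost" -> "my.proxy.host", // placeholder
  "proxyPort" -> "8080"           // placeholder
)
val session = Session.builder.configs(configs).create
```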
If you're specifying a proxy by setting Java system properties, you can call System.setProperty, like:
System.setProperty("http.useProxy", "true");
System.setProperty("http.proxyHost", "proxyHost Value");
System.setProperty("http.proxyPort", "proxyPort Value");
System.setProperty("https.proxyHost", "proxyHost HTTPS Value");
System.setProperty("https.proxyPort", "proxyPort HTTPS Value");
or directly to JVM like:
-Dhttp.useProxy=true
-Dhttps.proxyHost=<proxy_host>
-Dhttp.proxyHost=<proxy_host>
-Dhttps.proxyPort=<proxy_port>
-Dhttp.proxyPort=<proxy_port>
More information here
I am facing a problem running a Micronaut application packed as a native image.
I created a simple demo application with micronaut-data-hibernate-jpa; based on the documentation I need to add a DB connection pool, so I chose Hikari and added the micronaut-jdbc-hikari dependency.
I use Maven as the build tool and added the native-image-maven-plugin to build the native image.
native-image.properties
Args = -H:IncludeResources=logback.xml|application.yml|bootstrap.yml \
-H:Name=demo \
-H:Class=com.example.Application \
-H:+TraceClassInitialization \
--initialize-at-run-time=org.apache.commons.logging.LogAdapter$Log4jLog,org.hibernate.secure.internal.StandardJaccServiceImpl,org.postgresql.sspi.SSPIClient,org.hibernate.dialect.OracleTypesHelper \
--initialize-at-build-time=org.postgresql.Driver,org.postgresql.util.SharedTimer,org.hibernate.engine.spi.EffectiveEntityGraph,org.hibernate.engine.spi.LoadQueryInfluencers
When I run the application on the JVM, everything works. But when I run the same application packed as a native image, I get this error:
Caused by: java.lang.IllegalArgumentException: Class com.zaxxer.hikari.util.ConcurrentBag$IConcurrentBagEntry[] is instantiated reflectively but was never registered. Register the class by using org.graalvm.nativeimage.hosted.RuntimeReflection
at com.oracle.svm.core.graal.snippets.SubstrateAllocationSnippets.arrayHubErrorStub(SubstrateAllocationSnippets.java:280)
at java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
at java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
at java.lang.ThreadLocal.get(ThreadLocal.java:172)
at com.zaxxer.hikari.util.ConcurrentBag.borrow(ConcurrentBag.java:129)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:179)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:161)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:38)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:104)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:134)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getConnectionForTransactionManagement(LogicalConnectionManagedImpl.java:250)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.begin(LogicalConnectionManagedImpl.java:258)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.begin(JdbcResourceLocalTransactionCoordinatorImpl.java:246)
at org.hibernate.engine.transaction.internal.TransactionImpl.begin(TransactionImpl.java:83)
at org.hibernate.internal.AbstractSharedSessionContract.beginTransaction(AbstractSharedSessionContract.java:471)
at io.micronaut.transaction.hibernate5.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:352)
... 99 common frames omitted
UPDATE/SOLUTION
Based on @Airy's answer I have added a reflection config file, referenced from native-image.properties. In my case it looks like this:
[
  {
    "name" : "com.zaxxer.hikari.util.ConcurrentBag",
    "allDeclaredConstructors" : true,
    "allPublicConstructors" : true,
    "allDeclaredMethods" : true,
    "allPublicMethods" : true,
    "allDeclaredClasses" : true,
    "allPublicClasses" : true
  },
  {
    "name" : "com.zaxxer.hikari.pool.PoolEntry",
    "allDeclaredConstructors" : true,
    "allPublicConstructors" : true,
    "allDeclaredMethods" : true,
    "allPublicMethods" : true,
    "allDeclaredClasses" : true,
    "allPublicClasses" : true
  }
]
Another solution is to change the scope of the Hikari dependency to compile and add the missing fields/classes to the @TypeHint annotation like so:
@TypeHint(value = {
    PostgreSQL95Dialect.class,
    SessionFactoryImpl.class,
    org.postgresql.PGProperty.class,
    UUIDGenerator.class,
    com.zaxxer.hikari.util.ConcurrentBag.class, // In my case I just added this line
}, accessType = {TypeHint.AccessType.ALL_PUBLIC})
You can find the whole example here.
You should declare a reflection configuration in your native-image.properties with -H:ReflectionConfigurationFiles=/path/to/reflectconfig
Here is the documentation for doing so
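As a sketch, assuming the reflection config above is saved as reflect-config.json next to native-image.properties (the ${.} token resolves to the directory containing the properties file), the Args entry from the question could be extended like so:

```
Args = -H:ReflectionConfigurationFiles=${.}/reflect-config.json \
       -H:IncludeResources=logback.xml|application.yml|bootstrap.yml \
       -H:Name=demo \
       -H:Class=com.example.Application
```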
I added database.runMigration: true to my build.gradle file but I'm getting this error when running deployNodes. What's causing this?
[ERROR] 14:05:21+0200 [main] subcommands.ValidateConfigurationCli.logConfigurationErrors$node - Error(s) while parsing node configuration:
- for path: "database.runMigration": Unknown property 'runMigration'
Here's my build.gradle's deployNodes task:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    directory "./build/nodes"
    ext.drivers = ['.jdbc_driver']
    ext.extraConfig = [
        'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
        'dataSourceProperties.dataSource.user' : "corda",
        'dataSourceProperties.dataSource.password' : "corda1234",
        'database.transactionIsolationLevel' : 'READ_COMMITTED',
        'database.runMigration' : "true"
    ]
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp project(':cordapp-contracts-states')
        cordapp project(':cordapp')
    }
    node {
        name "O=HUS,L=Helsinki,C=FI"
        p2pPort 10008
        rpcSettings {
            address "localhost:10009"
            adminAddress "localhost:10049"
        }
        webPort 10017
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
        extraConfig = ext.extraConfig + [
            'dataSourceProperties.dataSource.url' :
                "jdbc:postgresql://localhost:5432/hus_db?currentSchema=corda_schema"
        ]
        drivers = ext.drivers
    }
}
database.runMigration is a Corda Enterprise-only property.
To control database migration in Corda Open Source, use initialiseSchema instead.
initialiseSchema
A boolean which indicates whether to update the database schema at startup (or create the schema when the node starts for the first time). If set to false, on startup the node will validate that it is running against a compatible database schema.
Default: true
You may refer to the link below for other database properties you can set.
https://docs.corda.net/corda-configuration-file.html
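Applied to the extraConfig block from the question, the Enterprise-only entry would be swapped for initialiseSchema; an untested sketch:

```groovy
ext.extraConfig = [
    'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
    'dataSourceProperties.dataSource.user'     : "corda",
    'dataSourceProperties.dataSource.password' : "corda1234",
    'database.transactionIsolationLevel'       : 'READ_COMMITTED',
    'database.initialiseSchema'                : "true"
]
```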
I am trying to use Cygnus with MongoDB, but no data has been persisted in the database.
Here is the notification got in cygnus:
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Starting transaction (1437482681-118-0000000000)
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "55a73819d0c457bb20b1d467", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "enocean", "isPattern" : "false", "id" : "enocean:myButtonA", "attributes" : [ { "name" : "ButtonValue", "type" : "", "value" : "ON", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-07-20T21:29:56.509293Z" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Event put in the channel (id=1454120446, ttl=10)
Here is my agent configuration:
cygnusagent.sources = http-source
cygnusagent.sinks = OrionMongoSink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /home/egm_demo/usr/fiware-cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
#cygnusagent.sinks.mongo-sink.db_prefix = kura
# prefix for the MongoDB collections
#cygnusagent.sinks.mongo-sink.collection_prefix = button
# true if collection names are based on a hash, false for human-readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# ============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Here is my rule :
{
  "grouping_rules": [
    {
      "id": 1,
      "fields": [
        "button"
      ],
      "regex": ".*",
      "destination": "kura",
      "fiware_service_path": "/kuraspath"
    }
  ]
}
Any ideas of what I have missed? Thanks in advance for your help!
This configuration parameter is wrong:
cygnusagent.sinks = OrionMongoSink
According to your configuration, it must be mongo-sink (I mean, you are configuring a Mongo sink named mongo-sink in lines such as cygnusagent.sinks.mongo-sink.type).
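In other words, the sinks declaration should name the sink the same way the rest of the configuration does:

```
cygnusagent.sinks = mongo-sink
```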
In addition, I would recommend not using the grouping rules feature; it is an advanced feature for sending the data to a collection different from the default one, and at first I would play with the default behaviour. Thus, my recommendation is to leave the path to the file in cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file, but comment out all the JSON within it :)
I want the Grails Mail plugin to read configuration properties from an external properties file on the classpath. I have added this line to Config.groovy:
grails.config.locations = [
"classpath:app-${grails.util.Environment.current.name}-config.properties"]
and I have put the properties in that file like this:
grails.mail.host = smtp.gmail.com
grails.mail.port = 465
grails.mail.username = username
grails.mail.password = password
All of this works fine. The problem is that the Mail plugin requires one more property, which is of type Map. If we put that property in Config.groovy, it looks like this:
grails {
    mail {
        props = ["mail.smtp.auth" : "true",
                 "mail.smtp.socketFactory.port" : "465",
                 "mail.smtp.socketFactory.class" : "javax.net.ssl.SSLSocketFactory",
                 "mail.smtp.socketFactory.fallback": "false"]
    }
}
Now if I put this in the external file as follows,
grails.mail.props = ["mail.smtp.auth" : "true",
"mail.smtp.socketFactory.port" : "465",
"mail.smtp.socketFactory.class" : "javax.net.ssl.SSLSocketFactory",
"mail.smtp.socketFactory.fallback": "false"]
then it does not work. I need to read the props map from the external file. I have searched a lot, but in vain. Help is appreciated.
You can load configuration from an external *.groovy file, where you can have maps etc. just like in Config.groovy. Create, for example, mail-config.groovy with content like this:
grails {
    mail {
        host = "smtp.gmail.com"
        port = 465
        username = "username"
        password = "password"
        props = ["mail.smtp.auth" : "true",
                 "mail.smtp.socketFactory.port" : "465",
                 "mail.smtp.socketFactory.class" : "javax.net.ssl.SSLSocketFactory",
                 "mail.smtp.socketFactory.fallback": "false"]
    }
}
And point Grails to use it:
grails.config.locations = ["classpath:mail-config.groovy"]
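If you want to keep the per-environment properties file from the question alongside the new groovy file, both can be listed together (file name pattern taken from the original question):

```groovy
grails.config.locations = [
    "classpath:app-${grails.util.Environment.current.name}-config.properties",
    "classpath:mail-config.groovy"
]
```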