Externalizing properties for Mail plugin in Grails - email

I want the Grails Mail plugin to read configuration properties from an external properties file on the classpath. I have added this line in Config.groovy:
grails.config.locations = [
"classpath:app-${grails.util.Environment.current.name}-config.properties"]
and I have put the properties in that file like this:
grails.mail.host = smtp.gmail.com
grails.mail.port = 465
grails.mail.username = username
grails.mail.password = password
All of this works fine. The problem is that the Mail plugin requires one more property that is of type Map. If I put that property in Config.groovy, it looks like this:
grails {
    mail {
        props = ["mail.smtp.auth"                  : "true",
                 "mail.smtp.socketFactory.port"    : "465",
                 "mail.smtp.socketFactory.class"   : "javax.net.ssl.SSLSocketFactory",
                 "mail.smtp.socketFactory.fallback": "false"]
    }
}
Now if I put this in the external file as follows,
grails.mail.props = ["mail.smtp.auth"                  : "true",
                     "mail.smtp.socketFactory.port"    : "465",
                     "mail.smtp.socketFactory.class"   : "javax.net.ssl.SSLSocketFactory",
                     "mail.smtp.socketFactory.fallback": "false"]
then it does not work. I need to read the props map from the external file. I have searched a lot but in vain. Help is appreciated.

You can load configuration from an external *.groovy file, which can contain maps and other structures just like Config.groovy. For example, create mail-config.groovy with the following content:
grails {
    mail {
        host = "smtp.gmail.com"
        port = 465
        username = "username"
        password = "password"
        props = ["mail.smtp.auth"                  : "true",
                 "mail.smtp.socketFactory.port"    : "465",
                 "mail.smtp.socketFactory.class"   : "javax.net.ssl.SSLSocketFactory",
                 "mail.smtp.socketFactory.fallback": "false"]
    }
}
And point Grails to use it:
grails.config.locations = ["classpath:mail-config.groovy"]
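If you prefer to keep the simple key/value settings in the per-environment properties file and move only the props map into the Groovy file, both locations can be listed together. A minimal sketch, assuming both files are on the classpath and named as above:
// Config.groovy: load the per-environment properties file plus the Groovy file
// that holds the props map
grails.config.locations = [
        "classpath:app-${grails.util.Environment.current.name}-config.properties",
        "classpath:mail-config.groovy"
]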

Related

How to use proxy with Snowpark session builder to connect to snowflake

I am new to Snowpark, recently released by Snowflake. I am using IntelliJ to build UDFs (user-defined functions). However, I am struggling to connect to Snowflake through a proxy from IntelliJ. Below are a few things I have already tried:
setting the proxy in IntelliJ (under Preferences)
adding the proxy system properties before building the session:
System.setProperty("https.useProxy", "true")
System.setProperty("http.proxyHost", "xxxxxxx")
System.setProperty("http.proxyPort", "443")
System.setProperty("no_proxy", "snowflakecomputing.com")
Below is my code:
val configs = Map(
  "URL"       -> "xxxxx.snowflakecomputing.com:443",
  "USER"      -> "xxx",
  "PASSWORD"  -> "xxxx",
  "ROLE"      -> "ROLE_xxxxx",
  "WAREHOUSE" -> "xxxx",
  "DB"        -> "xxxx",
  "SCHEMA"    -> "xxxx"
)
val session = Session.builder.configs(configs).create
Snowpark uses the JDBC driver to connect to Snowflake, so the proxy properties from the JDBC connector can be used here as well.
In your Map add:
"proxyHost" -> "proxyHost Value"
"proxyPort" -> "proxyPort Value"
More information here
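For clarity, here is a minimal sketch of the same configs Map with those proxy parameters added (placeholder values; depending on your network the JDBC driver may need further proxy parameters, so check the Snowflake JDBC documentation):
// Sketch: the configs Map extended with the JDBC proxy connection parameters
val configs = Map(
  "URL"       -> "xxxxx.snowflakecomputing.com:443",
  "USER"      -> "xxx",
  "PASSWORD"  -> "xxxx",
  "ROLE"      -> "ROLE_xxxxx",
  "WAREHOUSE" -> "xxxx",
  "DB"        -> "xxxx",
  "SCHEMA"    -> "xxxx",
  "proxyHost" -> "proxyHost Value",
  "proxyPort" -> "proxyPort Value"
)
val session = Session.builder.configs(configs).create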
If you're specifying a proxy by setting Java system properties, then you can call System.setProperty, like:
System.setProperty("http.useProxy", "true");
System.setProperty("http.proxyHost", "proxyHost Value");
System.setProperty("http.proxyPort", "proxyPort Value");
System.setProperty("https.proxyHost", "proxyHost HTTPS Value");
System.setProperty("https.proxyPort", ""proxyPort HTTPS Value"")
or pass them directly to the JVM like:
-Dhttp.useProxy=true
-Dhttps.proxyHost=<proxy_host>
-Dhttp.proxyHost=<proxy_host>
-Dhttps.proxyPort=<proxy_port>
-Dhttp.proxyPort=<proxy_port>
More information here

Having trouble making sense of vert.x config loading

I am trying to create a verticle using a config.json and am not seeing the behaviour the docs lead me to expect. I will attempt to explain the steps I've taken as best I can, but I have tried many variations of my verticle's startup steps, so I may not be 100% accurate. This is using Vert.x 3.7.0.
First, I have successfully used my config to launch my verticle when I include the config file in the expected location, conf/config.json:
{
  "database" : {
    "port" : 5432,
    "host" : "127.0.0.1",
    "name" : "linked",
    "user" : "postgres",
    "passwd" : "postgres",
    "connectionPoolSize" : 5
  },
  "chatListener" : {
    "port" : 8080,
    "host" : "localhost"
  }
}
and use the launcher to pass the config to start the verticle (pseudocode):
public static void main(String[] args){
//preprocessing
Launcher.executeCommand("run", "MyVerticle")
...
and
public static void main(String[] args){
//preprocessing
Launcher.executeCommand("run", "MyVerticle -config conf/config.json")
...
both work correctly. My config is loaded and I can pull the data from config() inside my verticle:
JsonObject chatDbOpts = new JsonObject().put( "config", config.getJsonObject( "database" ) );
....
But when I pass a file reference that is not the default location to the launcher,
$ java -jar vert.jar -config /path/to/config.json
it ignores it and falls back to the built-in config, which is empty. Yet the debug output from the Vert.x config loader indicates it is using the default location:
conf/config.json
which it doesn't actually load, because my config file is there. So the config loader isn't loading from the default location when a different config is specified on the CLI.
So I changed the code to read the config in main and validated that the JSON file can be found and read. Then I passed the file reference to the launcher but got the same behaviour. So I switched to using a DeploymentOptions object with deployVerticle.
Output from my preprocessor steps of loading the config and converting it to a JsonObject:
Command line arguments: [-conf, d:/cygwin64/home/rcoe/conf/config.json]
Launching application with the following config:
{
"database" : {
"port" : 5432,
"host" : "127.0.0.1",
"name" : "linked",
"user" : "postgres",
"passwd" : "postgres",
"connectionPoolSize" : 5
},
"chatListener" : {
"port" : 8080,
"host" : "localhost"
}
}
This JsonObject is used to create a DeploymentOptions reference:
DeploymentOptions options = new DeploymentOptions(jsonCfg);
Vertx.vertx().deployVerticle( getClass().getName(), options );
Didn't work.
So then I tried creating an empty DeploymentOptions reference and setting the config:
DeploymentOptions options = new DeploymentOptions();
Map<String,Object> m = new HashMap<>();
m.put("config", jsonObject);
JsonObject cfg = new JsonObject(m);
options.setConfig( cfg );
Vertx.vertx().deployVerticle( getClass().getName(), options );
which also fails to pass my desired config. Instead, it uses config from the default location.
Here's the output from the verticle's startup. It is using the conf/config.json file:
Config file path: conf\config.json, format:json
-Dio.netty.buffer.checkAccessible: true
-Dio.netty.buffer.checkBounds: true
Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#552c2b11
Config options:
{
"port" : 5432,
"host" : "not-a-real-host",
"name" : "linked",
"user" : "postgres",
"passwd" : "postgres",
"connectionPoolSize" : 5
}
versus the config that is given to the DeploymentOptions reference:
Launching application with the following config:
{
"database" : {
"port" : 5432,
"host" : "127.0.0.1",
"name" : "linked",
"user" : "postgres",
"passwd" : "postgres",
"connectionPoolSize" : 5
},
...
Anyway, I hope these steps make sense and show that I've tried a variety of methods to load a custom config. I have seen my config get passed into the Vert.x code responsible for invoking verticles, but by the time my verticle's start() method is called, my config is gone.
Thanks.
As usual, authoring a question leads to a better view of the problem. The solution, as I understand it, is to always create a map with a key called "config" whose value is the JsonObject you want to pass.
To deploy:
private void launch( final JsonObject jsonObject )
{
    DeploymentOptions options = new DeploymentOptions();
    Map<String, Object> m = new HashMap<>();
    m.put( "config", jsonObject );
    JsonObject cfg = new JsonObject( m );
    options.setConfig( cfg );
    Vertx.vertx().deployVerticle( MainVerticle.class.getName(), options );
}

@Override
public void start( final Future<Void> startFuture )
{
    ConfigRetriever cfgRetriever = ConfigRetriever.create( vertx.getDelegate() );
    cfgRetriever.getConfig( ar -> {
        try {
            if( ar.succeeded() ) {
                JsonObject config = ar.result();
                JsonObject cfg = config.getJsonObject( "config" );
                JsonObject chatDbOpts = cfg.getJsonObject( "database" );
                LOGGER.debug( "Launching ChatDbServiceVerticle with the following config:\n{}",
                        chatDbOpts.encodePrettily() );
                JsonObject chatHttpOpts = cfg.getJsonObject( "chatListener" );
                LOGGER.debug( "Launching HttpServerVerticle with the following config:\n{}",
                        chatHttpOpts.encodePrettily() );
                ...
produces the output:
Launching ChatDbServiceVerticle with the following config:
{
"port" : 5432,
"host" : "127.0.0.1",
"name" : "linked",
"user" : "postgres",
"passwd" : "postgres",
"connectionPoolSize" : 5
}
Launching HttpServerVerticle with the following config:
{
"port" : 8080,
"host" : "localhost"
}
But this raises the question of what the point of the DeploymentOptions(JsonObject) constructor is if config() ignores any object that isn't stored under that specific key. It required stepping through the debugger to find this. There's no hint of this requirement in the docs: https://vertx.io/blog/vert-x-application-configuration/.
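A shorter route, as I read it, is to do that wrapping yourself before calling the DeploymentOptions(JsonObject) constructor. A minimal sketch, assuming jsonCfg is the JsonObject loaded from the external file:
// Sketch: wrap the application config under the "config" key, since the
// DeploymentOptions(JsonObject) constructor treats the JSON as the whole
// DeploymentOptions and exposes only its "config" field via config().
JsonObject wrapper = new JsonObject().put("config", jsonCfg);
DeploymentOptions options = new DeploymentOptions(wrapper);
Vertx.vertx().deployVerticle(MainVerticle.class.getName(), options);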

Grails Send mail from Gmail Business Mail Exception: Authentication failed; nested exception is javax.mail.AuthenticationFailedException

I am trying to send an email using Google's business email account.
I am using Grails 3.2.7 and the Grails mail plugin ('org.grails.plugins:mail:2.0.0.RC6').
I am getting the following exception, even though I have set all the authentication credentials and have set my account to allow less secure apps access.
org.springframework.mail.MailAuthenticationException: Authentication failed; nested exception is javax.mail.AuthenticationFailedException
Following are the settings for sending mail
mail {
    host = 'smtp.gmail.com'
    port = 587
    username = "##### BUSINESS EMAIL from GMAIL###"
    password = "PASSWORD"
    props = ["mail.smtp.auth"                  : "true",
             "mail.smtp.socketFactory.port"    : "465",
             "mail.smtp.socketFactory.class"   : "javax.net.ssl.SSLSocketFactory",
             "mail.smtp.socketFactory.fallback": "false"]
}
Can anyone help with why it is not sending email and failing on authentication?
Please note that if I try to send the email using a normal Gmail account with the same settings, it works fine.
Additional information
I am trying to send email to other domains, with the sender being the business Gmail address.
I have sent mail using Grails 3.1.8 and the dependency
compile "org.grails.plugins:mail:2.0.0.RC2".
You can try this in Grails 3.2.7:
mail {
    host = "smtp.gmail.com"
    port = 465
    username = "test@gmail.com"
    password = "password"
    ssl = "on"
    props = ["mail.smtp.auth"                   : "true",
             "mail.smtp.port"                   : "465",
             "mail.smtp.socketFactory.port"     : "465",
             "mail.smtp.socketFactory.class"    : "javax.net.ssl.SSLSocketFactory",
             "mail.imap.ssl.checkserveridentity": "false",
             "mail.imap.ssl.trust"              : "*",
             "mail.smtp.socketFactory.fallback" : "true"]
}
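Once the configuration is in place, a quick way to verify it is a throwaway service method like the sketch below (mailService is injected by the mail plugin; the recipient address is a placeholder):
// Sketch: minimal service method to verify the SMTP settings
class MailCheckService {
    def mailService   // injected by the Grails mail plugin

    def sendTestMail() {
        mailService.sendMail {
            to "someone@example.com"      // placeholder recipient
            subject "Grails mail test"
            body "If you received this, the SMTP settings work."
        }
    }
}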

Trying to Configure Serilog Email sink with appsettings.json to work with Gmail

In a POC I got the SMTP client to send emails through Gmail, so I know my information for connecting to Gmail's SMTP server is correct. I am now trying to configure Serilog through appsettings.json to send my log entries through Gmail. I need to be able to configure it differently for different environments. I currently have it set to Verbose so that I get anything; it won't be that way later. I am not getting anything but my file log entries. I had this working with a local network SMTP server that took defaults and no network credentials. Now I need to set the port, SSL, and network credentials to be able to send through Gmail.
Here is my WriteTo section...
"WriteTo": [
{
"Name": "RollingFile",
"Args": {
"pathFormat": "C:/log/log-{Date}.json",
"formatter": "Serilog.Formatting.Json.JsonFormatter, Serilog",
"fileSizeLimitBytes": 2147483648,
"retainedFileCountLimit": 180,
"restrictedToMinimumLevel": "Verbose"
}
},
{
"Name": "Email",
"Args": {
"connectionInfo": {
"FromEmail": "{email address}",
"ToEmail": "{email address}",
"MailServer": "smtp.gmail.com",
"EmailSubject": "Fatal Error",
"NetworkCredentials": {
"userName": "{gmailuser}#gmail.com",
"password": "{gmailPassword}"
},
"Port": 587,
"EnableSsl" : true
},
"restrictedToMinimumLevel": "Verbose"
}
}
]
},
Any help is appreciated.
Change your port number to 465 and it should work for you. Here's some info on Gmail SMTP settings: https://www.lifewire.com/what-are-the-gmail-smtp-settings-1170854
I'm using .NET Core 2.0 and couldn't get the Serilog email sink to work with the appsettings.json file, but I do have it working by setting the configuration in Program.cs like so:
var logger = new LoggerConfiguration()
    .WriteTo.RollingFile(
        pathFormat: "..\\..\\log\\AppLog.Web-{Date}.txt",
        outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level}] [{SourceContext}] [{EventId}] {Message}{NewLine}{Exception}"
    )
    .WriteTo.Email(new EmailConnectionInfo
        {
            FromEmail = appConfigs.Logger.EmailSettings.FromAddress,
            ToEmail = appConfigs.Logger.EmailSettings.ToAddress,
            MailServer = "smtp.gmail.com",
            NetworkCredentials = new NetworkCredential {
                UserName = appConfigs.Logger.EmailSettings.Username,
                Password = appConfigs.Logger.EmailSettings.Password
            },
            EnableSsl = true,
            Port = 465,
            EmailSubject = appConfigs.Logger.EmailSettings.EmailSubject
        },
        outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level}] {Message}{NewLine}{Exception}",
        batchPostingLimit: 10,
        restrictedToMinimumLevel: Serilog.Events.LogEventLevel.Error
    )
    .CreateLogger();
appsettings won't work for Serilog.Sinks.Email if you need NetworkCredentials. It will raise this exception: System.InvalidOperationException: 'Cannot create instance of type 'System.Net.ICredentialsByHost' because it is either abstract or an interface.'. Use @rcf113's answer to make things work.
To make things work with your Gmail account, you have to:
EnableSsl: true
Port: 465
Create app password
To make appsettings work, you'll have to come up with a custom implementation. This solution was given in @adriangutowski's GitHub answer.
Create a custom static extension class that instantiates a NetworkCredential for the ICredentialsByHost property:
namespace MyWebApi.Extensions
{
    public static class SerilogCustomEmailExtension
    {
        const string DefaultOutputTemplate = "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level}] {Message}{NewLine}{Exception}";

        public static LoggerConfiguration CustomEmail(
            this LoggerSinkConfiguration loggerConfiguration,
            CustomEmailConnectionInfo connectionInfo,
            string outputTemplate = DefaultOutputTemplate,
            LogEventLevel restrictedToMinimumLevel = LevelAlias.Minimum
        )
        {
            return loggerConfiguration.Email(
                connectionInfo,
                outputTemplate,
                restrictedToMinimumLevel
            );
        }

        public class CustomEmailConnectionInfo : EmailConnectionInfo
        {
            public CustomEmailConnectionInfo()
            {
                NetworkCredentials = new NetworkCredential();
            }
        }
    }
}
Then configure Serilog.Using in your appsettings with your assembly name (not the namespace) and add a new entry to Serilog.WriteTo[]:
"Serilog": {
"Using": [ "Serilog.Sinks.Email", "MyWebApi" ],
"MinimumLevel": "Error",
"WriteTo": [
{
"Name": "CustomEmail",
"Args": {
"ConnectionInfo": {
"NetworkCredentials": {
"UserName": "aaaaaaaaaaaaaa#gmail.com",
"Password": "aaaaaaaaaaaaaa"
},
"FromEmail": "aaaaaaaaaaaaaa#gmail.com",
"MailServer": "smtp.gmail.com",
"EmailSubject": "[{Level}] Log Email",
"Port": "465",
"IsBodyHtml": false,
"EnableSsl": true,
"ToEmail": "aaaaaaaaaaaaaa#gmail.com"
},
"RestrictedToMinimumLevel": "Error",
"OutputTemplate": "{Timestamp:yyyy-MM-dd HH:mm} [{Level}] {Message}{NewLine}{Exception}"
}
}
]
}
This configuration worked fine using:
.NET 6
Serilog.Sinks.Email version 2.4.0
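For reference, a minimal sketch of building the logger from this appsettings section in Program.cs (assumes the Serilog and Serilog.Settings.Configuration packages are installed, and that configuration is the application's IConfiguration built from appsettings.json):
using Serilog;

// Sketch: ReadFrom.Configuration picks up the "Serilog" section, including the
// CustomEmail sink registered via the "Using" assembly list above.
Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(configuration)
    .CreateLogger();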

Fiware Cygnus: no data has been persisted in MongoDB

I am trying to use Cygnus with MongoDB, but no data has been persisted in the database.
Here is the notification received by Cygnus:
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Starting transaction (1437482681-118-0000000000)
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "55a73819d0c457bb20b1d467", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "enocean", "isPattern" : "false", "id" : "enocean:myButtonA", "attributes" : [ { "name" : "ButtonValue", "type" : "", "value" : "ON", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-07-20T21:29:56.509293Z" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Event put in the channel (id=1454120446, ttl=10)
Here is my agent configuration:
cygnusagent.sources = http-source
cygnusagent.sinks = OrionMongoSink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /home/egm_demo/usr/fiware-cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
#cygnusagent.sinks.mongo-sink.db_prefix = kura
# prefix for the MongoDB collections
#cygnusagent.sinks.mongo-sink.collection_prefix = button
# true if collection names are based on a hash, false for human readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# ============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Here is my rule:
{
    "grouping_rules": [
        {
            "id": 1,
            "fields": [
                "button"
            ],
            "regex": ".*",
            "destination": "kura",
            "fiware_service_path": "/kuraspath"
        }
    ]
}
Any ideas of what I have missed? Thanks in advance for your help!
This configuration parameter is wrong:
cygnusagent.sinks = OrionMongoSink
According to your configuration, it must be mongo-sink (I mean, you are configuring a Mongo sink named mongo-sink when you configure lines such as cygnusagent.sinks.mongo-sink.type).
In addition, I would recommend not using the grouping rules feature; it is an advanced feature for sending the data to a collection different from the default one, and in a first stage I would play with the default behaviour. Thus, my recommendation is to leave the path to the file in cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file, but comment out all the JSON within it :)
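In other words, only the sink declaration needs to change so that it matches the mongo-sink properties; a minimal sketch of the corrected line:
# Rename the sink so it matches the cygnusagent.sinks.mongo-sink.* properties below
cygnusagent.sinks = mongo-sink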