I know this question was already asked in this post: Send email when error occurs in console command of Symfony2 app, but the answers do not provide a complete solution to the problem at hand and I can't comment on the original post.
I need to send a Monolog error e-mail from a console command. The e-mail is correctly enqueued when using a file spool; unfortunately I'm forced to use a memory spool.
Strangely enough, the code snippet provided to manually flush the spool works for e-mails generated by my own code, but not for those generated by Monolog.
Does anybody know why this is happening and whether it is possible to use a memory spool?
config.yml:
# Swiftmailer Configuration
swiftmailer:
    transport: %mailer_transport%
    host: %mailer_host%
    username: %mailer_user%
    password: %mailer_password%
    spool: { type: memory }

# Monolog Configuration
monolog:
    channels: ["account.create"]
    handlers:
        account.create.group:
            type: group
            members: [account.create.streamed, account.create.buffered]
            channels: [account.create]
        account.create.streamed:
            type: stream
            path: %kernel.logs_dir%/accounts_creation.log
            level: info
        account.create.buffered:
            type: buffer
            handler: account.create.swift
        account.create.swift:
            type: swift_mailer
            from_email: xxx@yyy.com
            to_email: aaa@gmail.com
            subject: 'An Error Occurred while managing zzz!'
            level: critical
config_prod.yml:
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
        nested:
            type: stream
            path: %kernel.logs_dir%/%kernel.environment%.log
            level: debug
            channels: [!account.create]
usage example:
try
{
    //code that could block
}
catch(ManageUserBlockingExceptionInterface $e)
{
    $exitCode = self::EXIT_CODE_ERROR_BLOCKING;
    //blocking exceptions are logged; the message is not acknowledged
    //as consumed, but the queue is stopped
    if(!\is_null($this->logger))
    {
        $this->logger->crit($e->getMessage());
    }
}
The logger is injected into the service via dependency injection:
...
<argument type="service" id="monolog.logger.account.create" on-invalid="null" />
...
and it works: critical errors are streamed to the log file, and the e-mail is also created when Swift Mailer is configured with a file spool.
Finally, the code to manually flush the memory spool is as follows:
protected function flushMailSpool()
{
    $mailer = $this->container->get('mailer');
    $spool = $mailer->getTransport()->getSpool();
    $transport = $this->container->get('swiftmailer.transport.real');
    $spool->flushQueue($transport);
}
It is called immediately after a service deliberately sends an e-mail. I noticed that the same code, put in the command and adapted to the command environment (i.e. $this->container becomes $this->getContainer()), does not work, maybe due to a scope change?
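For reference, this is roughly how I am calling the flush from the command (a minimal sketch, assuming a Symfony 2.x ContainerAwareCommand; the command class and command name are only illustrative):

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class CreateAccountCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this->setName('account:create'); // illustrative command name
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // ... command logic that triggers the service where critical errors are logged ...

        // Manually flush the Swift Mailer memory spool through the real transport,
        // mirroring flushMailSpool() above but using the command's container accessor.
        $mailer = $this->getContainer()->get('mailer');
        $spool = $mailer->getTransport()->getSpool();
        $transport = $this->getContainer()->get('swiftmailer.transport.real');
        $spool->flushQueue($transport);

        return 0;
    }
}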
Related
I have a Type: AWS::Serverless::HttpApi which I am trying to connect to a Type: AWS::Serverless::StateMachine as a trigger, meaning the HTTP API would trigger the Step Functions state machine.
I can get it working by specifying only a single input. For example, when it works, the DefinitionBody looks like this:
DefinitionBody:
  info:
    version: '1.0'
    title:
      Ref: AWS::StackName
  paths:
    "/github/secret":
      post:
        responses:
          default:
            description: "Default response for POST /"
        x-amazon-apigateway-integration:
          integrationSubtype: "StepFunctions-StartExecution"
          credentials:
            Fn::GetAtt: [StepFunctionsApiRole, Arn]
          requestParameters:
            Input: $request.body
            StateMachineArn: !Ref SecretScannerStateMachine
          payloadFormatVersion: "1.0"
          type: "aws_proxy"
          connectionType: "INTERNET"
          timeoutInMillis: 30000
  openapi: 3.0.1
  x-amazon-apigateway-importexport-version: "1.0"
Take note of the following line: Input: $request.body. I am only specifying the $request.body.
However, I need to be able to send both the $request.body and the $request.header.X-Hub-Signature-256. I need to send BOTH of these values to my state machine as input.
I have tried so many different ways. For example:
Input: " { body: $request.body, header: $request.header.X-Hub-Signature-256 }"
and
$request.body
$request.header.X-Hub-Signature-256
and
Input: $request
I get different errors each time, but this is the main one:
Warnings found during import: Unable to create integration for resource at path 'POST /github/secret': Invalid selection expression specified: Validation Result: warnings : [], errors : [Invalid source: $request specified for destination: Input].
Any help on how to pass multiple values would be so appreciated.
I am working with nodemailer to send email using a custom SMTP server.
let transporter = nodemailer.createTransport({
  host: 'my.smtp.host',
  port: 587,
  secure: false,
  auth: {
    user: 'user',
    pass: 'password',
  },
  debug: true,
  logger: true
})

let info = await transporter.sendMail({
  from: from,
  to: to,
  subject: subject,
  text: content,
  html: content,
  cc: cc,
  bcc: bcc,
})
The following is the result of sendMail:
{
  accepted: [
    'to@to.com'
  ],
  rejected: [],
  envelopeTime: 39,
  messageTime: 49,
  messageSize: 1730,
  response: '250 2.0.0 Ok: queued as 5513B432BE6',
  envelope: {
    from: 'from@from.com',
    to: [
      'to@to.com'
    ]
  },
  messageId: '<274421c8-1abd-4973-dd8e-f57285b46a70@from.com>'
}
And the following is the log message:
...
[2021-06-09 19:00:21] INFO [StdWZWNkAto] <1730 bytes encoded mime message (source size 1693 bytes)>
[2021-06-09 19:00:21] DEBUG [StdWZWNkAto] S: 250 2.0.0 Ok: queued as 5513B432BE6
[2021-06-09 19:00:21] DEBUG [StdWZWNkAto] Closing connection to the server using "end"
I already tested this SMTP server with other tools and it works correctly.
What's wrong?
Please help me.
Well, this means that nodemailer successfully sent the email and it is now in the queue of your custom SMTP server, so you have to deal with it there. For instance, sendmail allows you to see the queue via the mailq command and to resend via sendmail -q (or sendmail -q -v for more information). If you need more details, you should provide info about your SMTP server.
I cannot start my Dropwizard application after adding database details to my application configuration file (server.yml).
server.yml (app config file)
server:
  applicationConnectors:
  - type: http
    port: 8080
  adminConnectors:
  - type: http
    port: 9001

database:
  # the name of your JDBC driver
  driverClass: org.postgresql.Driver
  # the username
  user: dbuser
  # the password
  password: pw123
  # the JDBC URL
  url: jdbc:postgresql://localhost/database
  # any properties specific to your JDBC driver:
  properties:
    charSet: UTF-8
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness
  validationQuery: "/* MyService Health Check */ SELECT 1"
  # the timeout before a connection validation queries fail
  validationQueryTimeout: 3s
  # the minimum number of connections to keep open
  minSize: 8
  # the maximum number of connections to keep open
  maxSize: 32
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: false
  # the amount of time to sleep between runs of the idle connection validation, abandoned cleaner and idle pool resizing
  evictionInterval: 10s
  # the minimum amount of time a connection must sit idle in the pool before it is eligible for eviction
  minIdleTime: 1 minute
As a result of running the Dropwizard application I can see that the configuration file has an error:
* Unrecognized field at: database
Did you mean?:
- metrics
- server
- logging
In addition to the code given in the Dropwizard example, you need to add a setter for the database property.
@Valid
@NotNull
@JsonProperty("database")
private DataSourceFactory database = new DataSourceFactory();

public DataSourceFactory getDataSourceFactory() {
    return database;
}

public void setDatabase(DataSourceFactory database) {
    this.database = database;
}
In your application configuration Java file, you have to add the matching property for "database". If the properties you're specifying are the standard ones (which they look to be, good!), then you can stick with the DataSourceFactory type:
public class ExampleConfiguration extends Configuration {

    @Valid
    @NotNull
    @JsonProperty
    private DataSourceFactory database = new DataSourceFactory();

    public DataSourceFactory getDataSourceFactory() {
        return database;
    }

    public void setDatabase(DataSourceFactory database) {
        this.database = database;
    }
}
Example here: http://www.dropwizard.io/0.9.0/docs/manual/jdbi.html
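For completeness, here is a rough sketch of how the factory is then consumed in the application class, following the JDBI manual linked above (the application class and DAO names are only illustrative and assume the dropwizard-jdbi module is on the classpath):

import io.dropwizard.Application;
import io.dropwizard.jdbi.DBIFactory;
import io.dropwizard.setup.Environment;
import org.skife.jdbi.v2.DBI;

public class ExampleApplication extends Application<ExampleConfiguration> {

    @Override
    public void run(ExampleConfiguration configuration, Environment environment) throws Exception {
        // Build a DBI instance from the validated "database" section of server.yml.
        final DBIFactory factory = new DBIFactory();
        final DBI jdbi = factory.build(environment, configuration.getDataSourceFactory(), "postgresql");

        // Register resources/DAOs that need the database here, e.g.:
        // final UserDAO dao = jdbi.onDemand(UserDAO.class);  // UserDAO is hypothetical
    }
}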
I'm part of a team that is developing an application that uses the FIWARE GEs as part of the Smart-AgriFood accelerator.
We are using the Orion Context Broker for gathering the data provided by the sensor network, and we intend to use the PEP Proxy to authenticate the sensor nodes for access to the Orion instance. We have tried the following PEP proxies:
https://github.com/telefonicaid/fiware-orion-pep
https://github.com/ging/fi-ware-pep-proxy
We have only had success with the second implementation of the proxy (fi-ware-pep-proxy). With fiware-orion-pep we haven't been able to connect to the global Keystone instance (account.lab.fi-ware.org); we have tried both account.lab... and cloud.lab... My questions are:
1) Is the Keystone (IdM) instance for authentication account.lab or cloud.lab, and which ports and addresses should be used?
2) Is fiware-orion-pep prepared to authenticate against account.lab.fi-ware.org? Here is why I ask this:
This one works with the curl command at >> cloud.lab.fiware.org:4730/v2.0/tokens
{
    "auth": {
        "passwordCredentials": {
            "username": "<my_user>",
            "password": "<my_password>"
        }
    }
}'
This one doesn't work with the curl command at >> account.lab.fi-ware.org:5000/v3/auth/tokens
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "<my_domain>"
                    },
                    "name": "<my_user>",
                    "password": "<my_password>"
                }
            }
        }
    }
}'
3) Which implementation should I be using to authenticate the devices or other calls to the Orion instance?
Here are the configurations that I used:
fiware-orion-pep
config.authentication = {
    checkHeaders: true,
    module: 'keystone',
    user: '<my_user>',
    password: '<my_password>',
    domainName: '<my_domain>',
    retries: 3,
    cacheTTLs: {
        users: 1000,
        projectIds: 1000,
        roles: 60
    },
    options: {
        protocol: 'http',
        host: 'account.lab.fiware.org',
        port: 5000,
        path: '/v3/role_assignments',
        authPath: '/v3/auth/tokens'
    }
};
fi-ware-pep-proxy (this one works); I have set the listening port to 1026 in the source code
var config = {};
config.account_host = 'https://account.lab.fiware.org';
config.keystone_host = 'cloud.lab.fiware.org';
config.keystone_port = 4731;
config.app_host = 'localhost';
config.app_port = '10026';
config.username = 'pepProxy';
config.password = 'pepProxy';
// in seconds
config.chache_time = 300;
config.check_permissions = false;
config.magic_key = undefined;
module.exports = config;
Thanks in advance for the time ... :)
There are currently some differences in how both PEP Proxies authenticate and validate against the global instances, so they do not behave in exactly the same way.
The one in telefonicaid/fiware-orion-pep was developed to fulfill the PEP Proxy requirements (authentication and validation against a Keystone and Access Control) in individual projects with their own Keystone and Keypass (a flavour of Access Control) installations, and so it evolved faster than the one in ging/fi-ware-pep-proxy and in a slightly different direction. As an example, the former supports multitenancy using the fiware-service and fiware-servicepath headers, while the latter is transparent to those mechanisms. This development direction meant also that the functionality slightly differs from time to time from the one in the global instance.
That being said, the concrete answer:
- Both PEP Proxies should be able to contact the global instance. If one doesn't, please file a bug in the issues of the GitHub repository and we will fix it as soon as possible.
- The ging/fi-ware-pep-proxy was specifically designed for accessing the global instance, so you should be able to use it as expected.
Please, if you try to proceed with the telefonicaid/fiware-orion-pep take note also that:
- the configuration flag authentication.checkHeaders should be false, as the global instance does not currently support multitenancy.
- the current stable release (0.5.0) is about to change to the next version (probably today), so maybe some of the problems will be solved by the update.
Hope this clarifies some of your doubts.
[EDIT]
1) I have already installed telefonicaid/fiware-orion-pep (v0.6.0), both from sources and from the RPM package created following the tutorial available on GitHub. When creating the RPM package, it is created with the following name: pep-proxy-0.4.0_next-0.noarch.rpm.
2) Here is the configuration that I used:
/opt/fiware-orion-pep/config.js
var config = {};

config.resource = {
    original: {
        host: 'localhost',
        port: 10026
    },
    proxy: {
        port: 1026,
        adminPort: 11211
    }
};

config.authentication = {
    checkHeaders: false,
    module: 'keystone',
    user: '<##################>',
    password: '<###################>',
    domainName: 'admin_domain',
    retries: 3,
    cacheTTLs: {
        users: 1000,
        projectIds: 1000,
        roles: 60
    },
    options: {
        protocol: 'http',
        host: 'cloud.lab.fiware.org',
        port: 4730,
        path: '/v3/role_assignments',
        authPath: '/v3/auth/tokens'
    }
};

config.ssl = {
    active: false,
    keyFile: '',
    certFile: ''
};

config.logLevel = 'DEBUG'; // List of component

config.middlewares = {
    require: 'lib/plugins/orionPlugin',
    functions: [
        'extractCBAction'
    ]
};

config.componentName = 'orion';
config.resourceNamePrefix = 'fiware:';
config.bypass = false;
config.bypassRoleId = '';

module.exports = config;
/etc/sysconfig/pepProxy
# General Configuration
############################################################################
# Port where the proxy will listen for requests
PROXY_PORT=1026
# User to execute the PEP Proxy with
PROXY_USER=pepproxy
# Host where the target Context Broker is located
# TARGET_HOST=localhost
# Port where the target Context Broker is listening
# TARGET_PORT=10026
# Maximum level of logs to show (FATAL, ERROR, WARNING, INFO, DEBUG)
LOG_LEVEL=DEBUG
# Indicates what component plugin should be loaded with this PEP: orion, keypass, perseo
COMPONENT_PLUGIN=orion
#
# Access Control Configuration
############################################################################
# Host where the Access Control (the component who knows the policies for the incoming requests) is located
# ACCESS_HOST=
# Port where the Access Control is listening
# ACCESS_PORT=
# Host where the authentication authority for the Access Control is located
# AUTHENTICATION_HOST=
# Port where the authentication authority is listening
# AUTHENTICATION_PORT=
# User name of the PEP Proxy in the authentication authority
PROXY_USERNAME=XXXXXXXXXXXXX
# Password of the PEP Proxy in the Authentication authority
PROXY_PASSWORD=XXXXXXXXXXXXX
In the files above I have tried the following parameters:
Keystone instance: account.lab.fiware.org or cloud.lab.fiware.org
User: pep or pepProxy or "user from fiware account"
Pass: pep or pepProxy or "user password from account"
Port: 4730, 4731, 5000
The result is the same as before... telefonicaid/fiware-orion-pep is unable to authenticate:
log file at /var/log/pepProxy/pepProxy
time=2015-04-13T14:49:24.718Z | lvl=ERROR | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=VALIDATION-GEN-003] Error connecting to Keystone authentication: KEYSTONE_AUTHENTICATION_ERROR: There was a connection error while authenticating to Keystone: 500
time=2015-04-13T14:49:24.721Z | lvl=DEBUG | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=response-time: 50745 statusCode: 500
result from the client console
{
    "message": "There was a connection error while authenticating to Keystone: 500",
    "name": "KEYSTONE_AUTHENTICATION_ERROR"
}
Am I doing something wrong here?
I am a noob to grunt and would like to start using it.
Here is my gruntfile:
module.exports = function(grunt) {

    // Project configuration.
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        devDir: 'dev/dir',
        prodDir: 'prod/dir',
        'sftp-deploy': {
            prod: {
                auth: {
                    host: 'server.com',
                    port: 22,
                    authKey: {
                        "username": "username1",
                        "password": "password2"
                    }
                },
                src: '<%=devDir%>',
                dest: '/test/env/',
                concurrency: 4,
                progress: true
            }
        }
    });

    // load modules
    grunt.loadNpmTasks('grunt-sftp-deploy');

    // Default task(s).
    grunt.registerTask('default', ['sftp-deploy']);
};
I am getting this error when I run 'grunt' in PowerShell:
Running "sftp-deploy:prod" (sftp-deploy) task
Logging in with username username1
Concurrency : 4
Fatal error: Connection :: error
What am I doing wrong?
thanks!
Ok, a few things to try... (sorry - a month late!)
run:
grunt sftp-deploy --verbose
This will give you a little more info regarding your error.
I solved my error after realising I couldn't create folders on my server, only upload files. So it might be worth testing that you can accomplish manually what you're asking grunt to do.
Lastly, try moving your username / password into a .ftppass file
link [here](https://www.npmjs.com/package/grunt-sftp-deploy)
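If you go down the .ftppass route, the setup is roughly the following (the key name "key1" is arbitrary; double-check the exact format in the grunt-sftp-deploy README linked above).

.ftppass (a JSON file kept next to the gruntfile and out of version control):

{
  "key1": {
    "username": "username1",
    "password": "password2"
  }
}

and in the gruntfile, reference the key instead of embedding the credentials:

auth: {
  host: 'server.com',
  port: 22,
  authKey: 'key1'
}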