pytest caplog LogCaptureFixture is broken when using logging.config.dictConfig()

I have been going around in circles on this problem for several days now and I am no closer to a solution than when I started.
I have reviewed all of the other Stack Overflow entries dealing with the pytest caplog fixture, and I have narrowed my problem down to the use of logging.config.dictConfig().
I have tried multiple configurations, with and without propagate=True, and they all result in the same problem: logging is not captured when using dictConfig().
Pytest logging, when used in conjunction with logging.config.dictConfig(), is broken.
Here's my test code which demonstrates the problem:
# =====================
# File: test_caplog.py
# =====================
class TestCapLog:
    def _test_logger(self, tf_caplog):
        """Display caplog capture text"""
        # display capture log
        print("\nCAPLOG:")
        output = tf_caplog.text.rstrip('\n').split(sep='\n')
        if output == ['']:
            print("Nothing captured")
        else:
            for i in range(len(output)):
                print(f'{i}: {output[i]}')

    def test_caplog0_root(self, caplog):
        """Test caplog 'root' logger w/o dictConfig()"""
        import logging
        # use logging configuration "as-is"
        logger = logging.getLogger()
        # log at all logging levels
        logger.debug('DEBUG: log entry captured')
        logger.info('INFO: log entry captured')
        logger.error('ERROR: log entry captured')
        logger.warning('WARNING: log entry captured')
        self._test_logger(caplog)

    def test_caplog1_main1(self, caplog):
        """Test caplog 'main' logger w/ dictConfig(), propagate=False"""
        import logging.config
        import logging
        import log_config
        # configure logging, propagate False
        log_config.LOGGING['loggers']['main']['propagate'] = False
        logging.config.dictConfig(log_config.LOGGING)
        logger = logging.getLogger(name='main')
        # log at all logging levels
        logger.debug('DEBUG: log entry captured')
        logger.info('INFO: log entry captured')
        logger.error('ERROR: log entry captured')
        logger.warning('WARNING: log entry captured')
        self._test_logger(caplog)

    def test_caplog1_main2(self, caplog):
        """Test caplog 'main' logger w/ dictConfig(), propagate=True"""
        import logging.config
        import logging
        import log_config
        # configure logging, propagate True
        log_config.LOGGING['loggers']['main']['propagate'] = True
        logging.config.dictConfig(log_config.LOGGING)
        logger = logging.getLogger(name='main')
        # log at all logging levels
        logger.debug('DEBUG: log entry captured')
        logger.info('INFO: log entry captured')
        logger.error('ERROR: log entry captured')
        logger.warning('WARNING: log entry captured')
        self._test_logger(caplog)
Here's the logging configuration file:
# =====================
# File: log_config.py
# =====================
"""logging configuration support"""
# System imports
import logging.handlers
import sys

#: logging formatters
_formatters = {
    'msgonly': {
        'format': '%(message)s'
    },
    'minimal': {
        'format': '(%(name)s) %(message)s'
    },
    'normal': {
        'format': '%(asctime)s (%(name)s) %(levelname)s %(message)s'
    },
    'debug': {
        'format': '%(asctime)s (%(name)s) %(levelname)s %(module)s %(funcName)s %(message)s'
    }
}

#: logging stream handler string
LOGGING_STREAM_HANDLER = 'logging.StreamHandler'

#: logging timed file handler string
LOGGING_TIMED_FILE_HANDLER = 'logging.handlers.TimedRotatingFileHandler'

#: logging handlers
_handlers = {
    'debugHandler': {
        'class': LOGGING_STREAM_HANDLER,
        'level': logging.DEBUG,
        'formatter': 'debug',
        'stream': sys.stdout,
    },
    'consoleHandler': {
        'class': LOGGING_STREAM_HANDLER,
        'level': logging.DEBUG,
        'formatter': 'normal',
        'stream': sys.stdout,
    },
    'fileHandler': {
        'class': LOGGING_TIMED_FILE_HANDLER,
        'level': logging.DEBUG,
        'formatter': 'normal',
        'filename': 'logging.log',
        'when': 'D',
        'interval': 1,
        'backupCount': 7,
        'delay': True,
    },
}

#: Loggers
_loggers = {
    '': {
        'level': logging.INFO,
        'handlers': ['consoleHandler', 'fileHandler'],
        'qualname': 'root',
        'propagate': False,
    },
    'root': {
        'level': logging.DEBUG,
        'handlers': ['debugHandler', 'fileHandler'],
        'qualname': 'root',
        'propagate': False,
    },
    '__main__': {
        'level': logging.DEBUG,
        'handlers': ['debugHandler', 'fileHandler'],
        'qualname': '__main__',
        'propagate': False,
    },
    'main': {
        'level': logging.DEBUG,
        'handlers': ['debugHandler', 'fileHandler'],
        'qualname': 'main',
        'propagate': False,
    },
}

#: Configuration dictionary
LOGGING = {
    "version": 1,
    "loggers": _loggers,
    "handlers": _handlers,
    "formatters": _formatters,
}
The 3 tests that I run are:
1. logging using the root logger with no call to dictConfig()
2. logging using the named logger 'main' with a call to dictConfig() and propagate=False
3. logging using the named logger 'main' with a call to dictConfig() and propagate=True
What follows is the output of executing my test code:
/home/mark/PycharmProjects/pytest_caplog/venv/bin/python /home/mark/.local/share/JetBrains/pycharm-2022.2.2/plugins/python/helpers/pycharm/_jb_pytest_runner.py --path /home/mark/PycharmProjects/pytest_caplog/test_caplog.py
Testing started at 1:09 AM ...
Launching pytest with arguments /home/mark/PycharmProjects/pytest_caplog/test_caplog.py --no-header --no-summary -q in /home/mark/PycharmProjects/pytest_caplog
============================= test session starts ==============================
collecting ... collected 3 items
test_caplog.py::TestCapLog::test_caplog0_root PASSED [ 33%]
CAPLOG:
0: ERROR root:test_caplog.py:23 ERROR: log entry captured
1: WARNING root:test_caplog.py:24 WARNING: log entry captured
test_caplog.py::TestCapLog::test_caplog1_main1 PASSED [ 66%]2022-12-22 01:09:28,810 (main) DEBUG test_caplog test_caplog1_main1 DEBUG: log entry captured
2022-12-22 01:09:28,810 (main) INFO test_caplog test_caplog1_main1 INFO: log entry captured
2022-12-22 01:09:28,810 (main) ERROR test_caplog test_caplog1_main1 ERROR: log entry captured
2022-12-22 01:09:28,811 (main) WARNING test_caplog test_caplog1_main1 WARNING: log entry captured
CAPLOG:
Nothing captured
test_caplog.py::TestCapLog::test_caplog1_main2 PASSED [100%]2022-12-22 01:09:28,815 (main) DEBUG test_caplog test_caplog1_main2 DEBUG: log entry captured
2022-12-22 01:09:28,815 (main) DEBUG DEBUG: log entry captured
2022-12-22 01:09:28,815 (main) INFO test_caplog test_caplog1_main2 INFO: log entry captured
2022-12-22 01:09:28,815 (main) INFO INFO: log entry captured
2022-12-22 01:09:28,815 (main) ERROR test_caplog test_caplog1_main2 ERROR: log entry captured
2022-12-22 01:09:28,815 (main) ERROR ERROR: log entry captured
2022-12-22 01:09:28,816 (main) WARNING test_caplog test_caplog1_main2 WARNING: log entry captured
2022-12-22 01:09:28,816 (main) WARNING WARNING: log entry captured
CAPLOG:
Nothing captured
============================== 3 passed in 0.03s ===============================
Process finished with exit code 0
The only way that I have been able to get caplog to behave as I expect is to not use dictConfig() at all and write my own get_logger() function.
That seems like a waste and would not be necessary if the pytest caplog fixture would respect the dictConfig() settings.
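To illustrate what I mean by a hand-rolled get_logger(), here is a minimal sketch (my own helper, not from pytest or any library): it attaches handlers programmatically instead of going through dictConfig(), and leaves propagation on so records still reach the root logger, where caplog's capture handler listens.
# get_logger.py - illustrative workaround only, not part of any library
import logging
import sys

def get_logger(name: str, level: int = logging.DEBUG) -> logging.Logger:
    """Configure a named logger by hand instead of via dictConfig()."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if not logger.handlers:
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(
            logging.Formatter('%(asctime)s (%(name)s) %(levelname)s %(message)s'))
        logger.addHandler(handler)
    # keep propagation on so records bubble up to the root logger,
    # which is where pytest's caplog handler captures them
    logger.propagate = True
    return logger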
I have read through the pytest documentation and none of the caplog examples that I have found address using anything other than the root logger.
At this point I am reconsidering my decision to switch from the standard Python unittest framework to pytest.
This is a major blocker for me.
Any help that anyone can give me will be greatly appreciated.

I am not sure if this solution is acceptable for you, but you can have a look. It seems you need to overwrite the logging-plugin's caplog_handler property (the handler used by the caplog fixture) after the dictConfig() call.
You can write your own fixture that applies the config and then overwrites the caplog_handler property of the logging-plugin instance with the LogCaptureHandler described in your config. This handler must also be listed on every logger that needs capturing.
# log_config.py
...
CAPLOG_HANDLER = '_pytest.logging.LogCaptureHandler'

#: logging handlers
_handlers = {
    'logCaptureHandler': {
        'class': CAPLOG_HANDLER,
        'level': logging.DEBUG,
        'formatter': 'debug'
    },
    'debugHandler': {
        ...
    'main': {
        'level': logging.DEBUG,
        'handlers': ['debugHandler', 'fileHandler', 'logCaptureHandler'],
        'qualname': 'main',
        'propagate': False,
    },
...
# conftest.py
import logging.config

import log_config
import pytest


@pytest.fixture(scope="function")
def logging_config(request):
    logging_plugin = request.config.pluginmanager.get_plugin("logging-plugin")
    config = getattr(request, "param", log_config.LOGGING)
    logging.config.dictConfig(config)
    logging_plugin.caplog_handler = logging._handlers["logCaptureHandler"]
Also keep in mind that the logging config must not be reapplied during tests with logging.config.dictConfig(log_config.LOGGING), because that recreates the handlers.
So logging configuration should happen only through your logging_config fixture.
To change the config before a test you can use indirect parametrization. Here is an example that changes propagate on the main logger in the third test:
import log_config
import logging
import pytest


class TestCapLog:
    def _test_logger(self, tf_caplog):
        """Display caplog capture text"""
        # display capture log
        print("\nCAPLOG:")
        output = tf_caplog.text.rstrip('\n').split(sep='\n')
        if output == ['']:
            print("Nothing captured")
        else:
            for i in range(len(output)):
                print(f'{i}: {output[i]}')

    def test_caplog0_root(self, caplog):
        """Test caplog 'root' logger w/o dictConfig()"""
        import logging
        # use logging configuration "as-is"
        logger = logging.getLogger()
        # log at all logging levels
        logger.debug('DEBUG: log entry captured')
        logger.info('INFO: log entry captured')
        logger.error('ERROR: log entry captured')
        logger.warning('WARNING: log entry captured')
        self._test_logger(caplog)

    def test_caplog1_main1(self, logging_config, caplog):
        """Test caplog 'main' logger w/ dictConfig(), propagate=False"""
        import logging
        # configure logging, propagate False
        logger = logging.getLogger(name='main')
        # log at all logging levels
        logger.debug('DEBUG: log entry captured')
        logger.info('INFO: log entry captured')
        logger.error('ERROR: log entry captured')
        logger.warning('WARNING: log entry captured')
        self._test_logger(caplog)

    MAIN_PROPAGATE_TRUE = log_config.LOGGING.copy()
    MAIN_PROPAGATE_TRUE['loggers']['main']['propagate'] = True

    @pytest.mark.parametrize("logging_config", [MAIN_PROPAGATE_TRUE], indirect=True)
    def test_caplog1_main2(self, logging_config, caplog):
        """Test caplog 'main' logger w/ dictConfig(), propagate=True"""
        # configure logging, propagate True
        # logging.config.dictConfig(log_config.LOGGING)
        logger = logging.getLogger(name='main')
        # log at all logging levels
        logger.debug('DEBUG: log entry captured')
        logger.info('INFO: log entry captured')
        logger.error('ERROR: log entry captured')
        logger.warning('WARNING: log entry captured')
        self._test_logger(caplog)
You can also rewrite the fixture to use mergedeep to merge your initial config (LOGGING) with request.param, so you don't have to define and pass the whole config to @pytest.mark.parametrize; a sketch follows.
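A rough sketch of that variant (untested; it assumes the third-party mergedeep package, whose merge() deep-merges mappings into its first argument):
# conftest.py - mergedeep variant of the fixture above (sketch, untested)
import copy
import logging.config

import pytest
from mergedeep import merge

import log_config


@pytest.fixture(scope="function")
def logging_config(request):
    logging_plugin = request.config.pluginmanager.get_plugin("logging-plugin")
    # deep-copy the base config so parametrized overrides never mutate LOGGING
    config = copy.deepcopy(log_config.LOGGING)
    merge(config, getattr(request, "param", {}))
    logging.config.dictConfig(config)
    logging_plugin.caplog_handler = logging._handlers["logCaptureHandler"]
A parametrized test would then pass only the override it cares about, e.g. {"loggers": {"main": {"propagate": True}}}, instead of a full copy of LOGGING.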

Related

reexport - SyntaxError: Unexpected token 'export'

I have a file that imports from the API and re-exports everything at once:
export { extractValue, parse, parseCommand } from './parser'
export { Manager, EVENTS } from './manager'
export { runCLI, runCommand, bootstrapCommandManager } from './cli'
I receive an error:
export { extractValue, parse, parseCommand } from './parser'
^^^^^^
SyntaxError: Unexpected token 'export'
Here is my babel.config.js:
module.exports = {
  presets: [['@babel/preset-env', {targets: {node: 'current'}}]],
  plugins: [
    ['@babel/plugin-transform-modules-commonjs'],
    ['@babel/plugin-proposal-decorators', {'legacy': true}],
    ['@babel/plugin-proposal-class-properties'],
    ['@babel/plugin-proposal-export-default-from']
  ]
};
@babel/plugin-proposal-export-default-from does not help.
It turned out Babel was not compiling files from the node_modules directory; an ignore rule had to be set. Both of the variants below work.
Variant A: babel-node
npx babel-node --ignore="/node_modules\/(?\!console-command-manager)/" --config-file ./babel.config.js ./src/index.js
I failed to move the --ignore argument into ./babel.config.js.
Variant B: node execution with -r runner.js
Execution:
node -r ./runner.js src/index.js
Runner:
const config = require('./babel.config.js')
console.log(config)
require("@babel/register")({
  extensions: ['.js'],
  ignore: [
    /node_modules[\\/](?!console-command-manager)/
  ],
  ...config
});

Ejabberd - ejabberd_auth_external:failure:103 External authentication program failed when calling 'check_password'

I already have a users schema with an authentication key and wanted to authenticate against that. I tried implementing authentication via SQL, but due to the different structure of my schema I was getting errors, so I implemented the external-authentication method instead. The technologies and OS used in my application are:
Node.JS
Ejabberd as XMPP server
MySQL Database
React-Native (Front-End)
OS - Ubuntu 18.04
I implemented the external authentication configuration as described in https://docs.ejabberd.im/admin/configuration/#external-script and took the PHP script https://www.ejabberd.im/files/efiles/check_mysql.php.txt as an example. But I am getting the error shown below in error.log. In ejabberd.yml I have the following configuration:
...
host_config:
  "example.org.co":
    auth_method: [external]
    extauth_program: "/usr/local/etc/ejabberd/JabberAuth.class.php"
    auth_use_cache: false
...
Also, is there any external auth javascript script?
Here are error.log and ejabberd.log:
error.log
2019-03-19 07:19:16.814 [error]
<0.524.0>@ejabberd_auth_external:failure:103 External authentication
program failed when calling 'check_password' for admin@example.org.co:
disconnected
ejabberd.log
2019-03-19 07:19:16.811 [debug] <0.524.0>@ejabberd_http:init:151 S:
[{[<<"api">>],mod_http_api},{[<<"admin">>],ejabberd_web_admin}]
2019-03-19 07:19:16.811 [debug]
<0.524.0>@ejabberd_http:process_header:307 (#Port<0.13811>) http
query: 'POST' <<"/api/register">>
2019-03-19 07:19:16.811 [debug]
<0.524.0>@ejabberd_http:process:394 [<<"api">>,<<"register">>] matches
[<<"api">>]
2019-03-19 07:19:16.811 [info]
<0.364.0>@ejabberd_listener:accept:238 (<0.524.0>) Accepted connection
::ffff:ip -> ::ffff:ip
2019-03-19 07:19:16.814 [info]
<0.524.0>@mod_http_api:log:548 API call register
[{<<"user">>,<<"test">>},{<<"host">>,<<"example.org.co">>},{<<"password">>,<<"test">>}]
from ::ffff:ip
2019-03-19 07:19:16.814 [error]
<0.524.0>@ejabberd_auth_external:failure:103 External authentication
program failed when calling 'check_password' for admin@example.org.co:
disconnected
2019-03-19 07:19:16.814 [debug]
<0.524.0>@mod_http_api:extract_auth:171 Invalid auth data:
{error,invalid_auth}
Any help regarding this topic will be appreciated.
1) Your auth_method config looks good.
2) Here is a Python script I've used and adapted to do external authentication for ejabberd.
#!/usr/bin/python
import sys
from struct import *
import os

def openAuth(args):
    (user, server, password) = args
    # Implement your interactions with your service / database
    # Return True or False
    return True

def openIsuser(args):
    (user, server) = args
    # Implement your interactions with your service / database
    # Return True or False
    return True

def loop():
    switcher = {
        "auth": openAuth,
        "isuser": openIsuser,
        "setpass": lambda(none): True,
        "tryregister": lambda(none): False,
        "removeuser": lambda(none): False,
        "removeuser3": lambda(none): False,
    }
    data = from_ejabberd()
    to_ejabberd(switcher.get(data[0], lambda(none): False)(data[1:]))
    loop()

def from_ejabberd():
    input_length = sys.stdin.read(2)
    (size,) = unpack('>h', input_length)
    return sys.stdin.read(size).split(':')

def to_ejabberd(result):
    if result:
        sys.stdout.write('\x00\x02\x00\x01')
    else:
        sys.stdout.write('\x00\x02\x00\x00')
    sys.stdout.flush()

if __name__ == "__main__":
    try:
        loop()
    except error:
        pass
I didn't write the from_ejabberd() and to_ejabberd() communication functions myself, and unfortunately I can't find the original source anymore.
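If it helps, the wire protocol is easy to exercise by hand: each request is a 2-byte big-endian length followed by a colon-separated payload such as auth:user:server:password, and the reply is 4 bytes whose last two encode 0 or 1. A rough sketch (the ./extauth.py path is a placeholder for wherever you saved the script above):
# check_extauth.py - manual test of the external-auth wire protocol (sketch only)
import struct
import subprocess

def ask(proc, request):
    """Send one length-prefixed request and decode the 4-byte reply."""
    payload = request.encode()
    proc.stdin.write(struct.pack('>h', len(payload)) + payload)
    proc.stdin.flush()
    (_length, result) = struct.unpack('>hh', proc.stdout.read(4))
    return result == 1

# "./extauth.py" is a placeholder for the external-auth script path
proc = subprocess.Popen(['./extauth.py'],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
print(ask(proc, 'auth:admin:example.org.co:secret'))   # True with the stub above
print(ask(proc, 'isuser:admin:example.org.co'))        # True with the stub above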

ELK, File beat cut some text from message

I have ELK (filebeat -> logstash -> elasticsearch <- kibana) running on Windows 10. I fed it the following two lines, and found that filebeat does not send the whole text; the beginning of each line is cut off.
2018-04-27 10:42:49 [http-nio-8088-exec-1] - INFO - app-info - injectip ip 192.168.16.89
2018-04-27 10:42:23 [RMI TCP Connection(10)-127.0.0.1] - INFO - org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring FrameworkServlet 'dispatcherServlet'
In the filebeat console, I notice the following output:
2018-05-24T09:02:50.361+0800 DEBUG [publish] pipeline/processor.go:275 Publish event: {
  "@timestamp": "2018-05-24T01:02:50.361Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.3"
  },
  "source": "e:\\sjj\\xxx\\YKT\\ELK\\twoFormats.log",
  "offset": 97083,
  "message": "xec-1] - INFO - app-info - injectip ip 192.168.16.89",
  "prospector": {
    "type": "log"
  },
  "beat": {
    "name": "DESKTOP-M4AFV3I",
    "hostname": "DESKTOP-M4AFV3I",
    "version": "6.2.3"
  }
}
and
2018-05-24T09:11:10.374+0800 DEBUG [publish] pipeline/processor.go:275 Publish event: {
  "@timestamp": "2018-05-24T01:11:10.373Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.3"
  },
  "prospector": {
    "type": "log"
  },
  "beat": {
    "version": "6.2.3",
    "name": "DESKTOP-M4AFV3I",
    "hostname": "DESKTOP-M4AFV3I"
  },
  "source": "e:\\sjj\\xxx\\YKT\\ELK\\twoFormats.log",
  "offset": 97272,
  "message": "n(10)-127.0.0.1] - INFO - org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring FrameworkServlet 'dispatcherServlet'"
}
In the console you can see that part of the message is cut off at the front: in the first case '2018-04-27 10:42:49 [http-nio-8088-e' is missing, and in the second case '2018-04-27 10:42:23 [RMI TCP Connectio' is missing.
Why does filebeat do this? It makes my regex generate a parse exception in logstash.
My filebeat.yml file is as follows:
#=========================== Filebeat prospectors =============================
filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  #enabled: false
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - e:\sjj\xxx\YKT\ELK\twoFormats.log

  ### Multiline options

  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["localhost:5044"]

Compile & compress SASS on deploybot with Grunt

I've got a Grunt task to compile and compress my JS and SASS files, which all works fine locally, but when I try using it on deploybot.com I just get an error from:
sass sass/main.scss public/css/main.css --style=compressed --no-cache
This is my grunt file:
module.exports = function(grunt){
  grunt.initConfig({
    concat:{
      options:{
        stripBanners: true,
        sourceMap: true,
        sourceMapName: 'src/js/jsMap'
      },
      dist:{
        src: ['js/vendor/jquery.slicknav.js', 'js/vendor/swiper.js', 'js/app/*.js'],
        dest: 'src/js/main.js'
      },
    },
    copy:{
      js:{
        files:[
          { src: 'src/js/main.js', dest: 'public/js/main.js', },
          { src: 'src/js/jsMap', dest: 'public/js/jsMap', }
        ],
      },
    },
    uglify:{
      production:{
        options:{
          sourceMap: true,
          sourceMapIncludeSources: true,
          sourceMapIn: 'src/js/jsMap', // input sourcemap from a previous compilation
        },
        files: {
          'public/js/main.js': ['src/js/main.js'],
        },
      },
    },
    sass:{
      dev:{
        options:{
          style: 'expanded'
        },
        files:{
          'public/css/main.css': 'sass/main.scss'
        }
      },
      production:{
        options:{
          style: 'compressed',
          noCache: true
        },
        files:{
          'public/css/main.css': 'sass/main.scss'
        }
      }
    },
    watch: {
      dev:{
        files: ['js/**/*.js', 'sass/*.scss'],
        tasks: ['build-dev'],
        options: {
          spawn: false,
          interrupt: true,
        },
      },
    },
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('build-dev', ['concat', 'copy:js', 'sass:dev']);
  grunt.registerTask('build-prod', ['concat', 'uglify:production', 'sass:production']);
  grunt.registerTask("watch-dev", ['watch:dev']);
};
These are the commands I'm running to compile and compress my code. All the version-specific stuff was an attempt to fix the problem; I have the same issue when I remove it.
nvm install 0.10.25
nvm use 0.10.25
npm uninstall grunt -g
npm install grunt-cli -g
npm install grunt@0.4.5 --save-dev
npm install -g grunt-cli
npm install --save-dev
grunt build-prod --stack --verbose --debug
This is what is shown in the log file after the node & grunt install bits:
output Loading "Gruntfile.js" tasks...OK
output + build-dev, build-prod, watch-dev
output Running tasks: build-prod
output Running "build-prod" task
output [D] Task source: /source/Gruntfile.js
output Running "concat" task
output [D] Task source: /source/node_modules/grunt-contrib-concat/tasks/concat.js
output Running "concat:dist" (concat) task
output [D] Task source: /source/node_modules/grunt-contrib-concat/tasks/concat.js
output Verifying property concat.dist exists in config...OK
output Files: js/vendor/jquery.slicknav.js, js/vendor/swiper.js, js/app/centre-events-boxes.js, js/app/centre-footer.js, js/app/club.move-nav.js, js/app/club.social-link-position.js, js/app/func.stick-to-top.js, js/app/home.move-nav.js, js/app/home.stick-to-top.js, js/app/match-event-box-height.js, js/app/slicknav.js, js/app/swiperjs-slider.js -> src/js/main.js
output Options: separator="\n", banner="", footer="", stripBanners, process=false, sourceMap, sourceMapName="src/js/jsMap", sourceMapStyle="embed"
output Reading js/vendor/jquery.slicknav.js...OK
output Reading js/vendor/swiper.js...OK
output Reading js/app/centre-events-boxes.js...OK
output Reading js/app/centre-footer.js...OK
output Reading js/app/club.move-nav.js...OK
output Reading js/app/club.social-link-position.js...OK
output Reading js/app/func.stick-to-top.js...OK
output Reading js/app/home.move-nav.js...OK
output Reading js/app/home.stick-to-top.js...OK
output Reading js/app/match-event-box-height.js...OK
output Reading js/app/slicknav.js...OK
output Reading js/app/swiperjs-slider.js...OK
output Writing src/js/jsMap...OK
output Source map src/js/jsMap created.
output Writing src/js/main.js...OK
output File src/js/main.js created.
output Running "uglify:production" (uglify) task
output [D] Task source: /source/node_modules/grunt-contrib-uglify/tasks/uglify.js
output Verifying property uglify.production exists in config...OK
output Files: src/js/main.js -> public/js/main.js
output Options: banner="", footer="", compress={"warnings":false}, mangle={}, beautify=false, report="min", expression=false, maxLineLen=32000, ASCIIOnly=false, screwIE8=false, quoteStyle=0, sourceMap, sourceMapIncludeSources, sourceMapIn="src/js/jsMap"
output Minifying with UglifyJS...Reading src/js/jsMap...OK
output Parsing src/js/jsMap...OK
output Reading src/js/main.js...OK
output OK
output Writing public/js/main.js...OK
output Writing public/js/main.js.map...OK
output File public/js/main.js.map created (source map).
output File public/js/main.js created: 192.88 kB → 77.01 kB
output >> 1 sourcemap created.
output >> 1 file created.
output Running "sass:production" (sass) task
output [D] Task source: /source/node_modules/grunt-contrib-sass/tasks/sass.js
output Verifying property sass.production exists in config...OK
output Files: sass/main.scss -> public/css/main.css
output Options: style="compressed", noCache
output Command: sass sass/main.scss public/css/main.css --style=compressed --no-cache
output Errno::EISDIR: Is a directory @ rb_sysopen - public/css/main.css
output Use --trace for backtrace.
output Warning: Exited with error code 1 Use --force to continue.
output Aborted due to warnings.
I've been trying to fix this for days and I'm out of ideas. I've tried contacting their support too.
It turns out, after contacting their support team multiple times, that the problem was on their end (something to do with a caching mechanism, I think). There was nothing I could do to solve it without their support.

symfony2.3 spool monolog error email in command

I know this question was already asked in this post: Send email when error occurs in console command of Symfony2 app, but the answers do not provide a complete solution to the problem at hand and I can't comment on the original post.
I need to send a Monolog error email from a console command. The e-mail is correctly enqueued when using a file spooler; unfortunately I'm forced to use a memory spool.
Strangely enough, the code snippet provided to manually flush the spool works for emails generated in my own code, but not for Monolog's.
Does anybody know why this is happening and whether it would be possible to use a memory spool?
config.yml:
# Swiftmailer Configuration
swiftmailer:
    transport: %mailer_transport%
    host: %mailer_host%
    username: %mailer_user%
    password: %mailer_password%
    spool: { type: memory }

# Monolog Configuration
monolog:
    channels: ["account.create"]
    handlers:
        account.create.group:
            type: group
            members: [account.create.streamed, account.create.buffered]
            channels: [account.create]
        account.create.streamed:
            type: stream
            path: %kernel.logs_dir%/accounts_creation.log
            level: info
        account.create.buffered:
            type: buffer
            handler: account.create.swift
        account.create.swift:
            type: swift_mailer
            from_email: xxx@yyy.com
            to_email: aaa@gmail.com
            subject: 'An Error Occurred while managing zzz!'
            level: critical
config_prod.yml:
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
        nested:
            type: stream
            path: %kernel.logs_dir%/%kernel.environment%.log
            level: debug
            channels: [!account.create]
usage example:
try
{
    //code that could block
}
catch(ManageUserBlockingExceptionInterface $e)
{
    $exitCode = self::EXIT_CODE_ERROR_BLOCKING;
    //blocking exceptions are logged; the message is not acknowledged
    //as consumed, but the queue is terminated
    if(!\is_null($this->logger))
    {
        $this->logger->crit($e->getMessage());
    }
}
The logger is injected into the service via dependency injection:
...
<argument type="service" id="monolog.logger.account.create" on-invalid="null" />
...
and it works: critical errors are streamed to the file log, and the email is also created if Swift Mailer is configured with a file spool.
Finally, the code to manually flush the memory spool is as follows:
protected function flushMailSpool()
{
    $mailer = $this->container->get('mailer');
    $spool = $mailer->getTransport()->getSpool();
    $transport = $this->container->get('swiftmailer.transport.real');
    $spool->flushQueue($transport);
}
It is called immediately after a service purposely sends an email. I noticed that the same code, put in the command and adapted to the command environment (i.e. $this->container becomes $this->getContainer()), does not work, maybe due to a scope change?