Celery worker with gevent pool + Sentry logger = hang - celery

I'm using Celery with Django integration. After a recent commit to my current project I ran into trouble: a Celery worker with the gevent pool refused to handle new tasks. After a short investigation, I found that the 'sentry' log handler causes the problem:
settings.py:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s '
                      '%(process)d %(thread)d %(message)s'
        },
        'gunicorn_style': {
            'format': CELERYD_TASK_LOG_FORMAT,
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'gunicorn_style'
        },
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'sentry': {
            'level': 'WARNING',
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'raven': {
            'level': 'INFO',
            'handlers': ['console'],
            'propagate': False,
        },
        'sentry.errors': {
            'level': 'INFO',
            'handlers': ['console'],
            'propagate': False,
        },
    }
}
...
# I want verbose logs only for my apps
for i in MY_APPS:
    LOGGING['loggers'][i] = {
        'handlers': ['console', 'sentry'],
        'level': CONSOLE_LOGLEVEL,
        'propagate': False,
    }
LOGGING['loggers']['celery'] = {
    'handlers': ['sentry'],
    'level': CONSOLE_LOGLEVEL,
    'propagate': True,
}
...
With 'handlers': ['console'] everything works fine, but when I add the 'sentry' handler the celery+gevent worker starts behaving as follows: it takes N tasks from the broker, where N is the concurrency level, and then stops.
I run the celery worker with this command:
python manage.py celery worker -Q celery_gevent -P gevent -c 20
Note: the deadlock shows up with concurrency >= 3
$ pip freeze
Django==1.5
Fabric==1.6.0
South==0.7.6
amqp==1.0.9
anyjson==0.3.3
argparse==1.2.1
billiard==2.7.3.22
celery==3.0.16
cssselect==0.8
distribute==0.6.24
django-appconf==0.6
django-celery==3.0.11
django-geoip==0.3
django-nose==1.1
django-redis==3.2
flower==0.5.0
gevent==0.13.8
greenlet==0.4.0
gunicorn==0.17.2
ipython==0.13.1
kombu==2.5.7
logilab-astng==0.24.2
logilab-common==0.59.0
lxml==3.1.1
nose==1.2.1
paramiko==1.10.0
progressbar==2.3dev
psycopg2==2.4.6
pycrypto==2.6
pylint==0.27.0
pymongo==2.4.2
python-dateutil==1.5
pytz==2013b
raven==3.2.0
redis==2.7.2
requests==1.0.4
six==1.3.0
tornado==3.0.1
wsgiref==0.1.2
I'm using RabbitMQ as the broker and Redis as the result backend.
Thank you.
P.S. Sync Celery workers work fine with any configuration.

David Cramer proposed using gevent+http as the transport for raven, and it seems to work (https://github.com/getsentry/raven-python/issues/305)
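For reference, a minimal sketch of that workaround, assuming the transport can be selected through the DSN scheme as described in the linked issue; the keys, host and project id below are placeholders:

# settings.py -- prefix the DSN scheme with 'gevent+' so raven sends events
# through its gevented HTTP transport instead of the blocking one.
# public_key, secret_key, host and project id are placeholders.
SENTRY_DSN = 'gevent+http://public_key:secret_key@sentry.example.com/1'
# newer raven releases read the same value from RAVEN_CONFIG:
# RAVEN_CONFIG = {'dsn': SENTRY_DSN}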

Related

What should the publicPath in webpack config be for a dynamic port?

I'm currently building a microfrontend using webpack's Module Federation; however, when I create a deployment in Kubernetes it doesn't resolve because of an incorrect publicPath. This is still a bit complex to me, and I'm not sure what to set the publicPath to, as the localhost port keeps changing with every deployment.
So it looks like http://127.0.0.1:TUNNEL_PORT, where TUNNEL_PORT is dynamic. How would I account for this when defining my output.publicPath?
Webpack.config.js
const HtmlWebPackPlugin = require("html-webpack-plugin");
const ModuleFederationPlugin = require("webpack/lib/container/ModuleFederationPlugin");
const deps = require("./package.json").dependencies;

module.exports = {
  output: {
    publicPath: "http://localhost:3000/",
    // publicPath: 'auto',
  },
  resolve: {
    extensions: [".tsx", ".ts", ".jsx", ".js", ".json"],
  },
  devServer: {
    port: 3000,
    historyApiFallback: true,
  },
  module: {
    rules: [
      {
        test: /\.m?js/,
        type: "javascript/auto",
        resolve: {
          fullySpecified: false,
        },
      },
      {
        test: /\.(css|s[ac]ss)$/i,
        use: ["style-loader", "css-loader", "postcss-loader"],
      },
      {
        test: /\.(ts|tsx|js|jsx)$/,
        exclude: /node_modules/,
        use: {
          loader: "babel-loader",
        },
      },
    ],
  },
  plugins: [
    new ModuleFederationPlugin({
      name: "microfrontend1",
      filename: "remoteEntry.js",
      remotes: {},
      exposes: {
        "./Header": "./src/Header.tsx",
        "./Footer": "./src/Footer.tsx",
      },
      shared: {
        ...deps,
        react: {
          singleton: true,
          eager: true,
          requiredVersion: deps.react,
        },
        "react-dom": {
          singleton: true,
          eager: true,
          requiredVersion: deps["react-dom"],
        },
      },
    }),
    new HtmlWebPackPlugin({
      template: "./src/index.html",
    }),
  ],
};
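Since the port isn't known at build time, one option (already hinted at by the commented-out line in the config above) is to let webpack 5 derive the public path at runtime from the URL the bundle was loaded from. A minimal sketch, assuming webpack 5:

// webpack.config.js -- 'auto' makes webpack compute the public path at
// runtime from the script's own URL, so the dynamic tunnel port never has
// to be hard-coded at build time.
module.exports = {
  output: {
    publicPath: "auto",
  },
  // ...rest of the configuration unchanged
};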

Headless Chrome testing breaks when running tests with geolocation or custom handlers

This is my current protractor config file setup:
const chrome = {
  browserName: 'chrome',
  unexpectedAlertBehaviour: 'accept',
  chromeOptions: {
    args: [
      '--use-fake-device-for-media-stream',
      '--use-fake-ui-for-media-stream',
      `--use-file-for-fake-audio-capture=${filesPath}/E2Eaudio.wav`
    ],
    prefs: {
      custom_handlers: {
        'enabled': true,
        'registered_protocol_handlers': [
          {
            'default': true,
            'protocol': 'tel',
            'title': '',
            'url': `${urls[this.params.cloud]}/?checksw=true&call=%s`
          }
        ]
      },
      profile: {
        managed_default_content_settings: {
          notifications: 1,
          geolocation: 1
        }
      },
      download: {
        // Code Here
      }
    }
  },
  loggingPrefs: {
    browser: 'ALL'
  },
  'goog:loggingPrefs': {
    browser: 'ALL'
  }
};
When running tests involving tel links or geolocation, the headless tests break, but when running the same tests non-headless there are no problems.
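One workaround for the geolocation part, sketched below under the assumption that the spec only needs a position to be returned (the coordinates and the hook it runs in, e.g. onPrepare, are illustrative), is to stub navigator.geolocation inside the page so headless runs don't depend on Chrome's permission handling:

// onPrepare (or a beforeEach) -- stub the page's geolocation API so headless
// runs don't rely on Chrome granting the permission. Coordinates are dummies.
browser.executeScript(function () {
  navigator.geolocation.getCurrentPosition = function (success) {
    success({
      coords: { latitude: 51.5, longitude: -0.12, accuracy: 1 },
      timestamp: Date.now()
    });
  };
});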

grunt-protractor-runner: Unable to pick scenarios

I'm trying to create Grunt tasks using grunt-protractor-runner with my protractor-cucumber framework. Below is what the Gruntfile.js looks like:
grunt.initConfig({
  protractor: {
    options: {
      //configFile: "./config/config.js",
      keepAlive: true,
      noColor: false,
    },
    chrome: {
      options: {
        configFile: "./config/config.js",
        args: {
          autoConnect: false,
          seleniumServerJar: './node_modules/webdriver-manager/selenium/selenium-server-standalone-3.141.59.jar',
          chromeDriver: './node_modules/webdriver-manager/selenium/chromedriver.exe',
          specs: [
            '../features/calendar.feature',
            '../features/deal.feature',
            '../features/entitlement.feature',
            '../features/filter.feature',
            '../features/product.feature'
          ],
          capabilities: {
            browserName: 'chrome',
            chromeOptions: {
              useAutomationExtension: false,
              args: ['--disable-gpu'],
            }
          }
        }
      }
    }
  },
});
grunt.registerTask('test', ['protractor:chrome']);
If I run the command grunt test, it opens the Chrome browser and then closes with the log below:
Running "protractor:chrome" (protractor) task
[17:22:57] I/launcher - Running 1 instances of WebDriver
[17:22:57] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
0 scenarios
0 steps
0m00.000s
This doesn't pick up any scenarios to run. Can you help me understand what the issue is here? My config.conf looks like this:
const Reporter = require('../support/Reporter.js');

exports.config = {
  seleniumAddress: 'http://localhost:4444/wd/hub',
  autoConnect: false,
  framework: 'custom',
  frameworkPath: require.resolve('protractor-cucumber-framework'),
  restartBrowserBetweenTests: false,
  SELENIUM_PROMISE_MANAGER: true,
  ignoreUncaughtExceptions: true,
  onPrepare: function () {
    browser.ignoreSynchronization = false;
    browser.manage().timeouts().setScriptTimeout(40 * 1000);
    browser.manage().timeouts().implicitlyWait(4 * 1000);
    browser.manage().window().maximize();
    require('babel-register');
  },
  cucumberOpts: {
    strict: true,
    format: ['json:./reports/json/cucumber_report.json'],
    require: ['../support/*.js', '../stepDefinitions/*.js', '../stepDefinitions/*.ts'],
    tags: 'not #Ignore', //(#CucumberScenario or #ProtractorScenario) and (not #Ignore)
    retry: 3
  },
  params: {
    env: 'test',
    test: {
      url: '',
      users: {
        BankerRO: '',
        BankerRW: '',
        BusinessRiskRW: '',
        RiskRW: '',
        RO: '',
      },
      db: {
        server: '',
        port: '',
        name: '',
        userId: '',
        password: '',
      }
    }
  },
  onComplete: function () {
    Reporter.moveReportToArchive();
    Reporter.createHTMLReport();
  }
};
I finally found the root cause: if I keep the specs[] part in the Gruntfile, grunt fails to pick up the scenarios, even from config.js. When I removed specs from the Gruntfile and kept it in config.js, it started working fine. I'm not sure whether this is how it's supposed to work or a potential bug in grunt-protractor-runner.
Conclusion: it looks like grunt-protractor-runner only looks for specs in the config.js file and ignores them if you keep them in Gruntfile.js.
I have raised this as an issue: https://github.com/teerapap/grunt-protractor-runner/issues/197#issue-537600108
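For completeness, a minimal sketch of the change that worked, assuming the feature files live under ../features (the glob is illustrative): keep the specs list in config.js rather than in the Gruntfile args.

// config.js -- declare the feature files here; grunt-protractor-runner
// appears to ignore a specs list passed through the Gruntfile args.
exports.config = {
  // ...existing options unchanged
  specs: [
    '../features/*.feature'
  ],
};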

How to get grunt serve task working alongside watch?

I've recently installed grunt-serve and got it up and running, but I can't seem to get it running concurrently with my watch task. In my Gruntfile, if I register the serve task before watch, the server spins up but the watch task doesn't run... and vice versa. This is the serve package I'm using, with my Gruntfile attached:
https://www.npmjs.com/package/grunt-serve
module.exports = function(grunt) {
  // 1. All configuration goes here
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    concat: {
      dist: {
        src: [
          'js/libs/*.js', // All JS in the libs folder
          'js/global.js'  // This specific file
        ],
        dest: 'js/build/production.js',
      }
    },
    uglify: {
      options: {
        mangle: false
      },
      my_target: {
        files: {
          'js/build/production.min.js': ['js/build/production.js']
        }
      }
    },
    imagemin: {
      dynamic: {
        files: [{
          expand: true,
          cwd: 'images/',
          src: ['**/*.{png,jpg,gif}'],
          dest: 'images/build/'
        }]
      }
    },
    sass: {
      //options: {
      //  style: 'compressed'
      //},
      dist: {
        files: [{
          expand: true,
          cwd: 'css',
          src: ['*.scss'],
          dest: 'css/build/',
          ext: '.css'
        }]
      }
    },
    serve: {
      options: {
        port: 9000
      }
    },
    watch: {
      options: {
        livereload: true,
      },
      css: {
        files: ['css/**/*.scss'],
        tasks: ['sass'],
        options: {
          spawn: false,
        }
      },
      scripts: {
        files: ['js/*.js'],
        tasks: ['concat', 'uglify'],
        options: {
          spawn: false,
        },
      }
    }
  });
  // Load all Grunt tasks automatically without having to enter them manually
  require('load-grunt-tasks')(grunt);
  grunt.registerTask(
    'default',
    [
      'concat',
      'uglify',
      'sass',
      'serve',
      'watch'
    ]
  );
};
Grunt serve and grunt watch are both blocking tasks. You can use a plugin like grunt-concurrent to run both at the same time in separate child processes. https://github.com/sindresorhus/grunt-concurrent
concurrent: {
  target1: ['serve', 'watch'],
}

// also update your default task
grunt.registerTask(
  'default',
  [
    'concat',
    'uglify',
    'sass',
    'concurrent:target1'
  ]
);
Additionally, you could also use grunt-concurrent to run your uglify and sass tasks in parallel, which may improve build time.
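A sketch of that idea, building on the concurrent target above (the target name 'build' is illustrative); uglify still needs concat's output, so concat runs first:

concurrent: {
  target1: ['serve', 'watch'],
  // these two steps don't depend on each other, so run them in parallel
  build: ['uglify', 'sass']
}

grunt.registerTask(
  'default',
  [
    'concat',
    'concurrent:build',
    'concurrent:target1'
  ]
);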

Can't connect to mongodb from mocha test

Connecting from the REPL works fine:
> var mongoose=require('mongoose');
undefined
> mongoose.connect('mongodb://localhost/test', function(error) {
... console.log( 'connected\n%s\n', error );
... });
returns:
{ connections:
   [ { base: [Circular],
       collections: {},
       models: {},
       replica: false,
       hosts: null,
       host: 'localhost',
       port: 27017,
       user: undefined,
       pass: undefined,
       name: 'test',
       options: [Object],
       _readyState: 2,
       _closeCalled: false,
       _hasOpened: false,
       _listening: false,
       _events: {},
       db: [Object] } ],
  plugins: [],
  models: {},
  modelSchemas: {},
  options: {} }
> connected # Yes!
undefined
But connecting from a Mocha test suite does not work:
var mongoose = require( 'mongoose' );
console.log( 'connecting...' );
mongoose.connect( 'mongodb://localhost/test', function( error ) {
  if ( error ) console.error( 'Error while connecting:\n%s\n', error );
  console.log( 'connected' );
});
returns:
$ mocha
connecting...
0 passing (0 ms)
Does anyone know why this is not working?
Do you have any tests in your suite? If not, it seems like Mocha exits before Mongoose gets a chance to connect. One of the features listed on the Mocha page is
auto-exit to prevent "hanging" with an active loop
which may have something to do with it. You could try connecting to Mongoose in a before hook of your test suite, e.g.:
describe('test suite', function() {
  before(function(done) {
    mongoose.connect('mongodb://localhost/test', function(error) {
      if (error) console.error('Error while connecting:\n%s\n', error);
      console.log('connected');
      done(error);
    });
  });
});