I am trying to start an ipyparallel cluster using MPI.
The ipcluster_config.py file has the following lines modified:
c.MPILauncher.mpi_cmd = ['mpiexec']
c.MPIControllerLauncher.controller_args = ['--ip=*']
c.MPILauncher.mpi_args = ["-machinefile", "~/mpi_hosts"]
The ipcontroller_config.py is configured as such:
c.HubFactory.engine_ip = '*'
c.HubFactory.ip = '*'
c.HubFactory.client_ip = '*'
However, when I launch the cluster using the command
ipcluster start --profile mpi -n 2
it fails with the following message:
Engines shutdown early, they probably failed to connect.
You can set this by adding "--ip='*'" to your ControllerLauncher.controller_args
Not sure how to debug further.
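One sanity check I can think of (assuming mpiexec and ~/mpi_hosts are set up as above) would be to confirm that MPI itself can launch on the listed hosts, and to rerun the cluster with debug logging so the controller and engine logs show the connection attempts:
# Verify MPI can reach the hosts in the machinefile
mpiexec -machinefile ~/mpi_hosts -n 2 hostname
# Rerun with debug logging; logs end up under the profile directory
# (e.g. ~/.ipython/profile_mpi/log/)
ipcluster start --profile=mpi -n 2 --debug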
We are using Chef to manage our infrastructure, and I'm running into an issue where the Splunk TA (Add-on for Kafka) simply refuses to acknowledge that I've dropped a kafka_credentials.conf file in the local directory of the plugin. If I use the "Web UI", it generates an entry properly and it shows up in the add-on configuration.
[root@ip-10-14-1-42 local]# ls
app.conf inputs.conf kafka.conf kafka_credentials.conf
[root@ip-10-14-1-42 local]# grep -nr "" *.conf
app.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
app.conf:2:[install]
app.conf:3:is_configured = 1
inputs.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
inputs.conf:2:[kafka_mod]
inputs.conf:3:interval = 60
inputs.conf:4:start_by_shell = false
inputs.conf:5:
inputs.conf:6:[kafka_mod://my_app]
inputs.conf:7:kafka_cluster = default
inputs.conf:8:kafka_topic = log-my_app
inputs.conf:9:kafka_topic_group = my_app
inputs.conf:10:kafka_partition_offset = earliest
inputs.conf:11:index = main
kafka.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
kafka.conf:2:[global_settings]
kafka.conf:3:log_level = INFO
kafka.conf:4:index = main
kafka.conf:5:use_kv_store = 0
kafka.conf:6:use_multiprocess_consumer = 1
kafka.conf:7:fetch_message_max_bytes = 1048576
kafka_credentials.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
kafka_credentials.conf:2:[default]
kafka_credentials.conf:3:kafka_brokers = 10.14.2.164:9092,10.14.2.194:9092
kafka_credentials.conf:4:kafka_partition_offset = earliest
kafka_credentials.conf:5:index = main
Upon restarting Splunk, the add-on is installed, and even the input is created under the Inputs section, but the cluster itself is "not available", and when examining the logs I see this:
2017-08-09 01:40:25,442 INFO pid=29212 tid=MainThread file=kafka_mod.py:main:168 | Start Kafka
2017-08-09 01:40:30,508 INFO pid=29212 tid=MainThread file=kafka_config.py:_get_kafka_clusters:228 | Clusters: {}
2017-08-09 01:40:30,509 INFO pid=29212 tid=MainThread file=kafka_config.py:__init__:188 | No Kafka cluster are configured
It seems like this plugin is only respecting clusters created through the WebUI. That is not going to work as we want to be able to fully configure this through Chef. Short of hacking the REST API, and fudging around with the .py files in the addon directory and forcing a dictionary in, what are my options?
Wondering if anyone has encountered this before.
If I had to guess, it is silently rejecting the files because # is not traditionally used for comments in INI files. Try a ; instead.
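For example, a variant of the credentials file above using ;-style comments (same settings, only the comment marker changed) would look like:
; MANAGED BY CHEF. PLEASE DO NOT MODIFY!
[default]
kafka_brokers = 10.14.2.164:9092,10.14.2.194:9092
kafka_partition_offset = earliest
index = main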
I've recently installed Icinga2 on a bunch of Ubuntu LXC containers. I have a master node where you can log into icingaweb to check status.
However, the load thresholds seem low and I cannot see how or even where to adjust the parameters. Could someone point me in the right direction? Is this done on the master or on the remote nodes? Which file is it, and where does it sit in the file structure?
I installed Icinga2 on an Ubuntu 16.04 server from the Icinga2 PPA.
Create a service definition for load on the master:
apply Service "load" {
  import "generic-service"
  check_command = "load"
  vars.load_wload1 = 5
  vars.load_wload5 = 4
  vars.load_wload15 = 3
  vars.load_cload1 = 10
  vars.load_cload5 = 6
  vars.load_cload15 = 4
  command_endpoint = host.address
  assign where host.name == "monitored client"
}
More info here
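A minimal follow-up sketch, assuming the definition above is saved somewhere Icinga2 reads it (e.g. under /etc/icinga2/conf.d/ or the appropriate zones.d directory in a distributed setup): validate the configuration and reload the daemon so the new thresholds take effect.
# Check the configuration for errors before reloading
icinga2 daemon -C
# Reload Icinga2 so the new service definition is picked up
systemctl restart icinga2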
I am trying to run celeryd + redis in my setup.
CELERYD_NODES="worker1"
CELERYD_NODES="worker1 worker2 worker3"
CELERY_BIN="/home/snijsure/.virtualenvs/mtest/bin/celery"
CELERYD_CHDIR="/home/snijsure/work/mytest/"
CELERYD_OPTS="--time-limit=300 --concurrency=8"
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1
export DJANGO_SETTINGS_MODULE="analytics.settings.local"
I have the following in my base.py:
BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
BROKER_HOST = "localhost"
BROKER_BACKEND="redis"
REDIS_PORT=6379
REDIS_HOST = "localhost"
BROKER_USER = ""
BROKER_PASSWORD =""
BROKER_VHOST = "0"
REDIS_DB = 0
REDIS_CONNECT_RETRY = True
CELERY_SEND_EVENTS=True
CELERY_RESULT_BACKEND='redis'
CELERY_TASK_RESULT_EXPIRES = 10
CELERYBEAT_SCHEDULER="djcelery.schedulers.DatabaseScheduler"
CELERY_ALWAYS_EAGER = False
import djcelery
djcelery.setup_loader()
However, when I start celeryd using /etc/init.d/celeryd start
I see the following messages in my log files:
[2014-08-14 23:16:41,430: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 32.00 seconds...
It seems like it's trying to connect to amqp. Any ideas on why that is? I have followed the procedure outlined here:
http://celery.readthedocs.org/en/latest/getting-started/brokers/redis.html
I am running version 3.1.13 (Cipater)
What am I doing wrong?
-Subodh
How do you start your celery worker? I encountered this error once because I didn't start it correctly. You should add the -A option when executing "celery worker" so that celery connects to the broker you configured in your Celery object; otherwise celery will try to connect to the default broker.
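For example (assuming your project package is analytics, as the DJANGO_SETTINGS_MODULE above suggests), starting the worker directly would look something like:
# Point the worker at your app so it picks up BROKER_URL from your settings
# instead of falling back to the default amqp broker.
celery -A analytics worker --loglevel=info
With the generic init script, the equivalent is setting CELERY_APP="analytics" in /etc/default/celeryd.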
Your /etc/default/celeryd file looks ok.
You are using djcelery, however. I'd recommend you drop that. If you look at the Django setup guide and example project you will notice that there are no longer any INSTALLED_APPS required for celery. It appears that djcelery is now only recommended if you want to use the Django SQL database as a backend.
https://github.com/celery/celery/tree/3.1/examples/django/
http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html#using-celery-with-django
I've just rebuilt against that pattern and I can confirm that it works ok, at least in terms of connecting to Redis rather than trying to use RabbitMQ (amqp).
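For reference, that pattern boils down to a small celery.py module next to your settings package (a sketch; module and app names here mirror the analytics project above and may need adjusting):
# analytics/celery.py -- per the Celery 3.1 "first steps with Django" guide
from __future__ import absolute_import

import os

from celery import Celery
from django.conf import settings

# Make sure the Django settings (which define BROKER_URL etc.) are loaded.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'analytics.settings.local')

app = Celery('analytics')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)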
I'm trying to set up an application web server using uWSGI + Nginx, which runs a Flask application using SQLAlchemy to communicate with a Postgres database.
When I make requests to the webserver, every other response will be a 500 error.
The error is:
Traceback (most recent call last):
File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
context)
File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac
The above exception was the direct cause of the following exception:
sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
The error is triggered by a simple Flask-SQLAlchemy method:
result = models.Event.query.get(id)
uwsgi is being managed by supervisor, which has a config:
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
and uwsgi's config looks like:
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
The furthest I can get is that it has something to do with uWSGI's forking, but beyond that I'm not clear on what needs to be done.
The issue ended up being uWSGI's forking.
When working with multiple processes and a master process, uWSGI initializes the application in the master process and then copies the application over to each worker process. The problem is that if you open a database connection when initializing your application, you then have multiple processes sharing the same connection, which causes the error above.
The solution is to set the lazy configuration option for uwsgi, which forces a complete loading of the application in each process:
lazy
Set lazy mode (load apps in workers instead of master).
This option may have memory usage implications as Copy-on-Write semantics can not be used. When lazy is enabled, only workers will be reloaded by uWSGI’s reload signals; the master will remain alive. As such, uWSGI configuration changes are not picked up on reload by the master.
There's also a lazy-apps option:
lazy-apps
Load apps in each worker instead of the master.
This option may have memory usage implications as Copy-on-Write semantics can not be used. Unlike lazy, this only affects the way applications are loaded, not master’s behavior on reload.
This uwsgi configuration ended up working for me:
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
# the fix
lazy = true
lazy-apps = true
As an alternative, you might dispose of the engine. This is how I solved the problem.
Such issues may happen if there is a query during the creation of the app, that is, in the module that creates the app itself. If that happens, the engine allocates a pool of connections and then uWSGI forks.
By invoking engine.dispose(), the connection pool itself is closed and new connections will come up as soon as someone starts making queries again. So if you do that at the end of the module where you create your app, new connections will be created after the uWSGI fork.
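A minimal sketch of that, assuming a module-level engine created with create_engine() (the connection URL is hypothetical):
# app/__init__.py (sketch) -- dispose of the pool after any import-time queries
from sqlalchemy import create_engine

engine = create_engine('postgresql://user:password@localhost/mydb')  # hypothetical URL

# ... create the Flask app, run any setup queries, etc. ...

# Close the connections opened so far; each uWSGI worker opens fresh ones
# after the fork instead of sharing the parent's sockets.
engine.dispose()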
I am running a Flask app using gunicorn on Heroku. My application started exhibiting this problem when I added the --preload option to my Procfile. When I removed that option, my application resumed functioning normally.
Not sure whether to add this as an answer to this question or ask a separate question and put this as an answer there. I was getting this exact same error for reasons that are slightly different from the people who have posted and answered. In my setup, I was using gunicorn as a WSGI server for a Flask application, and I was offloading some intense database operations to a celery worker. The error would come from the celery worker.
From reading a lot of the answers here and looking at the psycopg2 and SQLAlchemy session documentation, it became apparent to me that it is a bad idea to share an SQLAlchemy session between separate processes (the gunicorn worker and the celery worker in my case).
What ended up solving this for me was creating a new session in the celery worker function, so it used a fresh session each time it was called, and also destroying the session after every web request, so Flask used a session per request. The overall solution looked like this:
Flask_app.py
@app.teardown_appcontext
def shutdown_session(exception=None):
    session.close()
celery_func.py
@celery_app.task(bind=True, throws=(IntegrityError))
def access_db(self, entity_dict, tablename):
    with Session() as session:
        try:
            session.add(ORM_obj)
            session.commit()
        except IntegrityError as e:
            session.rollback()
            print('primary key violated')
            raise e
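For context, Session here is presumably a sessionmaker bound to its own engine; a sketch of what that might look like (connection URL hypothetical, not the poster's actual code):
# celery_func.py (sketch) -- the Session factory the task above relies on
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Each celery worker process builds its own engine and connection pool,
# so no connection is inherited from the process that forked it.
engine = create_engine('postgresql://user:password@localhost/mydb')  # hypothetical URL
Session = sessionmaker(bind=engine)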
I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers, I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (suposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (suposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is, what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the memcached shared area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obviously named one when I do a ps.
I tried to get pylibmc installed on my VM but was unable to find a working installation, so for now I will see if I can get the above working first.
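For what it's worth, this is how I would expect to check for the daemon on CentOS (assuming the stock init script from the yum package):
# Is the memcached init script reporting a running daemon?
service memcached status
# Is anything actually listening on the default memcached port?
netstat -tlnp | grep 11211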
I discovered that if I run it straight from the Python console GUI I get a bit more output if I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try, per the example, to use telnet to connect to the port, I get a connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/in.telnetd
    log_on_failure  += USERID
    disable         = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran both cases (iptables started and stopped) but it had no effect, so I am out of ideas. What do I need to do so that the port will be allowed, if that is the problem?
Or is there a memcached service that needs to be running that opens up the port?
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement. It's apparently a listen option. Do this for steps 2 & 3. I'm not certain which file is actually used at runtime.
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
I tried opening a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is. I'm not running Apache or anything like that, so perhaps it's not relevant in my case. Perhaps I would have to supply a ?key=blah or something.
5) http://127.0.0.1:11211
6) Now it should be ready to go. If one runs the test shown with the following, it should work; at least it did for me. Doing help(memcache) will display a simple program; just paste that in and it should work just fine.
[~]$ python
>>> import memcache
>>> help(memcache)
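Once the daemon is up, the pasted example should behave roughly like this (a sketch; get_stats() is part of python-memcached and should no longer come back empty):
import memcache

mc = memcache.Client(['127.0.0.1:11211'], debug=1)
mc.set('test', 'value')
print mc.get('test')       # expect: value
print mc.get_stats()       # expect: one (server, stats) entry, not an empty list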