I have a problem starting and working with Sphinx.
I was able to run indexer --all, but now I want to search the index, and I keep getting this error when I run searchd --status:
WARNING: failed to connect to 127.0.0.1:9312: Connection refused
WARNING: failed to connect to 0.0.0.0:9306: Connection refused
FATAL: failed to connect to daemon: please specify listen with sphinx protocol in your config file
Sphinx's query() returns false, which I guess is related to the connection problem.
Here's the relevant part of my .conf file:
searchd
{
listen = 127.0.0.1:9312
listen = 9306:sphinx
listen = 2471:mysql41
log = /var/log/sphinx/searchd.log
query_log = /var/log/sphinx/query.log
max_matches = 1000
read_timeout = 5
max_children = 30
pid_file = /var/run/sphinx/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
workers = threads # for RT to work
binlog_path = /var/lib/sphinx
}
What am I missing in the configuration of the listening ports?
As noted in the comments, this indicates the searchd daemon is not actually running.
You can try starting the daemon by running searchd directly (and later searchd --stop to stop it), which can surface errors you might not see when starting it via service/init.d (because if the log file itself is not functional, there is nowhere for the errors to go :).
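For example (the config path is an assumption; point it at wherever your sphinx.conf actually lives):
searchd --config /etc/sphinx/sphinx.conf --console   # run in the foreground; errors print straight to the terminal
searchd --config /etc/sphinx/sphinx.conf --stop      # stop the daemon when you are done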
I am currently developing with Highcharts in a Windows/Eclipse/Jetty/Java environment.
I have the following highcharts-convert.properties:
#### phantomjs properties ####
# the host and port phantomjs listens to
host = 127.0.0.1
port = 7777
# location of the phantomjs executable
exec = phantomjs
# name of the convert script used by phantomjs
script = highcharts-convert.js
#### connect properties used to connect with phantomjs running as HTTP-server ####
# all values in milliseconds
# specifies the timeout when reading from phantomjs when a connection is established
readTimeout = 6000
# timeout to be used when opening a communications link to the phantomjs server
connectTimeout = 500
# the whole request to the phantomjs server is scheduled; the total timeout can last up to this value
maxTimeout = 6500
#### Pool properties ####
# number of phantomjs servers you can run in the pool.
poolSize = 6
# The pool is implemented as a BlockingQueue.
maxWait = 500
# Keep files in the temp folder for a certain retentionTime, defined in milliseconds
retentionTime = 30000
I have to constantly start, stop, and restart Jetty within Eclipse. I notice that each time I start Jetty, six Windows processes called "phantomjs.exe *32" are started (matching poolSize = 6 above). However, when I stop Jetty, these processes do not disappear, which leads to more and more "phantomjs.exe *32" processes accumulating on my machine.
How can I fix this problem?
I run two web apps on one machine and one database on another machine (both apps use the same DB).
One runs fine, but the other always goes down after about 4 hours.
Here is the error information:
Error 2014-11-03 13:31:05,902 [http-bio-8080-exec-7] ERROR spi.SqlExceptionHelper - An I/O error occured while sending to the backend.
| Error 2014-11-03 13:31:05,904 [http-bio-8080-exec-7] ERROR spi.SqlExceptionHelper - This connection has been closed.
Postgresql logs:
2014-10-26 23:41:31 CDT WARNING: pgstat wait timeout
2014-10-27 01:13:48 CDT WARNING: pgstat wait timeout
2014-10-27 03:55:46 CDT LOG: could not receive data from client: Connection timed out
2014-10-27 03:55:46 CDT LOG: unexpected EOF on client connection
What is causing this problem: the app, the database, or the network?
Reason:
At this point it was clear that the TCP connection that was sitting idle was already broken, but our app still assumed it to be open. By idle connections, I mean connections in the pool that aren’t in active use at the moment by the application.
After some search, I came to the conclusion that the network firewall between my app and the database is dropping the idle/stale connections after 1 hour. It seemed to be a common problem that many people have faced.
Solution:
In Grails, you can set this in DataSource.groovy:
environments {
    development {
        dataSource {
            // configure DBCP (the Apache Commons connection pool used by Grails)
            properties {
                maxActive = 50
                maxIdle = 25
                minIdle = 1
                initialSize = 1
                // evict connections idle for more than a minute, checking every minute,
                // so stale connections are dropped long before the firewall's 1-hour cutoff
                minEvictableIdleTimeMillis = 60000
                timeBetweenEvictionRunsMillis = 60000
                numTestsPerEvictionRun = 3
                maxWait = 10000
                // validate connections with SELECT 1 so a dead one is never handed to the app
                testOnBorrow = true
                testWhileIdle = true
                testOnReturn = false
                validationQuery = "SELECT 1"
            }
        }
    }
}
I am trying to run celeryd + redis in my setup. Here is my /etc/default/celeryd:
CELERYD_NODES="worker1"
CELERYD_NODES="worker1 worker2 worker3"
CELERY_BIN="/home/snijsure/.virtualenvs/mtest/bin/celery"
CELERYD_CHDIR="/home/snijsure/work/mytest/"
CELERYD_OPTS="--time-limit=300 --concurrency=8"
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1
export DJANGO_SETTINGS_MODULE="analytics.settings.local"
I have the following in my base.py:
BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
BROKER_HOST = "localhost"
BROKER_BACKEND="redis"
REDIS_PORT=6379
REDIS_HOST = "localhost"
BROKER_USER = ""
BROKER_PASSWORD =""
BROKER_VHOST = "0"
REDIS_DB = 0
REDIS_CONNECT_RETRY = True
CELERY_SEND_EVENTS=True
CELERY_RESULT_BACKEND='redis'
CELERY_TASK_RESULT_EXPIRES = 10
CELERYBEAT_SCHEDULER="djcelery.schedulers.DatabaseScheduler"
CELERY_ALWAYS_EAGER = False
import djcelery
djcelery.setup_loader()
However, when I start celeryd using /etc/init.d/celeryd start, I see the following messages in my log files:
[2014-08-14 23:16:41,430: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 32.00 seconds...
It seems like it's trying to connect to amqp. Any ideas why? I have followed the procedure outlined here:
http://celery.readthedocs.org/en/latest/getting-started/brokers/redis.html
I am running version 3.1.13 (Cipater).
What am I doing wrong?
-Subodh
How do you start your celery worker? I encountered this error once because I didn't start it correctly. You should add the -A option when executing celery worker, so that celery connects to the broker you configured in your Celery object; otherwise celery tries to connect to the default broker (amqp on localhost), as shown below.
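For example, a minimal sketch (analytics is assumed to be the package holding your Celery app object, based on the DJANGO_SETTINGS_MODULE above):
celery -A analytics worker --loglevel=info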
Your /etc/default/celeryd file looks OK.
You are using djcelery, however, and I'd recommend you drop it. If you look at the Django setup guide and example project, you will notice that there are no longer any INSTALLED_APPS required for Celery. It appears that djcelery is now only recommended if you want to use the Django SQL database as a backend.
https://github.com/celery/celery/tree/3.1/examples/django/
http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html#using-celery-with-django
I've just rebuilt against that pattern and I can confirm that it works ok, at least in terms of connecting to Redis rather than trying to use RabbitMQ (amqp).
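For reference, a minimal sketch of that pattern for this project (the file location and the analytics package name are assumptions based on the settings module above; it mirrors the Celery 3.1 Django example linked above):
# analytics/celery.py
from __future__ import absolute_import

import os

from celery import Celery

# make sure Django settings are loaded before the app is configured
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'analytics.settings.local')

from django.conf import settings

app = Celery('analytics')

# read BROKER_URL / CELERY_* options from the Django settings,
# so the worker connects to redis://localhost:6379/0 instead of the amqp default
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Then, in analytics/__init__.py, import the app (from .celery import app as celery_app) so it is loaded whenever Django starts.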
I tried a simple test with memcached from Jelastic and I always get a "Connection refused" exception, but the URL is correct. Is some additional configuration needed?
MemcachedClient c = new MemcachedClient(
new InetSocketAddress("memcached-myexample.jelastic.dogado.eu", 11211));
c.set("someKey", 3600, user);
User cachedUser = (User) c.get("someKey");
Here is the exception:
2014-01-02 00:07:41.820 INFO net.spy.memcached.MemcachedConnection: Added {QA sa=memcached-myexample.jelastic.dogado.eu/92.51.168.106:11211, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2014-01-02 00:07:41.833 WARN net.spy.memcached.MemcachedConnection: Could not redistribute to another node, retrying primary node for someKey.
2014-01-02 00:07:41.835 WARN net.spy.memcached.MemcachedConnection: Could not redistribute to another node, retrying primary node for someKey.
2014-01-02 00:07:41.858 INFO net.spy.memcached.MemcachedConnection: Connection state changed for sun.nio.ch.SelectionKeyImpl@2dc1482f
2014-01-02 00:07:41.859 INFO net.spy.memcached.MemcachedConnection: Reconnecting due to failure to connect to {QA sa=memcached-myexample.jelastic.dogado.eu/92.51.168.106:11211, #Rops=0, #Wops=2, #iq=0, topRop=null, topWop=Cmd: set Key: someKey Flags: 1 Exp: 3600 Data Length: 149, toWrite=0, interested=0}
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:629)
at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:409)
at net.spy.memcached.MemcachedConnection.run(MemcachedConnection.java:1334)
I would try to telnet to your memcached cluster in order to rule out a firewall issue. You can do that with the following command.
telnet memcached-myexample.jelastic.dogado.eu 11211
If that doesn't work, then you have network issues. In that case I would first check whether you have a firewall up, for example as shown below.
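On a typical Linux box you can check for a local firewall with something like the following (commands assume an iptables-based firewall):
service iptables status   # is the firewall service running?
iptables -L -n            # list the active rules and look for anything blocking port 11211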
Add int portNum = 11211; first, and try again:
int portNum = 11211;
MemcachedClient c = new MemcachedClient(
new InetSocketAddress("memcached-myexample.jelastic.dogado.eu", portNum));
// Store a value (async) for one hour
c.set("someKey", 3600, someObject);
// Retrieve a value (synchronously).
Object myObject=c.get("someKey");
Thanks, but the error was due to a firewall rule at the provider, so it was not my fault.
Check the /etc/memcached.conf file and update the server IP address from which you want to access the cache, as sketched below.
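For example, on Debian-style installs the listen address is the -l line (the 0.0.0.0 value below is only an illustration; bind only to interfaces you trust):
# /etc/memcached.conf
-p 11211      # port memcached listens on
-l 0.0.0.0    # listen on all interfaces instead of only 127.0.0.1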
I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers, I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (supposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (supposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is, what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the memcached shared area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obvious one when I do a ps.
I tried to get pylibmc installed on my VM but was unable to find a working installation, so for now I will see if I can get the above working first.
I discovered that if I run straight from the interactive Python console I get a bit more output if I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try to telnet to the port, as in the example above, I get a connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so that localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran it both ways (iptables started and stopped), but it has no effect, so I am out of ideas. What do I need to do so that the port will be allowed, if that is the problem?
Or is there a memcached service that needs to be running in order to open up the port?
Well, this is what it took to get it working (a series of manual steps):
1) Create the missing runtime directory:
su -
cd /var/run
mkdir memcached    # this directory was missing
In each memcached file below I added "-l 127.0.0.1" to the OPTIONS statement; it's apparently a listen option. Do this for steps 2 and 3 (an example OPTIONS line is shown after step 3). I'm not certain which file is actually used at runtime.
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
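For reference, after the edit the /etc/sysconfig/memcached file might look like this (the values other than OPTIONS are the stock CentOS defaults, listed here as an assumption):
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"    # the added listen option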
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
5) I tried opening http://127.0.0.1:11211 in a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is. I'm not running Apache or anything like that, so perhaps it's not relevant to my case. Perhaps I would have to supply a ?key=blah or something. (memcached speaks its own text protocol rather than HTTP, so a browser check is inconclusive anyway.)
6) Now it should be ready to go. If you run the test shown below it should work; at least it did for me. help(memcache) displays a simple example program: just paste it in and it should work fine.
[~]$ python
>>> import memcache
>>> help(memcache)
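For completeness, a quick interactive check of the kind that example program performs (the outputs shown are what a healthy setup should print):
>>> import memcache
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.set('test', 'value')   # returns True now that the daemon is reachable
True
>>> mc.get('test')            # and the stored value comes back
'value'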