What am I doing wrong for Sphinx to fail to start during cap deploy?

I'm struggling to get Sphinx back up and running after deploying a rails app to my VPS.
Specifically, I'm thrown this error:
** [out :: myapp.com] => Mixing in Lockdown version: 1.6.4
** [out :: myapp.com]
** [out :: myapp.com] Failed to start searchd daemon. Check /var/www/myapp/releases/20100227224936/log/searchd.log.
** [out :: myapp.com] Failed to start searchd daemon. Check /var/www/myapp/releases/20100227224936/log/searchd.log
However, a log file isn't created!
This is the deploy.rb I am using (with thanks to Updrift :) )
namespace :deploy do
  desc "Restart the app"
  task :restart, :roles => :app do
    # This regen's the config file, stops Sphinx if running, then starts it.
    # No indexing is done, just a restart of the searchd daemon:
    #   thinking_sphinx.running_start
    # The above does not re-index. If any of your define_index blocks
    # in your models have changed, you will need to perform an index.
    # If these are changing frequently, you can use the following
    # in place of running_start:
    thinking_sphinx.stop
    thinking_sphinx.index
    thinking_sphinx.start

    # Restart the app
    run "touch #{current_path}/tmp/restart.txt"
  end

  desc "Cleanup older revisions"
  task :after_deploy do
    cleanup
  end
end
I'm using the Thinking Sphinx gem v1.3.16 and Passenger 2.2.10. Any thoughts you have would be greatly appreciated.
Many thanks!
Greg
UPDATE: After some more Google searching, I've found a couple of other people with similar errors, seemingly related to port listening problems, e.g. here and [I'm not allowed to link to the other one]. My production.sphinx.conf has likewise ended up with port 9312, despite my specifying 3312 in sphinx.yml.
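For reference, the port should be coming from config/sphinx.yml; a minimal sketch of the standard Thinking Sphinx layout (not my exact file) looks like:
production:
  address: 127.0.0.1
  port: 3312
and rerunning the configure step (rake thinking_sphinx:configure, or the cap task above) should regenerate production.sphinx.conf with that port.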
Does anyone have any idea what might be causing this? Thanks.

I should have rung up the IT Crowd: "Have you tried turning it off and on again?"
http://groups.google.com/group/thinking-sphinx/browse_thread/thread/dde565ea40e31075
Rebooting the server released the port.
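If rebooting isn't convenient, the same effect can usually be had by finding whatever stale searchd is still holding the port and killing it by hand -- a rough sketch, adjust the port to whatever your generated config actually uses:
ps aux | grep searchd
netstat -an | grep 9312     # or: lsof -i :9312
kill <pid-of-the-old-searchd>
then rerun the deploy once the port is free.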

Undefined method [] for nil when trying to run cap deploy:restart

I have a Rails 5.2 app, and with cap 3.4.1 we are suddenly getting this weird error:
[b35efe76] Phusion Passenger(R) 6.0.8
DEBUG [b35efe76] Finished in 0.305 seconds with exit status 0 (successful).
(Backtrace restricted to imported tasks)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy#host.com: undefined method `[]' for nil:NilClass
Trying to narrow it down: it happens when attempting to restart, because this is the command that fails:
cap production deploy:restart
The question is, how do I go about finding which file is trying to call [] on a nil value? Running cap with --trace is of no value because it just gives me internal errors - nothing in my code. Basically, how do I find out what is nil?
One more clue: I have a bunch of servers. If I run the cap restart command on server A, it restarts fine; on server B, it throws this error. So I'm guessing there's an environment variable on server A but not on B, but the error is so opaque I don't know where to begin.
Thanks for any help,
kevin
A wild guess: I had a similar problem and I could solve it by upgrading capistrano-passenger to >= 0.2.1
Looks like the version change of Passenger from 6.0.7 to 6.0.8 introduced a problem. I see you are also on 6.0.8, so it might affect you, too!
Link to the capistrano-passenger issue
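In case it helps, the upgrade is just a Gemfile constraint plus a bundle update; a sketch assuming capistrano-passenger lives in your development group (the version number comes from the fix mentioned above):
group :development do
  gem 'capistrano-passenger', '>= 0.2.1'
end
then run bundle update capistrano-passenger and redeploy.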

How can I run executable release build in phoenix?

We are using Distillery to create an executable release of our
Phoenix application. We are using Docker to deploy the application on DigitalOcean. Since we don't want to ship the source code to the other machine, we want to run the compiled build as an executable on a remote machine from the command line.
After some research we found that Distillery can build an executable release using:
mix release --executable
We are able to create the release. It provides three commands to run it:
punitjain@apple:project$ _build/dev/rel/project/bin/project foreground
punitjain@apple:project$ _build/dev/rel/project/bin/project start
punitjain@apple:project$ _build/dev/rel/project/bin/project console
I am getting the following error after running the foreground command:
> $ _build/dev/rel/project/bin/project foreground [info] Application
> project exited: Project.start(:normal, []) returned an error:
> shutdown: failed to start child: Project.Endpoint
> ** (EXIT) shutdown: failed to start child: Phoenix.CodeReloader.Server
> ** (EXIT) an exception was raised:
> ** (UndefinedFunctionError) function Mix.Project.config/0 is undefined (module Mix.Project is not available)
> Mix.Project.config()
> (phoenix) lib/phoenix/code_reloader/server.ex:29: Phoenix.CodeReloader.Server.init/1
> (stdlib) gen_server.erl:328: :gen_server.init_it/6
> (stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3 {"Kernel pid
> terminated",application_controller,"{application_start_failure,project,{{shutdown,{failed_to_start_child,'Elixir.Project.Endpoint',{shutdown,{failed_to_start_child,'Elixir.Phoenix.CodeReloader.Server',{undef,[{'Elixir.Mix.Project',config,[],[]},{'Elixir.Phoenix.CodeReloader.Server',init,1,[{file,\"lib/phoenix/code_reloader/server.ex\"},{line,29}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,247}]}]}}}}},{'Elixir.Project',start,[normal,[]]}}}"}
>
> Crash dump is being written to: erl_crash.dump...done Kernel pid
> terminated (application_controller)
> ({application_start_failure,project,{{shutdown,{failed_to_start_child,'Elixir.Project.Endpoint',{shutdown,{failed_to_start_child,'Elixir.Phoenix.Code
Can you please help me resolve this error?
I would also be very happy to hear if anyone has a better approach to creating an executable release file that I can deploy to any Linux-based system, using Docker, without exposing my source code.
Please let me know if I need to provide more information or amend the question.
Thanks
The problem is that you are building the release with MIX_ENV=dev. With Phoenix this means the code reloader is included, and the code reloader does not work inside a release. You should either build the release with MIX_ENV=prod or disable the code reloader in dev.
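A rough sketch of both options (the app and endpoint names are taken from the logs above; this also assumes your prod config is otherwise set up for releases, e.g. secret_key_base and server: true):
MIX_ENV=prod mix release --executable    # option 1: prod build, no code reloader compiled in

# option 2 (if you really need a dev build): turn the reloader off in config/dev.exs:
#   config :project, Project.Endpoint, code_reloader: false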

NFS mount points are going off/NFS compound failed for server mashost

We have an application on Solaris. During a specific test case we generate a heap dump, which is written to the server at a specific path. During this test case we get the following error in the trace file:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /ossrc/upgrade/JREheapdumps/java_pid16092.hprof ...
Dump file is incomplete: I/O error
and in /var/adm/messages we can see:
Oct 28 13:00:10 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /ossrc/upgrade]NFS server mashost not
responding; still trying
Oct 28 13:02:53 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /usr/local]NFS server mashost not
responding; still trying
Oct 28 13:04:53 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /etc/opt/ericsson]NFS server mashost not
responding; still trying
Can anyone please help with why we are getting this problem, and can anyone tell us whether an application can cause this impact on mashost?
First things first, check out the NFS services with svcs -- when it crashes, run:
# svcs -x nfs/client
on the client, and
# svcs -x nfs/server
on the server. I would expect one or both to be in a "maintenance" state. (You may see it fails to start properly at all). If it is in a maintenance mode, you should see a row marked "Reason:" that says why.
You might see "offline" -- in that case, startd will attempt to restart the service multiple times and, if it fails after five attempts or hangs indefinitely, it places the service into the "maintenance" state and stops restarting it.
Check the logs in
/var/svc/log/<service-name FMRI>.log
There will be one on your client machine under "network-nfs-client:default" (probably, may have a name other than 'default' if it's been changed manually), and one on the server under "network-nfs-server:default"
See what you can glean from those.
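For example (the exact file names depend on the instance names, but these match the defaults mentioned above):
# tail -50 /var/svc/log/network-nfs-client:default.log     (on the client)
# tail -50 /var/svc/log/network-nfs-server:default.log     (on the server)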
SMF keeps automatic snapshots of service configuration as backups, so you can try reverting to one of those:
# svccfg -s nfs/server:default
svc:/network/nfs/server:default> listsnap
svc:/network/nfs/server:default> revert start [name_of_snapshot]
svc:/network/nfs/server:default> quit
# svcadm refresh nfs/server:default
# svcadm restart nfs/server:default
Make sure to include the ":default" tag (or, if you saw a different tag from "svcs nfs/server", include that one); that name identifies an instance of the service, and every running service is an instance.
If the process is failing to boot, you might have to look at the XML manifest under /lib/svc/manifest/network/nfs/ -- inside, you'll see dependencies (and services dependent on this one), then "exec_method"s, which define how the service starts, stops and restarts.
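For a quick read-only look (the exact manifest file name under that directory can vary by release, so list it first):
# ls /lib/svc/manifest/network/nfs/
# svccfg -s nfs/server:default listprop | less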
Instead of snapshots, you can also restore it to default: use svccfg -s <FMRI> delete to clear it, then svcadm refresh <FMRI> and svcadm enable <FMRI>.
If the service is in maintenance state, once you've isolated and fixed the problem, you can manually clear that state by running svcadm clear <FMRI>.

python-memcache memcached -- I installed on a CentOS VirtualBox VM but get/set never seem to work

I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (suposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (suposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is, what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the shared memcache area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obviously named one when I do a ps.
I tried to get pylibmc installed on my VM but was unable to find a working installation, so for now I'll see if I can get the above stuff working first.
I discovered that if I run straight from the Python console I get a bit more output if I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try to connect to the port with telnet, per the example, I get a connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran both cases (iptables started and stopped) but it had no effect. So I am out of ideas. What do I need to do so that the port will be allowed, if that is the problem?
Or is there a memcached service that needs to be running to open up the port?
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached config file I added "-l 127.0.0.1" to the OPTIONS statement; it's apparently a listen option. Do this for steps 2 & 3 -- I'm not certain which file is actually used at runtime. (See the example config at the end of these steps.)
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
I tried opening a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is. I'm not running Apache or anything like that, so perhaps it's not relevant to my case. Perhaps I would have to supply a ?key=blah or something.
5) http://127.0.0.1:11211
6) Now it should be ready to go. If you run the test shown with the following, it should work; at least it did for me. Running help(memcache) will display a simple program; just paste that in and it should work just fine.
[~]$ python
>>> import memcache
>>> help(memcache)
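For reference, this is roughly what the edited /etc/sysconfig/memcached from step 2 looks like on a stock CentOS 6 box -- everything except the added -l flag is the shipped default, so adjust as needed:
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"
With that in place, service memcached start (and chkconfig memcached on, so it survives a reboot) should leave memcached listening on 127.0.0.1:11211, which is what both the telnet test and the Python client expect.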

MongoMapper, Padrino and Passenger - Connection Failure?

I'm currently working on a Padrino project which has been working absolutely fine in development, but after pushing it to my live environment I'm experiencing problems. I checked the logs and the error I'm getting is:
ERROR - 24/Jul/2012 11:32:53 - Mongo::ConnectionFailure - Operation failed with the following exception: #<Mongo::ConnectionFailure:0xa762528>:
My database.rb file is the standard one generated by Padrino, namely:
MongoMapper.connection = Mongo::Connection.new('localhost', nil, :logger => logger)
case Padrino.env
when :development then MongoMapper.database = 'licensing_development'
when :production then MongoMapper.database = 'licensing_production'
when :test then MongoMapper.database = 'licensing_test'
end
Everything works perfectly in the console, so I'm assuming that the problem is to do with Passenger. Any ideas where I might be going wrong?
OK, so ignore me. I forgot to set RACK_ENV when running the rake task that imported my data, as well as when starting the console, so there was no data in my production database.
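For anyone else who hits this: the fix was just to run the import task and the console with the environment set explicitly (my_import_task is a placeholder for whatever rake task loads your data):
RACK_ENV=production bundle exec rake my_import_task
RACK_ENV=production bundle exec padrino console
With RACK_ENV unset, Padrino.env defaults to :development, so the case statement in database.rb above points everything at licensing_development rather than licensing_production.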