Curl can connect to an Iron server on localhost, but Scala intermittently cannot

In my Rust app, I start Iron like so:
let host: &str = &format!("localhost:{}", port);
info!("Server running at http://{}", host);
Iron::new(Chain::new(router)).http(host).expect("Could not start Iron server");
On startup it logs:
INFO Server running at http://localhost:3000
I can curl it:
$ curl "http://localhost:3000/v1/foo"
{"bar":"baz"}
However, in Scala I cannot connect:
$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_40).
Type in expressions for evaluation. Or try :help.
scala> scala.io.Source.fromURL("http://localhost:3000/v1/foo").mkString
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
spray-client also cannot connect:
spray.can.Http$ConnectionAttemptFailedException: Connection attempt to 127.0.0.1:3000 failed
Both of these attempts are made from the same machine, so localhost is correct. The Iron server logs nothing when a connection attempt fails.
Different combinations of localhost vs 127.0.0.1 in both client and server did not appear to fix the problem, but I misdiagnosed this: using 127.0.0.1 in the Rust app does fix the problem.
After taking a break, the code started working. I don't recall whether I restarted Iron. I then did several hours of development against it, and at some stage it stopped working again. Restarting the JVM and/or the Iron server does not help.
This is not specific to my Rust app; I can recreate the problem with the hello-world example Iron app.
$ git clone https://github.com/iron/iron.git
$ (cd iron && cargo run --example hello)
and then
$ curl "http://localhost:3000/"
Hello world!
but
$ scala
scala> scala.io.Source.fromURL("http://localhost:3000/").mkString
java.net.ConnectException: Connection refused
OSX 10.11.6
cargo 0.13.0-nightly (9399229 2016-09-14)
also tested against cargo 0.13.0-nightly (19cfb67 2016-09-28)

According to a comment on the bug report for this issue, "Iron will resolve ('localhost') to IPv6 by default while your other services use IPv4".
Bind Iron to 127.0.0.1 whilst that bug is unresolved.
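If that diagnosis is correct, you can confirm it from the JVM side by pointing the client at the IPv6 loopback literal, which bypasses the resolution of localhost entirely. A quick check from the REPL (a sketch only, reusing the path from the question; it assumes the server really is listening on the IPv6 loopback):
scala> // [::1] is the IPv6 loopback; the brackets are required for IPv6 literals in URLs
scala> scala.io.Source.fromURL("http://[::1]:3000/v1/foo").mkString
If that call returns the same JSON that curl gets, the client and server are simply on different IP stacks, and binding Iron to 127.0.0.1 as suggested puts both sides on IPv4.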

Related

Minishift: Problems starting

I am trying to get Minishift running on my machine (Windows 10) with VirtualBox 5.1.24.
Minishift version: 1.0.0+4f8cb6d
CDK Version: 3.0.0-2
Starting minishift gives me the following:
C:\>minishift start --vm-driver virtualbox
Starting local OpenShift cluster using 'virtualbox' hypervisor...
E0727 18:34:21.682796 17204 start.go:176] Error starting the VM: Error
creating new host: Error attempting to get plugin server address for RPC:
Failed to dial the plugin server in 10s. Retrying.
E0727 18:34:31.740746 17204 start.go:176] Error starting the VM: Error
creating new host: Error attempting to get plugin server address for RPC:
Failed to dial the plugin server in 10s. Retrying.
E0727 18:34:41.770667 17204 start.go:176] Error starting the VM: Error
creating new host: Error attempting to get plugin server address for RPC:
Failed to dial the plugin server in 10s. Retrying.
Error starting the VM: Error creating new host: Error attempting to get
plugin server address for RPC: Failed to dial the plugin server in 10s
Error creating new host: Error attempting to get plugin server address for
RPC: Failed to dial the plugin server in 10s
Error creating new host: Error attempting to get plugin server address for
RPC: Failed to dial the plugin server in 10s
I read in the comments that it needs to be run from the C:\ drive, but it looks like this did not fix the problem. I would be happy about any hints on how to fix this. If you need any additional information, just let me know.
Sounds like you got it working.
I usually encourage folks who are having trouble starting their minishift VMs to try the following:
Find your preferred virtualization provider from the list of available options
Install the appropriate driver plugin for your system
Persist your VM provider configuration: minishift config set vm-driver virtualbox

failed to find free socket port for process dispatcher when trying remote debug

Highlights:
windows 10 host machine
ubuntu vagrant box (virtualbox) as guest vm
using Vagrant port forwarding like this: config.vm.network "forwarded_port", guest: 1234, host: 12340
IDE: IntelliJ IDEA with Ruby plugin
The Issue:
I've tried to set up remote Ruby debugging following this guide and am getting an error in the IDE: "failed to find free socket port for process dispatcher". It looks like this issue is not IntelliJ-specific; I was able to reproduce it with the latest RubyMine as well.
From IDEA's log
2017-07-07 21:53:03,515 [8879188] INFO - tion.impl.ExecutionManagerImpl - Failed to find free socket port for process dispatcher
com.intellij.execution.ExecutionException: Failed to find free socket port for process dispatcher
at org.jetbrains.plugins.ruby.ruby.debugger.RubyProcessDispatcher.<init>(RubyProcessDispatcher.java:46)
at org.jetbrains.plugins.ruby.ruby.debugger.RubyRemoteDebugRunner.doExecute(RubyRemoteDebugRunner.java:62)
...
Caused by: java.net.BindException: Address already in use: JVM_Bind
at java.net.TwoStacksPlainSocketImpl.socketBind(Native Method)
at java.net.TwoStacksPlainSocketImpl.socketBind(TwoStacksPlainSocketImpl.java:137)
...
I can understand that it says Address already in use: JVM_Bind, but how is remote debugging supposed to work at all then? (Is there any way to access a guest VM port without forwarding it first? Clearly not.) Any help solving this issue is much appreciated.
For me the issue was due to another debug session that was open in the background. To prevent that from happening again (and also close all other currently open sessions the next time you run the configuration), select "Single instance only" in the Debug Configuration.
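If it is not obvious what is still holding a port, one quick check from a Scala REPL is to try binding it yourself (a sketch only; 12340 is the forwarded host port from the question, not necessarily the port the dispatcher is trying to grab):
import java.net.{BindException, ServerSocket}
// Try to bind the port: a BindException means another process
// (for example a debug session still running in the background) already owns it.
def portFree(port: Int): Boolean =
  try { new ServerSocket(port).close(); true }
  catch { case _: BindException => false }
println(portFree(12340))
If the port reports as busy, close the stale session (or rely on "Single instance only" as described above) before starting the run configuration again.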

knife softlayer command throws ERROR: Excon::Error::Socket: Connection reset by peer (Errno::ECONNRESET)

Chef Server Version: chef-server 12.11.1
knife softlayer server create \
--image-id ${image_id} \
--ssh-keys ${ssh_keys} \
--hostname $node_name \
--network-interface-speed 100 \
--domain $domain_name \
--cores ${cores} \
--ram ${ram} \
--datacenter ${datacenter} \
--node-name $node_name \
--vlan $public_vlan \
--private-vlan $private_vlan \
--use-private-network \
-x root \
-i $USER_HOME/.ssh/id_rsa -VV
Client Output
Launching SoftLayer VM, this may take a few minutes.
............................................................................
............................................................................
................
After 6 minutes it throws this error
ERROR: Excon::Error::Socket: Connection reset by peer (Errno::ECONNRESET)
ERROR: Excon::Error::Socket: Connection reset by peer (Errno::ECONNRESET)
The SoftLayer API has an issue where the server sometimes resets the connection to the client. They are currently working on a fix, but there is no ETA; the issue showed up a long time ago. I can only recommend catching the error and trying again.

python-memcache / memcached -- installed on a CentOS VirtualBox VM but get/set never seem to work

I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers, I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (suposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (suposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is: what have I failed to do in my installation? It appears to work from an API perspective, but it fails to put anything into the memcached store.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obviously named one when I do a ps.
I tried to get pylibmc installed on my VM but was unable to get a working installation, so for now I'll see if I can get the above working first.
I discovered that if I run this straight from the Python console I get a bit more output when I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try to connect to the port with telnet, as in the example, I get a connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran both cases (iptables started and stopped) but it has no effect, so I am out of ideas. What do I need to do so that the port will be allowed, if that is the problem?
Or is there a memcached service that needs to be running in order to open up the port?
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement; it's apparently a listen option. Do this in both steps 2 and 3; I'm not certain which file is actually used at runtime.
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
I tried opening a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is. I'm not running Apache or anything like that, so perhaps it's not relevant to my case. Perhaps I would have to supply a ?key=blah or something.
5) http://127.0.0.1:11211
6) Now it should be ready to go. If you run the test shown with the following, it should work; at least it did for me. Running help(memcache) will display a simple program; just paste that in and it should work just fine.
[~]$ python
>>> import memcache
>>> help(memcache)

JBoss 5.1 binds to host address when run in a vserver with -b <guest address>

While starting JBoss 5.1.0.GA in a virtual server on Debian (Linux-VServer technology), I get the following error:
ERROR [org.jboss.kernel.plugins.dependency.AbstractKernelController] (main) Error installing to Start: name=jboss.remoting:protocol=rmi,service=JMXConnectorServer state=Create mode=Manual requiredState=Installed
java.io.IOException: Cannot bind to URL [rmi://10.1.2.11:1090/jmxconnector]: javax.naming.NoPermissionException [Root exception is java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
java.rmi.AccessException: Registry.Registry.bind disallowed; origin /AA.BB.CC.DD is non-local host]
where AA.BB.CC.DD is the host machine's address, 10.1.2.11 is the vserver guest running JBoss, and JBoss is started with -b 10.1.2.11 (I also tried -Djboss.bind.address=10.1.2.11, with the same result).
10.1.2.11 is bound to the dummy2 interface on the host (serving the 10.1.2.1 network).
The root exception is strange: why does JBoss want to bind to the host address AA.BB.CC.DD? There were no problems with 4.2.3.GA on the same machine, also started with -b 10.1.2.11.
It starts correctly when no parameters are present (it binds to localhost and everything is OK), but it MUST be bound to 10.1.2.11 to be visible to Apache on another vserver guest, which acts as a proxy.
I thought it could be fixed by setting net.ipv4.conf.all.promote_secondaries=1 via sysctl (it was 0), but it didn't help much.
Has anyone had such a problem?
Regards,
bart
Could you confirm that the port at //10.1.2.11:1090/ isn't being used by another process (even a zombie one :P)?
There was a similar problem in the JBoss JIRA a couple of years ago, but I thought it was already fixed.
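For reference, here is one quick way to test that (a sketch only; it assumes Scala is available on the guest, and the address and port are taken from the error message):
import java.net.{ConnectException, Socket}
// A successful connect means something is already listening on 10.1.2.11:1090;
// a ConnectException ("connection refused") means the port is free for the RMI registry.
try {
  new Socket("10.1.2.11", 1090).close()
  println("something is already listening on 10.1.2.11:1090")
} catch {
  case _: ConnectException => println("nothing is listening on 10.1.2.11:1090")
}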