Session 0x0 for server null when starting Atlas - apache-zookeeper

I just installed Atlas on HDP 2.6.3, and starting the Atlas server gave the error below:
/var/log/atlas/application.log
2019-12-17 23:41:30,446 INFO - [main-SendThread(1:2181):] ~ Opening socket connection to server 1/0.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (ClientCnxn:1019)
2019-12-17 23:41:30,447 ERROR - [main-SendThread(1:2181):] ~ Unable to open socket to 1/0.0.0.1:2181 (ClientCnxnSocketNIO:289)
2019-12-17 23:41:30,447 WARN - [main-SendThread(1:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn:1146)
java.net.SocketException: Invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1011)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1047)
2019-12-17 23:41:30,548 WARN - [main:] ~ Possibly transient ZooKeeper, quorum=1:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase-unsecure/hbaseid (RecoverableZooKeeper:272)
2019-12-17 23:41:31,548 INFO - [main-SendThread(1:2181):] ~ Opening socket connection to server 1/0.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (ClientCnxn:1019)
2019-12-17 23:41:31,549 ERROR - [main-SendThread(1:2181):] ~ Unable to open socket to 1/0.0.0.1:2181 (ClientCnxnSocketNIO:289)
2019-12-17 23:41:31,549 WARN - [main-SendThread(1:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn:1146)
java.net.SocketException: Invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1011)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1047)
My ZooKeeper, Kafka, and Solr are running fine. Here are some troubleshooting commands I tried, with their output:
# netstat -plant | grep 2181
tcp 0 0 127.0.0.1:38388 127.0.0.1:2181 ESTABLISHED 67615/java
tcp6 0 0 :::2181 :::* LISTEN 59604/java
tcp6 0 0 127.0.0.1:39600 127.0.0.1:2181 ESTABLISHED 60859/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:42112 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:42304 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:38380 127.0.0.1:2181 ESTABLISHED 9159/java
tcp6 0 0 127.0.0.1:42116 127.0.0.1:2181 ESTABLISHED 61237/java
tcp6 0 0 127.0.0.1:38398 127.0.0.1:2181 ESTABLISHED 1361/java
tcp6 0 0 127.0.0.1:38400 127.0.0.1:2181 ESTABLISHED 9159/java
tcp6 0 0 127.0.0.1:38354 127.0.0.1:2181 ESTABLISHED 1051/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38804 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38388 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:38390 127.0.0.1:2181 ESTABLISHED 1051/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:42116 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38384 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38400 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38354 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38398 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:38804 127.0.0.1:2181 ESTABLISHED 59825/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38390 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:39600 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38394 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:38394 127.0.0.1:2181 ESTABLISHED 1051/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38380 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:38384 127.0.0.1:2181 ESTABLISHED 1051/java
tcp6 0 0 127.0.0.1:42112 127.0.0.1:2181 ESTABLISHED 61237/java
tcp6 0 0 127.0.0.1:38358 127.0.0.1:2181 ESTABLISHED 1361/java
tcp6 0 0 127.0.0.1:2181 127.0.0.1:38358 ESTABLISHED 59604/java
tcp6 0 0 127.0.0.1:42304 127.0.0.1:2181 ESTABLISHED 61237/java
It is just a one-node HDP cluster with all services on one host.
How do I troubleshoot this so I can start up Atlas?
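One detail worth checking first: the log says `server 1/0.0.0.1:2181`, i.e. the client was handed the literal host name `1` (compare `atlas.audit.hbase.zookeeper.quorum=1` in the config further down). Classic `inet_aton` parsing treats an all-numeric name as a packed IPv4 address, so `1` becomes the unroutable `0.0.0.1` seen in the error. A quick sketch of that behavior:

```python
import socket

# The failing address in the log is "1/0.0.0.1:2181": the client was
# given the literal host name "1". inet_aton accepts a single-number
# form and packs it directly into a 32-bit address, so "1" becomes
# 0.0.0.1 -- an unroutable address, hence the SocketException.
packed = socket.inet_aton("1")   # single-number inet_aton form
print(socket.inet_ntoa(packed))  # -> 0.0.0.1
```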
UPDATE 1
In an earlier part of the log, it was connecting to demo.myserver.local:2181 just fine:
2019-12-18 00:33:34,699 INFO - [main:] ~ Client environment:java.io.tmpdir=/tmp (ZooKeeper:100)
2019-12-18 00:33:34,713 INFO - [main:] ~ Client environment:java.compiler=<NA> (ZooKeeper:100)
2019-12-18 00:33:34,714 INFO - [main:] ~ Client environment:os.name=Linux (ZooKeeper:100)
2019-12-18 00:33:34,725 INFO - [main:] ~ Client environment:os.arch=amd64 (ZooKeeper:100)
2019-12-18 00:33:34,726 INFO - [main:] ~ Client environment:os.version=3.10.0-1062.1.1.el7.x86_64 (ZooKeeper:100)
2019-12-18 00:33:34,727 INFO - [main:] ~ Client environment:user.name=atlas (ZooKeeper:100)
2019-12-18 00:33:34,727 INFO - [main:] ~ Client environment:user.home=/home/atlas (ZooKeeper:100)
2019-12-18 00:33:34,735 INFO - [main:] ~ Client environment:user.dir=/home/atlas (ZooKeeper:100)
2019-12-18 00:33:34,737 INFO - [main:] ~ Initiating client connection, connectString=demo.myserver.local:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@3fa7df1 (ZooKeeper:438)
2019-12-18 00:33:35,051 INFO - [main-SendThread(demo.myserver.local:2181):] ~ Opening socket connection to server demo.myserver.local/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (ClientCnxn:1019)
2019-12-18 00:33:35,103 INFO - [main-SendThread(demo.myserver.local:2181):] ~ Socket connection established, initiating session, client: /127.0.0.1:39586, server: demo.myserver.local/127.0.0.1:2181 (ClientCnxn:864)
2019-12-18 00:33:35,239 INFO - [main-SendThread(demo.myserver.local:2181):] ~ Session establishment complete on server demo.myserver.local/127.0.0.1:2181, sessionid = 0x16f16396b0a000f, negotiated timeout = 60000 (ClientCnxn:1279)
2019-12-18 00:33:40,047 WARN - [main:] ~ Unable to load native-hadoop library for your platform... using builtin-java classes where applicable (NativeCodeLoader:62)
==> /var/log/atlas/gc-worker.log.0.current <==
Heap after GC invocations=1 (full 0):
par new generation total 552960K, used 31365K [0x0000000080000000, 0x00000000a5800000, 0x00000000a5800000)
eden space 491520K, 0% used [0x0000000080000000, 0x0000000080000000, 0x000000009e000000)
from space 61440K, 51% used [0x00000000a1c00000, 0x00000000a3aa1688, 0x00000000a5800000)
to space 61440K, 0% used [0x000000009e000000, 0x000000009e000000, 0x00000000a1c00000)
concurrent mark-sweep generation total 1482752K, used 0K [0x00000000a5800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 22798K, capacity 23030K, committed 23424K, reserved 1069056K
class space used 2777K, capacity 2863K, committed 2944K, reserved 1048576K
}
Atlas config
cat /etc/atlas/conf/atlas-application.properties
# Generated by Apache Ambari. Wed Dec 18 00:33:04 2019
atlas.audit.hbase.tablename=ATLAS_ENTITY_AUDIT_EVENTS
atlas.audit.hbase.zookeeper.quorum=1
atlas.audit.zookeeper.session.timeout.ms=60000
atlas.auth.policy.file=/usr/hdp/current/atlas-server/conf/policy-store.txt
atlas.authentication.keytab=/etc/security/keytabs/atlas.service.keytab
atlas.authentication.method.file=true
atlas.authentication.method.file.filename=/usr/hdp/current/atlas-server/conf/users-credentials.properties
atlas.authentication.method.kerberos=false
atlas.authentication.method.ldap=false
atlas.authentication.method.ldap.ad.base.dn=
atlas.authentication.method.ldap.ad.bind.dn=
atlas.authentication.method.ldap.ad.bind.password=
atlas.authentication.method.ldap.ad.default.role=ROLE_USER
atlas.authentication.method.ldap.ad.domain=
atlas.authentication.method.ldap.ad.referral=ignore
atlas.authentication.method.ldap.ad.url=
atlas.authentication.method.ldap.ad.user.searchfilter=(sAMAccountName={0})
atlas.authentication.method.ldap.base.dn=
atlas.authentication.method.ldap.bind.dn=
atlas.authentication.method.ldap.bind.password=
atlas.authentication.method.ldap.default.role=ROLE_USER
atlas.authentication.method.ldap.groupRoleAttribute=cn
atlas.authentication.method.ldap.groupSearchBase=
atlas.authentication.method.ldap.groupSearchFilter=
atlas.authentication.method.ldap.referral=ignore
atlas.authentication.method.ldap.type=ldap
atlas.authentication.method.ldap.url=
atlas.authentication.method.ldap.user.searchfilter=
atlas.authentication.method.ldap.userDNpattern=uid=
atlas.authentication.principal=atlas
atlas.authorizer.impl=simple
atlas.cluster.name=myhdp
atlas.enableTLS=false
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=demo.myserver.local:2181/solr
atlas.graph.storage.backend=hbase
atlas.graph.storage.hbase.table=atlas_titan
atlas.graph.storage.hostname=demo.myserver.local
atlas.kafka.bootstrap.servers=demo.myserver.local:6667
atlas.kafka.enable.auto.commit=false
atlas.kafka.hook.group.id=atlas
atlas.kafka.session.timeout.ms=30000
atlas.kafka.zookeeper.connect=demo.myserver.local:2181
atlas.kafka.zookeeper.connection.timeout.ms=30000
atlas.kafka.zookeeper.session.timeout.ms=60000
atlas.kafka.zookeeper.sync.time.ms=20
atlas.lineage.schema.query.hive_table=hive_table where __guid='%s'\, columns
atlas.lineage.schema.query.Table=Table where __guid='%s'\, columns
atlas.notification.create.topics=true
atlas.notification.embedded=false
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.proxyusers=knox
atlas.rest.address=http://demo.myserver.local:21000
atlas.server.address.id1=demo.myserver.local:21000
atlas.server.bind.address=demo.myserver.local
atlas.server.ha.enabled=false
atlas.server.http.port=21000
atlas.server.https.port=21443
atlas.server.ids=id1
atlas.solr.kerberos.enable=false
atlas.ssl.exclude.protocols=TLSv1.2
atlas.sso.knox.browser.useragent=
atlas.sso.knox.enabled=false
atlas.sso.knox.providerurl=
atlas.sso.knox.publicKey=
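Every other ZooKeeper-related setting above names demo.myserver.local, while the audit quorum is the bare value 1. A sketch of a sanity check over such properties (hypothetical helper name; the sample lines are copied from the config above):

```python
# Sketch: flag ZooKeeper quorum/connect values whose host part is a
# bare number -- such a value is parsed as a packed IPv4 address
# (e.g. "1" -> 0.0.0.1) instead of being resolved as a hostname.
PROPS = """\
atlas.audit.hbase.zookeeper.quorum=1
atlas.kafka.zookeeper.connect=demo.myserver.local:2181
atlas.graph.index.search.solr.zookeeper-url=demo.myserver.local:2181/solr
"""

def flag_suspicious(props):
    bad = []
    for line in props.splitlines():
        key, _, value = line.partition("=")
        if "zookeeper" not in key:
            continue
        host = value.split(":")[0].split("/")[0]
        if host.isdigit():
            bad.append(key)
    return bad

print(flag_suspicious(PROPS))  # -> ['atlas.audit.hbase.zookeeper.quorum']
```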

Related

local to local connections with TIME_WAIT state in centos

I have centos7 host with 2 interfaces:
lo: 127.0.0.1
eth0: 10.0.0.11
When I look for 'local to local' connections in netstat -apeen output or in the /proc/net/tcp file, I see only one line for a TIME_WAIT connection, for example (case 1):
tcp 0 0 127.0.0.1:6388 127.0.0.1:13444 TIME_WAIT
At the same time, if the state is ESTABLISHED, I see 2 lines, for example (case 2):
tcp 0 0 127.0.0.1:11514 127.0.0.1:24156 ESTABLISHED
tcp 0 0 127.0.0.1:24156 127.0.0.1:11514 ESTABLISHED
or
tcp 0 0 10.0.0.11:81 10.0.0.11:52162 ESTABLISHED
tcp 0 0 10.0.0.11:52162 10.0.0.11:81 ESTABLISHED
In both cases the pair of ports is busy, correct?
Could someone explain why case 1 produces 1 line and case 2 produces 2 lines in the output?
I am trying to count the total number of busy ports on each IP address by counting 1 line = 1 port in /proc/net/tcp, but it looks like this is incorrect. Is there a better way?
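On the counting side, each line in /proc/net/tcp describes one socket, not one connection. That is why a loopback ESTABLISHED connection shows two lines (both endpoints are local sockets) while TIME_WAIT shows one (only the closing endpoint keeps state). A small sketch of decoding the hex address column, with a hypothetical sample entry:

```python
# Sketch: decode the hex local_address column of /proc/net/tcp.
# Each line there is ONE socket, so a loopback connection in
# ESTABLISHED shows two lines (one per endpoint) while TIME_WAIT
# shows one (only the closing side keeps an entry).
def decode(hexaddr):
    ip_hex, port_hex = hexaddr.split(":")
    # the 32-bit IPv4 address is stored in little-endian byte order
    octets = [str(int(ip_hex[i:i + 2], 16)) for i in (6, 4, 2, 0)]
    return ".".join(octets), int(port_hex, 16)

print(decode("0100007F:0016"))  # -> ('127.0.0.1', 22), hypothetical entry
```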

Cannot connect to RabbitMQ server hosted remotely

I have installed and configured RabbitMQ on an Ubuntu 16.04 server using a reference guide. Since the default user guest is only allowed to connect locally by default, I added a new user with the administrator tag and set its permissions so that it can access the / virtual host. I enabled the RabbitMQ management console and can successfully log in with the user I created. I am also able to connect to RabbitMQ via localhost using the created user. But when I try to connect to the RabbitMQ server from other servers using the following code:
import pika
credentials = pika.PlainCredentials('new_user', 'new_pass')
parameters = pika.ConnectionParameters("<server's Public IP>", 5672, '/', credentials)
connection = pika.BlockingConnection(parameters)
It throws an error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/pika/adapters/blocking_connection.py", line 339, in __init__
    self._process_io_for_connection_setup()
  File "/Library/Python/2.7/site-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
    self._open_error_result.is_ready)
  File "/Library/Python/2.7/site-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
    raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
The same code works fine when I run it on the server where RabbitMQ is installed, replacing <server's Public IP> with 0.0.0.0.
Output of sudo netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 18021/beam
tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN 18110/epmd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1230/sshd
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 18021/beam
tcp6 0 0 :::5672 :::* LISTEN 18021/beam
tcp6 0 0 :::4369 :::* LISTEN 18110/epmd
tcp6 0 0 :::22 :::* LISTEN 1230/sshd
What could be causing this error?
This usually happens with a very low connection timeout. Adjust your connection parameters to use a larger timeout, such as 30 or 60 seconds, and you should be good to go.
It looks like pika exposes this setting: https://pika.readthedocs.io/en/latest/modules/parameters.html#pika.connection.ConnectionParameters.blocked_connection_timeout
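Given that the broker does listen on :::5672 in the netstat output, a quick way to separate a network/firewall problem from a credentials or timeout problem is a plain TCP probe from the remote client, before involving pika at all. A minimal sketch with a hypothetical host:

```python
import socket

# Minimal reachability probe (hypothetical host/port): if this returns
# False from the remote machine, the problem is network or firewall,
# not RabbitMQ credentials or pika timeout settings.
def can_reach(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("example.com", 5672, timeout=2.0))
```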

Change bound IP running on port 7077 - Apache Spark

Can Spark be configured so that, instead of binding port 7077 to address 127.0.1.1, it binds to 0.0.0.0, the same way port 8080 is bound?
netstat -pln
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.1.1:7077 0.0.0.0:* LISTEN 2864/java
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 2864/java
tcp 0 0 127.0.1.1:6066 0.0.0.0:* LISTEN 2864/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 192.168.192.22:123 0.0.0.0:* -
udp 0 0 127.0.0.1:123 0.0.0.0:* -
udp 0 0 0.0.0.0:123 0.0.0.0:* -
udp 0 0 0.0.0.0:21415 0.0.0.0:* -
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix 2 [ ACC ] STREAM LISTENING 7195 - /var/run/dbus/system_bus_socket
unix 2 [ ACC ] SEQPACKET LISTENING 405 - /run/udev/control
The reason I'm asking is that I'm unable to connect workers to the master node, and I think the issue is that the master IP is not discoverable.
Error when trying to connect a slave to the master:
15/04/02 21:58:18 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkMaster@raspberrypi:7077]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: raspberrypi/192.168.192.22:7077
15/04/02 21:58:18 INFO RemoteActorRefProvider$RemoteDeadLetterActorRef: Message [org.apache.spark.deploy.DeployMessages$RegisterWorker] from Actor[akka://sparkWorker/user/Worker#1677101765] to Actor[akka://sparkWorker/deadLetters] was not delivered. [10] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
In spark-env.sh you can set SPARK_MASTER_IP=<ip>.
A hostname would also work fine (via SPARK_STANDALONE_MASTER=<hostname>), just make sure the workers connect to exactly the same hostname as the master binds to (i.e. the spark:// address that is shown in Spark master UI).
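For concreteness, a sketch of the spark-env.sh change described above, using the LAN address that appears in this cluster's netstat and worker log output (adjust to your own):

```shell
# conf/spark-env.sh -- bind the standalone master to the address the
# workers actually resolve, instead of the 127.0.1.1 /etc/hosts entry
export SPARK_MASTER_IP=192.168.192.22
```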

Rebinding udp socket to new one

As syslog uses the predefined socket port number of 514, is there any way to rebind this socket to another port number, specifically between 49152 and 65535? I am using the gcc compiler on Unix.
bash-3.2$ netstat -anp | grep udp
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
udp 0 0 0.0.0.0:2049 0.0.0.0:* -
udp 0 0 0.0.0.0:514 <-- needs to be changed 0.0.0.0:* -
udp 0 0 127.1.1.1:6688 0.0.0.0:* -
udp 0 0 0.0.0.0:4785 0.0.0.0:* -
udp 0 0 0.0.0.0:69 0.0.0.0:* -
udp 0 0 0.0.0.0:47451 0.0.0.0:* -
udp 0 0 0.0.0.0:613 0.0.0.0:* -
udp 0 0 0.0.0.0:111 0.0.0.0:* -
udp 0 0 0.0.0.0:1009 0.0.0.0:* -
udp 0 0 0.0.0.0:1012 0.0.0.0:* -
I need to change the 514 to the specified value.
gcc is simply a compiler, so we cannot change the port number of the syslog application during compilation. Having said that, it is likely that one can configure the port, usually in the configuration files, and you should be able to change it from 514. Here is an example that talks about configuring both the client side and the server side: http://itvomit.com/2012/06/01/linux-sending-log-files-to-a-remote-server/ . Depending on your system, I would explore how to change the port number.
Keep in mind that if we change the port from 514 to X, then remote clients that send information to syslog would also need to send it to the new port X and not 514. Here is a link that explains how to change the client-side config to redirect logging messages to a port number different from 514: http://docs.splunk.com/Documentation/Storm/Storm/User/Howtosetupsyslog
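On the bind side, choosing a port in the dynamic range is just an ordinary bind call. A sketch in Python rather than C (the underlying socket API is the same; the port number is a hypothetical choice); note this only shows binding, since syslogd's own port comes from its configuration, not from recompiling:

```python
import socket

# Bind a UDP socket to a chosen port in the dynamic/private range
# 49152-65535 (hypothetical choice). Unlike 514, ports >= 1024 do
# not require root privileges to bind.
NEW_PORT = 51514

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", NEW_PORT))
print(sock.getsockname()[1])  # -> 51514
sock.close()
```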

Eclipse help won't show under Ubuntu

Whenever I open some help within eclipse I get a page saying:
Server Error. The following error occurred: [code=CANT_CONNECT_LOOPBACK] Cannot connect due to potential loopback problems
I'm running Ubuntu 10.04.
Any ideas what this can be?
UPDATE
Some command outputs (some private info replaced):
$ ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:xx
inet addr:123.12.123.235 Bcast:123.12.456.255 Mask:255.255.254.0
inet6 addr: fe80::xxx:eff:xxxx:xxxx/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1343040 errors:0 dropped:0 overruns:0 frame:0
TX packets:1133672 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:829265876 (829.2 MB) TX bytes:242912202 (242.9 MB)
Memory:f3200000-f3220000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:125 errors:0 dropped:0 overruns:0 frame:0
TX packets:125 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:15910 (15.9 KB) TX bytes:15910 (15.9 KB)
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
123.12.123.0 0.0.0.0 255.255.254.0 U 1 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0
0.0.0.0 123.12.456.254 0.0.0.0 UG 0 0 0 eth0
$ sudo netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 765/portmap
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 871/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1181/cupsd
tcp 0 0 0.0.0.0:52068 0.0.0.0:* LISTEN 786/rpc.statd
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1186/mysqld
tcp 0 0 0.0.0.0:53709 0.0.0.0:* LISTEN -
tcp 0 0 123.12.123.235:755 123.12.5.48:2049 ESTABLISHED -
tcp 0 0 123.12.123.235:60793 123.12.5.129:8080 ESTABLISHED 2264/firefox-bin
tcp 0 0 123.12.123.235:57940 123.12.5.43:8080 ESTABLISHED 2264/firefox-bin
tcp 0 0 123.12.123.235:57928 123.12.5.43:8080 CLOSE_WAIT 2247/google-chrome
tcp 0 0 123.12.123.235:35767 123.12.5.129:8080 ESTABLISHED 2247/google-chrome
tcp 0 0 123.12.123.235:57930 123.12.5.43:8080 ESTABLISHED 2247/google-chrome
tcp 0 0 123.12.123.235:57931 123.12.5.43:8080 CLOSE_WAIT 2247/google-chrome
tcp6 0 0 :::80 :::* LISTEN 1278/apache2
tcp6 0 0 :::22 :::* LISTEN 871/sshd
tcp6 0 0 ::1:631 :::* LISTEN 1181/cupsd
tcp6 0 0 :::55934 :::* LISTEN 1956/eclipse
tcp6 0 0 :::5900 :::* LISTEN 1792/vino-server
udp 0 0 0.0.0.0:35631 0.0.0.0:* 912/avahi-daemon: r
udp 0 0 0.0.0.0:962 0.0.0.0:* 786/rpc.statd
udp 0 0 0.0.0.0:68 0.0.0.0:* 1575/dhclient
udp 0 0 0.0.0.0:46149 0.0.0.0:* -
udp 0 0 0.0.0.0:5353 0.0.0.0:* 912/avahi-daemon: r
udp 0 0 0.0.0.0:111 0.0.0.0:* 765/portmap
udp 0 0 0.0.0.0:36211 0.0.0.0:* 786/rpc.statd
udp 0 0 123.12.123.235:123 0.0.0.0:* 1689/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 1689/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 1689/ntpd
udp6 0 0 fe80::227:eff:fe07::123 :::* 1689/ntpd
udp6 0 0 ::1:123 :::* 1689/ntpd
udp6 0 0 :::123 :::* 1689/ntpd
Active UNIX domain sockets (servers and established) omitted due to post size limit.
UPDATE 2
My proxy bypass settings:
I know this is a late answer, but I had the same problem and resolved it, so to tie up this one...
This is a combination of two bugs:
(i) Eclipse's internal help browser doesn't use the Eclipse proxy settings! See:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=318969
(and the bugs referenced in comment #7 therein)
(ii) Ubuntu's proxy support is horribly broken in certain subtle ways. See:
https://bugs.launchpad.net/ubuntu/+bug/300271
The fix/workaround is to manually set the no_proxy environment variable before running eclipse (as reported in the Eclipse #308035 bug referenced from the 318969 one) e.g.
export no_proxy=127.0.0.1,localhost
eclipse &
Help then launches correctly within Eclipse. Of course, once Eclipse is launched (thus running its own internal HTTP server), you can also access the local help manually from another browser (or, if within the 'can't access 127.0.0.1' screen, there's an icon at the top to launch in an external window --> default system browser).
This may well apply on other Linux distros using Gnome.
[Couldn't post the 308035 bug link because my low reputation means I can only post 2 hyperlinks :-( Getting excited at this privilege come 10 reputation points :-)]
Basically, this error means that Eclipse is failing to establish a TCP/IP connection to your localhost using 127.0.0.1 (Eclipse starts a server for the Help).
If you are using some specific proxy settings (either global at the OS level or local at the Eclipse level), double check that you are bypassing the proxy for localhost and 127.0.0.1.
If this doesn't help, try setting the hostname that help uses to localhost when starting eclipse (either on the command line or in the eclipse.ini):
eclipse -vmargs -Dserver_host=localhost
Eclipse help is actually an HTTP server.
This is probably a permissions problem with your installation of Eclipse.
I have no suggestions except to check the permissions on your Eclipse folder, or delete and reinstall Eclipse.
I had the same problem recently when installing and running Eclipse on 9.10. I found that the default settings for Eclipse were fine, but 9.10 had no proxy bypass set for 127.0.0.1 in its system settings. I also had to install Apache2 via Synaptic. I installed Apache2 without changing any of its settings, then went to System > Preferences > Network Proxy Preferences, clicked Ignored Hosts, and added "127.0.0.1". I reset the Eclipse network preferences back to default, restarted Eclipse, and help worked perfectly. Hope this works for others.
David, thanks for the netstat output; you'll notice that Eclipse is listening on an IPv6 port:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp6 0 0 :::55934 :::* LISTEN 1956/eclipse
Is your proxy configuration set to bypass both 127.0.0.1 and ::1?
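A quick way to check whether both loopback addresses actually accept connections on the help server's port (55934 here, from the netstat output above) is a two-stack probe. A rough sketch:

```python
import socket

# Probe a local listener over both loopback stacks; if only one of
# the two succeeds, an IPv4/IPv6 (or proxy-bypass) mismatch is a
# likely culprit. 55934 is the Eclipse help port from netstat above.
def probe(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ("127.0.0.1", "::1"):
    print(host, probe(host, 55934))
```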
Make sure your /etc/hosts file is set up properly. Usually:
if the line containing 127.0.0.1 has your host name, remove it and just leave 'localhost';
if the opposite is true, try adding your hostname to it :)
Such things happen because GNOME tries to match hostnames and sockets to handle UI things. Might be worth asking on Superuser..
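For reference, a typical Ubuntu /etc/hosts layout along the lines described ("myhostname" is a placeholder for the machine's hostname):

```
# /etc/hosts -- sketch of the common Ubuntu layout
127.0.0.1   localhost
127.0.1.1   myhostname
```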