Output after the Spark job is submitted to a Spark cluster running in a Kubernetes cluster created by Minikube:
----------------- RUNNING ----------------------
[Stage 0:> (0 + 0) / 2]17/06/16 16:08:15 INFO VerifiableProperties: Verifying properties
17/06/16 16:08:15 INFO VerifiableProperties: Property group.id is overridden to xxx
17/06/16 16:08:15 INFO VerifiableProperties: Property zookeeper.connect is overridden to
xxxxxxxxxxxxxxxxxxxxx
[Stage 0:> (0 + 0) / 2]
Information from the Spark web UI:
foreachRDD at myfile.scala:49
org.apache.spark.streaming.dstream.DStream.foreachRDD(DStream.scala:625)
myfile.run(myfile.scala:49)
Myjob$.main(Myjob.scala:100)
Myjob.main(Myjob.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
My code:
println("----------------- RUNNING ----------------------");
eventsStream.foreachRDD { rdd =>
println("xxxxxxxxxxxxxxxxxxxxx")
//println(rdd.count());
if( !rdd.isEmpty )
{
println("yyyyyyyyyyyyyyyyyyyyyyy")
val df = sqlContext.read.json(rdd);
df.registerTempTable("data");
val rules = rulesSource.rules();
var resultsRDD : RDD[(String,String,Long,Long,Long,Long,Long,Long)]= sc.emptyRDD;
rules.foreach { rule =>
...
}
sqlContext.dropTempTable("data")
}
else
{
println("-------");
println("NO DATA");
println("-------");
}
}
Any idea? Thanks
UPDATE
My Spark job runs fine in a Docker container with standalone Spark, but when it is submitted to the Spark cluster in the Kubernetes cluster it gets stuck on the Kafka stream. No idea why.
The YAML file for the Spark master is from https://github.com/phatak-dev/kubernetes-spark/blob/master/spark-master.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: spark-master
  name: spark-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: spark-master
    spec:
      containers:
        - name: spark-master
          image: spark-2.1.0-bin-hadoop2.6
          imagePullPolicy: "IfNotPresent"
          name: spark-master
          ports:
            - containerPort: 7077
              protocol: TCP
          command:
            - "/bin/bash"
            - "-c"
            - "--"
          args:
            - './start-master.sh ; sleep infinity'
Logs will be helpful to diagnose the issue.
Essentially, you can't create another RDD within an RDD operation.
i.e. rdd1.map{ rdd2.count() } is not valid.
See how the RDD is converted to a DataFrame after the implicit sqlContext import.
import sqlContext.implicits._

eventsStream.foreachRDD { rdd =>
  println("yyyyyyyyyyyyyyyyyyyyyyy")
  val df = rdd.toDF();
  df.registerTempTable("data");
  .... //Your logic here.
  sqlContext.dropTempTable("data")
}
I have a DB in namespace ns-restriction-demo-2 and a Node.js application in namespace ns-restriction-demo-1 that uses that DB.
So I was trying to add authorization, i.e. the DB can only be accessed by one namespace (ns-restriction-demo-1), but when I enabled the Mixer logs I realised that the source attributes are different.
{
  "level": "warn",
  "time": "0001-01-01T00:00:00.000000Z",
  "instance": "newlog.instance.ns-restriction-demo-2",
  "destination": "db-demo-2",
  "destinationName": "db-demo-2-868c4bb6c7-f5l7c",
  "destinationNamespace": "ns-restriction-demo-2",
  "destinationPort": 3306,
  "destinationPrinciple": "unkown",
  "latency": "0s",
  "mtls": false,
  "requestHost": "unkown",
  "responseCode": 0,
  "responseSize": 0,
  "source": "calico-node",
  "sourceName": "calico-node-ljzxl",
  "sourceNamespace": "kube-system",
  "sourcePrinicple": "unkown",
  "sourceServiceAccount": "calico-node",
  "sourceservices": "unkown",
  "user": "unknown"
}
The configuration I have used:
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: newlog
  namespace: ns-restriction-demo-2
spec:
  compiledTemplate: logentry
  params:
    severity: '"warning"'
    variables:
      sourceName: source.name | "unkown"
      sourceNamespace: source.namespace | "unkown"
      sourcePrinicple: source.principal | "unkown"
      sourceServiceAccount: source.serviceAccount | "unkown"
      sourceservices: source.services | "unkown"
      destinationPort: destination.port | 0
      destinationName: destination.name | "unkown"
      destinationNamespace: destination.namespace | "unkown"
      destinationPrinciple: destination.principal | "unkown"
      requestHost: request.host | "unkown"
      source: source.labels["app"] | source.workload.name | "unknown"
      user: source.user | "unknown"
      mtls: connection.mtls | false
      destination: destination.labels["app"] | destination.workload.name | "unknown"
      responseCode: response.code | 0
      responseSize: response.size | 0
      latency: response.duration | "0ms"
    monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a stdio handler
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: newloghandler
  namespace: ns-restriction-demo-2
spec:
  compiledAdapter: stdio
  params:
    severity_levels:
      warning: 1 # Params.Level.WARNING
    outputAsJson: true
---
# Rule to send logentry instances to a stdio handler
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: newlogstdio
  namespace: ns-restriction-demo-2
spec:
  match: "true" # match for all requests
  actions:
    - handler: newloghandler
      instances:
        - newlog
---
So I have a question for the Istio experts here.
Why are source, sourceName and sourceNamespace related to Calico? Why are they not about the Node.js app that is using this DB?
Please help me understand what I have done wrong or what I am missing.
Calico Version
Client Version: v3.5.8
Git commit: 107e128
Cluster Version: v3.6.2
Cluster Type: k8s,bgp,kdd
Other Versions
Istio: 1.1.10
Kubernetes: 1.13.6
Collecting metrics with Mixer for TCP services is different from HTTP.
Access to your database is an example of a TCP workload inside the Istio mesh.
Therefore, please use the dedicated 'tcpaccesslog' template instead of the 'logentry' one; it includes a valid protocol configuration in its attribute mapping:
'protocol: context.protocol | "tcp"'
This helps Istio properly derive the source and sourceName attributes.
Note:
In case the requests to the DB are sent over an mTLS-enabled TCP connection, some extra attributes should be available:
source.workload.name
source.workload.namespace
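Putting the suggestion together, a minimal sketch of what the instance could look like with the protocol attribute added (the instance name newtcplog and the exact set of variables are my own illustration; the protocol line is the essential change):
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: newtcplog
  namespace: ns-restriction-demo-2
spec:
  compiledTemplate: logentry
  params:
    severity: '"warning"'
    variables:
      # record the connection protocol so TCP traffic is mapped correctly
      protocol: context.protocol | "tcp"
      sourceName: source.name | "unknown"
      sourceNamespace: source.namespace | "unknown"
      destinationName: destination.name | "unknown"
      destinationNamespace: destination.namespace | "unknown"
    monitored_resource_type: '"UNSPECIFIED"'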
Bazel fails to deploy a sample k8s deployment (deployment.yaml) file to a k8s tenant.
I followed https://github.com/bazelbuild/rules_k8s#aliasing-eg-k8s_deploy and tried a sample deployment.yaml file to deploy the application to the k8s tenant. I already have a k8s tenant configured on the build machine. To deploy the application I executed:
bazel run //main:dev.create
But the bazel command fails with the error below:
[root@localhost t2]# bazel run //main:dev.create
Starting local Bazel server and connecting to it...
INFO: Analyzed target //main:dev.create (68 packages loaded, 6876 targets configured).
INFO: Found 1 target...
INFO: Deleting stale sandbox base /root/.cache/bazel/_bazel_root/5ad59170e5ff426844f68e5dd9f66fb3/sandbox
Target //main:dev.create up-to-date:
  bazel-bin/main/dev.create
INFO: Elapsed time: 33.497s, Critical Path: 2.04s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /usr/local/bin/kubectl --cluster=kubernetes --context= --user= create -f -
error: error parsing STDIN: error converting YAML to JSON: yaml: line 4: mapping values are not allowed in this context
This is my WORKSPACE file:
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
git_repository(
name = "io_bazel_rules_go",
remote = "https://github.com/bazelbuild/rules_go.git",
tag = "0.18.5"
)
git_repository(
name = "bazel_gazelle",
remote = "https://github.com/bazelbuild/bazel-gazelle.git",
tag = "0.17.0",
)
load("#io_bazel_rules_go//go:deps.bzl", "go_download_sdk","go_register_toolchains","go_rules_dependencies")
go_download_sdk(
name = "gosdk",
sdks = {
......
},
urls = [....],
)
go_register_toolchains(
"#//:gosdk",
)
go_rules_dependencies()
load("#bazel_gazelle//:deps.bzl", "gazelle_dependencies", "go_repository")
gazelle_dependencies()
git_repository(
name = "io_bazel_rules_docker",
commit = "e12e276a9a6ded09363a6c1f0de46c573bd6096c",
remote = "https://github.com/xxxxx/rules_docker.git",
)
load(
"#io_bazel_rules_docker//repositories:repositories.bzl",
container_repositories = "repositories",
)
container_repositories()
load("#io_bazel_rules_docker//container:container.bzl", "container_pull")
load(
"#io_bazel_rules_docker//go:image.bzl",
go_image_repos = "repositories",
)
go_image_repos()
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_k8s",
sha256 = "91fef3e6054096a8947289ba0b6da3cba559ecb11c851d7bdfc9ca395b46d8d8",
strip_prefix = "rules_k8s-0.1",
urls = ["https://github.com/bazelbuild/rules_k8s/releases/download/v0.1/rules_k8s-v0.1.tar.gz"],
)
load("#io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_repositories")
k8s_repositories()
load("#io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_defaults")
k8s_defaults(
name = "k8s_deploy",
kind = "deployment",
cluster = "kubernetes",
)
My BUILD.bazel file:
load("#io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")
go_binary(
name = "hello_go",
embed = [":go_default_library"],
visibility = ["//visibility:public"],
)
go_library(
name = "go_default_library",
srcs = ["main.go"],
)
load("#io_bazel_rules_docker//go:image.bzl", "go_image")
go_image(
name = "go-image",
base = ":test",
embed = [":go_default_library"],
)
load("#io_bazel_rules_docker//container:image.bzl", "container_image")
container_image(
name = "test",
base = "#go_image_base//image",
user = "101",
)
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "dev",
kind = "deployment",
template = ":deployment.yaml",
cluster = "kubernetes",
images = {
"xxxxx.net/test/new:v1": ":go-image",
},
)
The deployment.yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: staging
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: xxxxx.net/test/new:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 50051
On the same server, I have the kubeconfig file at /root/.kube/config.
I have Node.js code running inside a pod. From inside the pod, I want to find the zone of the node this pod is running on. What is the best way to do that? Do I need extra permissions?
I have not been able to find a library, but I post the code that does it below. The getContent function was slightly adapted from this post. This code should work inside a GKE pod or on a GCE host.
Use it as follows:
const gcp = require('./gcp.js')
gcp.zone().then(z => console.log('Zone is: ' + z))
Module: gcp.js
const getContent = function(lib, options) {
  // return new pending promise
  return new Promise((resolve, reject) => {
    // select http or https module, depending on requested url
    const request = lib.get(options, (response) => {
      // handle http errors
      if (response.statusCode < 200 || response.statusCode > 299) {
        reject(new Error('Failed to load page, status code: ' + response.statusCode));
      }
      // temporary data holder
      const body = [];
      // on every content chunk, push it to the data array
      response.on('data', (chunk) => body.push(chunk));
      // we are done, resolve promise with those joined chunks
      response.on('end', () => resolve(body.join('')));
    });
    // handle connection errors of the request
    request.on('error', (err) => reject(err))
  })
};
exports.zone = () => {
  return getContent(
    require('http'),
    {
      hostname: 'metadata.google.internal',
      path: '/computeMetadata/v1/instance/zone',
      headers: {
        'Metadata-Flavor': 'Google'
      },
      method: 'GET'
    })
}
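Note that the metadata endpoint usually returns a fully qualified path such as projects/<project-number>/zones/us-central1-b rather than the bare zone name, so if you only want the short name, a small sketch like this (reusing the gcp.js module above) should work:
const gcp = require('./gcp.js')

// The raw metadata value typically looks like "projects/123456/zones/us-central1-b",
// so keep only the last path segment to get just the zone name.
gcp.zone().then(z => console.log('Zone is: ' + z.split('/').pop()))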
You can use the failure-domain.beta.kubernetes.io/region and failure-domain.beta.kubernetes.io/zone labels of the pod to get its region and AZ.
But please keep in mind that:
Only GCE and AWS are currently supported automatically (though it is easy to add similar support for other clouds or even bare metal, by simply arranging for the appropriate labels to be added to nodes and volumes).
To get access to the labels, you can use the Downward API to attach a volume with the current labels and annotations of the pod. You don't need any extra permissions to use it; just mount them as a volume.
Here is an example from the documentation:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args:
        - while true; do
            if [[ -e /etc/podinfo/labels ]]; then
              echo -en '\n\n'; cat /etc/podinfo/labels; fi;
            if [[ -e /etc/podinfo/annotations ]]; then
              echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
            sleep 5;
          done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
When you have a mounted volume with the labels, you can read the file /etc/podinfo/labels, which will contain information about the AZ and region as key-value pairs, like this:
failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1c
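Assuming the downwardAPI volume above is mounted at /etc/podinfo and the pod actually carries the failure-domain labels, a minimal Node.js sketch for pulling the zone out of that file could look like this (label values in the file are usually quoted, hence the replace):
const fs = require('fs')

// Read the labels file written by the downward API volume (one key="value" per line).
const labels = fs.readFileSync('/etc/podinfo/labels', 'utf8')

const zoneLine = labels
  .split('\n')
  .find(l => l.startsWith('failure-domain.beta.kubernetes.io/zone'))

if (zoneLine) {
  // Strip the key, the '=' and any surrounding quotes to get e.g. "us-east-1c".
  const zone = zoneLine.split('=')[1].replace(/"/g, '')
  console.log('Zone is: ' + zone)
}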
I am trying to connect a Scala spark-shell to a Flume agent.
I launch the shell:
./bin/spark-shell \
  --jars /Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/spark-streaming-flume-sink_2.10-1.6.0.jar \
  --jars /Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/spark-streaming-flume_2.10-1.6.0.jar \
  --jars /Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/spark-streaming-flume-assembly_2.10-1.6.0.jar
And run a Scala script to listen on port 10000 on localhost:
import org.apache.spark.SparkConf
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._
import org.apache.spark.util.IntParam
val host = "localhost"
val port = 10000
val batchInterval = Milliseconds(5000)
// Create the context and set the batch size
val sparkConf = new SparkConf().setAppName("FlumePollingEventCount")
val ssc = new StreamingContext(sc, batchInterval)
// Create a flume stream that polls the Spark Sink running in a Flume agent
val stream = FlumeUtils.createPollingStream(ssc, host, port)
// Print out the count of events received from this server in each batch
stream.count().map(cnt => "Received " + cnt + " flume events." ).print()
ssc.start()
ssc.awaitTermination()
Then I configure and start a Flume agent:
1) Configuration: I tail -f a file that another running script appends to (I don't think the details matter here):
agent1.sources = source1
agent1.channels = channel1
agent1.sinks = spark
agent1.sources.source1.type = exec
agent1.sources.source1.command = tail -f /Users/romain/Informatique/notebooks/spark_scala/velib/logs/trajets.csv
agent1.sources.source1.channels = channel1
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 2000000
agent1.channels.channel1.transactionCapacity = 1000000
agent1.sinks = avroSink
agent1.sinks.avroSink.type = avro
agent1.sinks.avroSink.channel = channel1
agent1.sinks.avroSink.hostname = localhost
agent1.sinks.avroSink.port = 10000
2) Start:
./bin/flume-ng agent --conf conf --conf-file ./conf/avro_velib.conf --name agent1 -Dflume.root.logger=INFO,console
Things then go OK until an error occurs:
2017-02-01 15:09:55,688 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink avroSink
2017-02-01 15:09:55,688 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.sink.AbstractRpcSink.start(AbstractRpcSink.java:289)] Starting RpcSink avroSink { host: localhost, port: 10000 }...
2017-02-01 15:09:55,688 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source source1
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.source.ExecSource.start(ExecSource.java:169)] Exec source starting with command:tail -f /Users/romain/Informatique/notebooks/spark_scala/velib/logs/trajets.csv
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: avroSink: Successfully registered new MBean.
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: avroSink started
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:206)] Rpc sink avroSink: Building RpcClient with hostname: localhost, port: 10000
2017-02-01 15:09:55,689 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:126)] Attempting to create Avro Rpc client.
2017-02-01 15:09:55,691 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.
2017-02-01 15:09:55,692 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: source1 started
2017-02-01 15:09:55,710 (lifecycleSupervisor-1-1) [WARN - org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:634)] Using default maxIOWorkers
2017-02-01 15:10:15,802 (lifecycleSupervisor-1-1) [WARN - org.apache.flume.sink.AbstractRpcSink.start(AbstractRpcSink.java:294)] Unable to create Rpc client using hostname: localhost, port: 10000
org.apache.flume.FlumeException: NettyAvroRpcClient { host: localhost, port: 10000 }: RPC connection error
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:182)
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:121)
at org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:638)
at org.apache.flume.api.RpcClientFactory.getInstance(RpcClientFactory.java:89)
at org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:127)
at org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:211)
at org.apache.flume.sink.AbstractRpcSink.start(AbstractRpcSink.java:292)
at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Error connecting to localhost/127.0.0.1:10000
at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:203)
at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:152)
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:168)
... 16 more
Any ideas welcome.
You should start the "listener" (the Spark app in your case) first and then launch the "writer" (flume-ng).
Instead of localhost, use your machine name (i.e. the machine name used in the Spark conf) in both the Scala code and the Avro sink configuration:
agent1.sinks.avroSink.hostname = <Machine name>
agent1.sinks.avroSink.port = 10000
It seems to me that application.conf and reference.conf behave differently. I do understand that reference.conf is intended as a "safe fallback" configuration that always works, while application.conf is specific. However, I would expect configuration loaded from either of them to behave exactly the same in terms of how it is parsed.
What I am facing is that when the configuration is in application.conf it works fine, but when the same file is renamed to reference.conf it doesn't.
2015-03-30 11:35:54,603 [DEBUG] [BackEndServices-akka.actor.default-dispatcher-15] [com.ss.rg.service.ad.AdImporterServiceActor]akka.tcp://BackEndServices#127.0.0.1:2551/user/AdImporterService - Snapshot saved successfully - removing messages and snapshots up to 0 and timestamp: 1427708154584
2015-03-30 11:35:55,037 [DEBUG] [BackEndServices-akka.actor.default-dispatcher-4] [spray.can.server.HttpListener]akka.tcp://BackEndServices#127.0.0.1:2551/user/IO-HTTP/listener-0 - Binding to /0.0.0.0:8080
2015-03-30 11:35:55,054 [DEBUG] [BackEndServices-akka.actor.default-dispatcher-15] [akka.io.TcpListener]akka.tcp://BackEndServices#127.0.0.1:2551/system/IO-TCP/selectors/$a/0 - Successfully bound to /0:0:0:0:0:0:0:0:8080
2015-03-30 11:35:55,056 [INFO ] [BackEndServices-akka.actor.default-dispatcher-4] [spray.can.server.HttpListener]akka.tcp://BackEndServices#127.0.0.1:2551/user/IO-HTTP/listener-0 - Bound to /0.0.0.0:8080
Compared to:
2015-03-30 11:48:34,053 [INFO ] [BackEndServices-akka.actor.default-dispatcher-3] [Cluster(akka://BackEndServices)]Cluster(akka://BackEndServices) - Cluster Node [akka.tcp://BackEndServices#127.0.0.1:2551] - Leader is moving node [akka.tcp://BackEndServices#127.0.0.1:2551] to [Up]
2015-03-30 11:48:36,413 [DEBUG] [BackEndServices-akka.actor.default-dispatcher-15] [spray.can.server.HttpListener]akka.tcp://BackEndServices#127.0.0.1:2551/user/IO-HTTP/listener-0 - Binding to "0.0.0.0":8080
2015-03-30 11:48:36,446 [DEBUG] [BackEndServices-akka.actor.default-dispatcher-3] [akka.io.TcpListener]akka.tcp://BackEndServices#127.0.0.1:2551/system/IO-TCP/selectors/$a/0 - Bind failed for TCP channel on endpoint ["0.0.0.0":8080]: java.net.SocketException: Unresolved address
2015-03-30 11:48:36,446 [WARN ] [BackEndServices-akka.actor.default-dispatcher-15] [spray.can.server.HttpListener]akka.tcp://BackEndServices#127.0.0.1:2551/user/IO-HTTP/listener-0 - Bind to "0.0.0.0":8080 failed
The subtle difference is the double quotes. My configuration is specified as follows:
akka {
  ... standard akka configuration ...
}
webserver.port = 8080
webserver.bindaddress = "0.0.0.0"
The configuration settings are loaded in code as follows:
val webserver_port_key = "webserver.port"
val webserver_bindaddress_key = "webserver.bindaddress"
protected val webserver_bindaddress = ConfigFactory.load().getString(webserver_bindaddress_key)
protected val webserver_port = ConfigFactory.load().getInt(webserver_port_key)
Did I miss something? I double-checked that port 8080 is free when reference.conf fails to bind.
Thanks for any hints.
UPDATE:
Output when starting with log-config-on-start = on:
- When it is in application.conf
# application.conf: 60-61
"webserver" : {
  # application.conf: 61
  "bindaddress" : "0.0.0.0",
  # application.conf: 60
  "port" : 8080
}
- When it is in reference.conf
# reference.conf: 60-61
"webserver" : {
  # reference.conf: 61
  "bindaddress" : "0.0.0.0",
  # reference.conf: 60
  "port" : 8080
}
Issue found:
# application.properties
"webserver" : {
  # application.properties
  "bindaddress" : "\"0.0.0.0\"",
  # application.properties
  "port" : "8080"
}
It seems that the bindaddress is of a different type because it shows up differently in logs.
In either case, enable Akka's full config printing on start with this setting in your config:
log-config-on-start = on
Then compare both configurations to see where they mismatch. They should work the same way if they are the same. I suspect that the way you define bindaddress is different, i.e. a String vs. some other type.
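For reference, a minimal sketch of where that setting lives (inside the akka block you already have; everything else stays as-is):
akka {
  # ... your existing akka configuration ...

  # prints the complete, resolved configuration when the ActorSystem starts
  log-config-on-start = on
}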