I am trying to convert the following GraphSON format into a graph instance using the following command:
graph.io(IoCore.graphson()).reader().create().readGraph(stream, graph);
But while converting the GraphSON given below into a graph instance
{"id":0,
"label":"buyer",
"outE":
{"email_is":
[{"id":0,"inV":1,
"properties":{"weight":1}
}
]}
,"properties":
{"buyer":
[{
"id":0,"value":"buyer0"
}]
,"age":
[{
"id":1,"value":10}]
}}
{"id":1,
"label":"email",
"inE":
{ "email_is":
[{"id":1,"outV":0,
"properties":{"weight":1}}
]}
,"properties":
{"email":
[{"id":2,
"value":"email0"
}]
}}
I am getting the following error
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Invalid vertex provided: null
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145)
at com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.addEdge(AbstractVertex.java:149)
at com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.addEdge(AbstractVertex.java:23)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONReader.lambda$null$57(GraphSONReader.java:114)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONReader.lambda$readGraph$58(GraphSONReader.java:108)
at java.util.HashMap$EntrySet.forEach(HashMap.java:1035)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONReader.readGraph(GraphSONReader.java:108)
at pluradj.titan.tinkerpop3.example.JavaExample2.main(JavaExample2.java:50)
... 6 more
Can anyone tell me an easier way to make a GraphSON file, as it is a very tedious task using the StringWriter and JSONWriter classes?
It doesn't look like there is anything wrong with your format, except that you have line breaks where GraphSON's adjacency list format requires one vertex per line, like so:
{"id":0,"label":"buyer","outE":{"email_is":[{"id":0,"inV":1,"properties":{"weight":1}}]},"properties":{"buyer":[{"id":0,"value":"buyer0"}],"age":[{"id":1,"value":10}]}}
{"id":1,"label":"email","inE":{ "email_is":[{"id":1,"outV":0,"properties":{"weight":1}}]},"properties":{"email":[{"id":2,"value":"email0"}]}}
In this format it seems to work just fine:
gremlin> graph = TitanFactory.open('conf/titan-berkeleyje.properties')
==>standardtitangraph[berkeleyje:/db/berkeley]
gremlin> graph.io(graphson()).readGraph('data/sample.json')
==>null
gremlin> g = graph.traversal()
==>graphtraversalsource[standardtitangraph[berkeleyje:/db/berkeley], standard]
gremlin> g.V().valueMap()
==>[email:[email0]]
==>[age:[10], buyer:[buyer0]]
If you want to have "valid" JSON, then you can do this (which is only practical for small graphs):
{
"vertices": [
{"id":0,"label":"buyer","outE":{"email_is":[{"id":0,"inV":1,"properties":{"weight":1}}]},"properties":{"buyer":[{"id":0,"value":"buyer0"}],"age":[{"id":1,"value":10}]}},
{"id":1,"label":"email","inE":{ "email_is":[{"id":1,"outV":0,"properties":{"weight":1}}]},"properties":{"email":[{"id":2,"value":"email0"}]}}
]
}
and then you have to initialize the GraphSONReader a little differently and use the unwrapAdjacencyList setting:
gremlin> graph = TitanFactory.open('conf/titan-berkeleyje.properties')
==>standardtitangraph[berkeleyje:/db/berkeley]
gremlin> reader = graph.io(graphson()).reader().unwrapAdjacencyList(true).create()
==>org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONReader#286090c
gremlin> reader.readGraph(new FileInputStream('data/sample.json'), graph)
==>null
gremlin> g = graph.traversal()
==>graphtraversalsource[standardtitangraph[berkeleyje:/db/berkeley], standard]
gremlin> g.V().valueMap()
==>[age:[10], buyer:[buyer0]]
==>[email:[email0]]
I need to write to an external HDFS cluster whose authentication details are available for both simple and Kerberos authentication. For the sake of simplicity, let's assume we are dealing with simple authentication.
This is what I have:
External HDFS cluster connection details (host, port)
Authentication details (user for simple auth)
HDFS location where files need to be written (hdfs://host:port/loc)
Also, other details like format, etc.
Please note the Spark user is not the same as the user specified for HDFS auth.
Now, using the Spark programming API, this is what I am trying to do:
val hadoopConf = new Configuration()
hadoopConf.set("fs.defaultFS", fileSystemPath)
hadoopConf.set("hadoop.job.ugi", userName)
val jConf = new JobConf(hadoopConf)
jConf.setUser(user)
jConf.set("user.name", user)
jConf.setOutputKeyClass(classOf[NullWritable])
jConf.setOutputValueClass(classOf[Text])
jConf.setOutputFormat(classOf[TextOutputFormat[NullWritable, Text]])
outputDStream.foreachRDD(r => {
  val rdd = r.mapPartitions { iter =>
    val text = new Text()
    iter.map { x =>
      text.set(x.toString)
      println(x.toString)
      (NullWritable.get(), text)
    }
  }
  val rddCount = rdd.count()
  if (rddCount > 0) {
    rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text], classOf[TextOutputFormat[NullWritable, Text]], jConf)
  }
})
Here, I was assuming that if we pass a JobConf with the correct details, it would be used for authentication and the write would be done as the user specified in the JobConf.
However, the write still happens as the Spark user ("root"), irrespective of the auth details present in the JobConf (user "hdfs"). Below is the exception that I get:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/spark-deploy/out/_temporary/0":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1665)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3900)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:978)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy40.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy41.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
... 45 more
Please let me know if there are any suggestions.
This is probably more of a comment than an answer, but as it is too long I am putting it here. I haven't tried this because I have no environment to test it. Please try it and let me know if it works (and if it doesn't, I'll remove this answer).
Looking a bit into the code, it looks like DFSClient creates a proxy using createProxyWithClientProtocol that uses UserGroupInformation.getCurrentUser() (I haven't traced the createHAProxy branch down, but I suspect the same logic applies there). This info is then sent to the server for authentication.
It means that you need to change what UserGroupInformation.getCurrentUser() returns in the context of your particular call. This is what UserGroupInformation.doAs is supposed to do, so you just need to get a proper UserGroupInformation instance. And in the case of simple authentication, UserGroupInformation.createRemoteUser might actually work.
So I suggest trying something like this:
...
val rddCount = rdd.count()
if (rddCount > 0) {
  // needs java.security.PrivilegedExceptionAction and org.apache.hadoop.security.UserGroupInformation
  val remoteUgi = UserGroupInformation.createRemoteUser("hdfsUserName")
  remoteUgi.doAs(new PrivilegedExceptionAction[Unit] {
    override def run(): Unit =
      rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text],
        classOf[TextOutputFormat[NullWritable, Text]], jConf)
  })
}
I was using the API below for fetching a vertex in 2.0.12 and it was working:
OrientGraphFactory factory = new OrientGraphFactory("remote:172.21.112.228/mydb", "root", "root").setupPool(1,50);
OrientGraphNoTx graph = factory.getNoTx();
graph.getVertices("Test.name","zyx");
But with the latest 2.2.18, when I tried the following:
OrientGraphFactory factory = new OrientGraphFactory("memory:172.21.112.228/mydb", "root", "root").setupPool(1,50);
OrientGraphNoTx graph = factory.getNoTx();
graph.getVertices("Test.name","zyx");
I am getting the error below:
Exception in thread "main" java.lang.IllegalArgumentException: OClass not found in the schema: Test
at com.tinkerpop.blueprints.impls.orient.OrientBaseGraph.getVertices(OrientBaseGraph.java:814)
at test.GetOrientDBData.main(GetOrientDBData.java:57)
Try the following code:
OrientGraphFactory factory = new OrientGraphFactory("memory:mydb",
        "root", "root").setupPool(1, 50);
OrientGraphNoTx graph = factory.getNoTx();
if (graph.getVertexType("Test") == null) {
    graph.createVertexType("Test");
}
graph.getVertices("Test.name", "zyx");
Two changes to your code:
removed the IP from the memory: URL, since there is no need for it there
added a check to make sure that the Test class exists and, if it doesn't, to create it (a memory: database starts out empty, so the class has to be created before it can be queried)
For testing purposes, I would like to use the BigQuery connector to write Parquet Avro logs to BigQuery. As I write this, there is no way to ingest Parquet directly from the UI, so I'm writing a Spark job to do so.
In Scala, for the time being, the job body is the following:
val events: RDD[RichTrackEvent] =
  readParquetRDD[RichTrackEvent, RichTrackEvent](sc, googleCloudStorageUrl)
val conf = sc.hadoopConfiguration
conf.set("mapred.bq.project.id", "myproject")
// Output parameters
val projectId = conf.get("fs.gs.project.id")
val outputDatasetId = "logs"
val outputTableId = "test"
val outputTableSchema = LogSchema.schema
// Output configuration
BigQueryConfiguration.configureBigQueryOutput(
  conf, projectId, outputDatasetId, outputTableId, outputTableSchema
)
conf.set(
  "mapreduce.job.outputformat.class",
  classOf[BigQueryOutputFormat[_, _]].getName
)
events
  .mapPartitions { items =>
    val gson = new Gson()
    items.map(e => gson.fromJson(e.toString, classOf[JsonObject]))
  }
  .map(x => (null, x))
  .saveAsNewAPIHadoopDataset(conf)
As the BigQueryOutputFormat isn't finding the Google credentials, it falls back on the metadata host to try to discover them, with the following stack trace:
2016-06-13 11:40:53 WARN HttpTransport:993 - exception thrown while executing request
java.net.UnknownHostException: metadata
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:160)
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:207)
at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:72)
at com.google.cloud.hadoop.io.bigquery.BigQueryFactory.createBigQueryCredential(BigQueryFactory.java:81)
at com.google.cloud.hadoop.io.bigquery.BigQueryFactory.getBigQuery(BigQueryFactory.java:101)
at com.google.cloud.hadoop.io.bigquery.BigQueryFactory.getBigQueryHelper(BigQueryFactory.java:89)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputCommitter.<init>(BigQueryOutputCommitter.java:70)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputFormat.getOutputCommitter(BigQueryOutputFormat.java:102)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputFormat.getOutputCommitter(BigQueryOutputFormat.java:84)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputFormat.getOutputCommitter(BigQueryOutputFormat.java:30)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1135)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1078)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1078)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:357)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1078)
This is of course expected, but it should be able to use my service account and its key, since GoogleCredential.getApplicationDefault() returns appropriate credentials fetched from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
As the connector seems to read credentials from the Hadoop configuration, what are the keys to set so that it reads GOOGLE_APPLICATION_CREDENTIALS? Is there a way to configure the output format to use a provided GoogleCredential object?
If I understand your question correctly, you might want to set:
<name>mapred.bq.auth.service.account.enable</name>
<name>mapred.bq.auth.service.account.email</name>
<name>mapred.bq.auth.service.account.keyfile</name>
<name>mapred.bq.project.id</name>
<name>mapred.bq.gcs.bucket</name>
Here, the mapred.bq.auth.service.account.keyfile should point to the full file path to the older-style "P12" keyfile; alternatively, if you're using the newer "JSON" keyfiles, you should replace the "email" and "keyfile" entries with the single mapred.bq.auth.service.account.json.keyfile key:
<name>mapred.bq.auth.service.account.enable</name>
<name>mapred.bq.auth.service.account.json.keyfile</name>
<name>mapred.bq.project.id</name>
<name>mapred.bq.gcs.bucket</name>
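A minimal sketch of wiring those keys in, using the JSON-keyfile variant. The keyfile path, project and bucket values are placeholders, and in your Spark job the configuration object would be sc.hadoopConfiguration rather than a fresh Configuration; shown here against the plain Hadoop Configuration API:
import org.apache.hadoop.conf.Configuration;

// Enable service-account auth for the BigQuery connector with a JSON keyfile.
// All values below are placeholders, not taken from the original setup.
Configuration conf = new Configuration();
conf.set("mapred.bq.auth.service.account.enable", "true");
conf.set("mapred.bq.auth.service.account.json.keyfile", "/path/to/service-account.json");
conf.set("mapred.bq.project.id", "myproject");
conf.set("mapred.bq.gcs.bucket", "my-temp-bucket");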
Also, you might want to take a look at https://github.com/spotify/spark-bigquery, which is a much more civilised way of working with BQ and Spark. The JSON file passed to its setGcpJsonKeyFile method is the same one you'd set for mapred.bq.auth.service.account.json.keyfile when using the BQ connector for Hadoop.
I am using Play Framework 2.3.x with the ReactiveMongo extensions JSON DAO. The following is my code for fetching the data from the DB:
def getStoredAccessToken(authInfo: AuthInfo[User]) = {
  println(">>>>>>>>>>>>>>>>>>>>>>: BEFORE"); //$doc("clientId" $eq authInfo.user.email, "userId" $eq authInfo.user._id.get)
  var future = accessTokenService.findRandom(Json.obj("clientId" -> authInfo.user.email, "userId" -> authInfo.user._id.get));
  println(">>>>>>>>>>>>>>>>>>>>>>: AFTER: " + future);
  future.map { option => {
    println("*************************** ")
    println("***************************: " + option.isEmpty)
    if (!option.isEmpty) {
      var accessToken = option.get; println(">>>>>>>>>>>>>>>>>>>>>>: BEFORE VALUE");
      var value = Crypto.validateToken(accessToken.createdAt.value)
      println(">>>>>>>>>>>>>>>>>>>>>>: " + value);
      Some(scalaoauth2.provider.AccessToken(accessToken.accessToken, accessToken.refreshToken, authInfo.scope,
        Some(value), new Date(accessToken.createdAt.value)))
    } else {
      Option.empty
    }
  }}
}
When I used BsonDao and BsonDocument for fetching the data, this code ran successfully, but after converting to JsonDao I am getting the following error:
Note: sometimes this code runs, but sometimes it throws an exception after converting to JSON.
play - Cannot invoke the action, eventually got an error: java.lang.IllegalArgumentException: bound must be positive
application -
The following is the application log with the full exception stack trace:
>>>>>>>>>>>>>>>>>>>>>>: BEFORE
>>>>>>>>>>>>>>>>>>>>>>: AFTER: scala.concurrent.impl.Promise$DefaultPromise#7f4703e3
play - Cannot invoke the action, eventually got an error: java.lang.IllegalArgumentException: bound must be positive
application -
! #6m1520jff - Internal server error, for (POST) [/oauth2/token] ->
play.api.Application$$anon$1: Execution exception[[IllegalArgumentException: bound must be positive]]
at play.api.Application$class.handleError(Application.scala:296) ~[play_2.11-2.3.8.jar:2.3.8]
at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.11-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8]
at scala.Option.map(Option.scala:146) [scala-library-2.11.6.jar:na]
Caused by: java.lang.IllegalArgumentException: bound must be positive
at java.util.Random.nextInt(Random.java:388) ~[na:1.8.0_40]
at scala.util.Random.nextInt(Random.scala:66) ~[scala-library-2.11.6.jar:na]
The problem is solved, but I am not sure why it happens. I think there is a problem with the reactivemongo-extensions JsonDao library, because when I use findOne instead of findRandom the code runs successfully, while findRandom works fine with the BSON DAO. I still have not found the exact cause, but the following is the resolved code.
def getStoredAccessToken(authInfo: AuthInfo[User]) = {
  println(authInfo.user.email + " ---- " + authInfo.user._id.get)
  var future = accessTokenService.findOne($doc("clientId" $eq authInfo.user.email, "userId" $eq authInfo.user._id.get)); // use findOne instead of findRandom in JsonDao
  future.map { option => {
    if (!option.isEmpty) {
      var accessToken = option.get;
      var value = Crypto.validateToken(accessToken.createdAt.value)
      Some(scalaoauth2.provider.AccessToken(accessToken.accessToken, accessToken.refreshToken, authInfo.scope,
        Some(value), new Date(accessToken.createdAt.value)))
    } else {
      Option.empty
    }
  }}
}
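For what it's worth, the stack trace points at java.util.Random.nextInt, which only throws "bound must be positive" when it is handed a bound of zero or less. My assumption (not verified against the library source) is that findRandom in the JSON DAO picks a random offset with something like nextInt(matchCount) and so blows up when the query matches no documents, which findOne avoids. The exception itself is easy to reproduce in one line of Java:
// Throws java.lang.IllegalArgumentException: bound must be positive,
// the same error as in the stack trace above.
new java.util.Random().nextInt(0);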
Please note: even though I'm using Groovy here, I think my issue is really about using the Jersey/JAX-RS API correctly.
Given the following code:
ClientConfig clientConfig = new DefaultClientConfig()
clientConfig.getFeatures().put(JSONConfiguration.FEATURE_POJO_MAPPING, Boolean.TRUE)
Client jerseyClient = Client.create(clientConfig)
WebResource webResource = jerseyClient.resource("http://localhost:8080/location/")
Long id = 5L
Address address = webResource.path("address").path(id)
.accept(MediaType.APPLICATION_JSON)
.get(Long)
I am getting the following exception:
groovy.lang.MissingMethodException: No signature of method: com.sun.jersey.api.client.WebResource.path() is applicable for argument types: (java.lang.Long) values: [5]
Possible solutions: path(java.lang.String), put(), wait(long), put(com.sun.jersey.api.client.GenericType), put(java.lang.Class), put(java.lang.Object)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:55)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.me.myapp.Driver.run(Driver.groovy:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
<rest omitted for brevity>
I am trying to hit the following REST endpoint:
GET http://localhost:8080/location/address/{id}
Where am I going wrong?
You're calling the path method with a Long, but it can only take a String (note the suggested signatures in the error: path(java.lang.String)). Your id is a Long with value 5, which matches the values: [5] in the message, so Groovy cannot find a matching path overload; convert the id to a String before passing it in.
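A minimal sketch of the corrected call, assuming the endpoint returns an Address entity as JSON (the .get(Address.class) target type is an assumption based on the variable you assign to, since your original .get(Long) would not deserialize into an Address):
Long id = 5L;
Address address = webResource.path("address")
        .path(String.valueOf(id))           // path() only accepts a String
        .accept(MediaType.APPLICATION_JSON)
        .get(Address.class);                // assumption: the resource returns an Address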