Why can't I deserialize gremlin.process.traversal.Bytecode with a none() step? - deserialization

I am trying to deserialize a traversal query with a none() step, but it seems there is no support for such deserialization.
The TinkerPop version is 3.5.0.
Here is the code that fails:
import org.apache.tinkerpop.gremlin.jsr223.JavaTranslator;
import org.apache.tinkerpop.gremlin.process.traversal.Bytecode;
import org.apache.tinkerpop.gremlin.process.traversal.Traversal;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.io.GraphReader;
import org.apache.tinkerpop.gremlin.structure.io.GraphWriter;
import org.apache.tinkerpop.gremlin.structure.io.Mapper;
import org.apache.tinkerpop.gremlin.structure.io.graphson.*;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;
import org.apache.tinkerpop.shaded.jackson.databind.ObjectMapper;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
public class GraphExperimental {
private static final Mapper<ObjectMapper> GRAPHSON_MAPPER_3_0 = GraphSONMapper.build()
.version(GraphSONVersion.V3_0)
.typeInfo(TypeInfo.PARTIAL_TYPES)
.addCustomModule(GraphSONXModuleV3d0.build().create(false))
.create();
private static final GraphWriter GRAPHSON_WRITER_3_0 = GraphSONWriter.build()
.mapper(GRAPHSON_MAPPER_3_0)
.create();
private static final GraphReader GRAPHSON_READER_3_0 = GraphSONReader.build()
.mapper(GRAPHSON_MAPPER_3_0)
.create();
private GraphExperimental() {
}
public static void main(String[] args) {
Graph graph = TinkerGraph.open();
DefaultGraphTraversal<Object, Object> defaultGraphTraversal = new DefaultGraphTraversal<>(graph.traversal());
// defaultGraphTraversal.addV("A").property("mac", "1");
defaultGraphTraversal.iterate();
Bytecode bytecode = defaultGraphTraversal.getBytecode();
String serialize = writeValueAsString(bytecode);
Graph anotherGraph = TinkerGraph.open();
GraphTraversalSource anotherTraversal = anotherGraph.traversal();
Bytecode bytecodeDeserialized = readValue(serialize, Bytecode.class);
Traversal.Admin<?, ?> translate = JavaTranslator.of(anotherTraversal).translate(bytecodeDeserialized);
}
public static <T> T readValue(String value, Class<? extends T> clazz) {
try (ByteArrayInputStream in = new ByteArrayInputStream(value.getBytes("UTF-8"))) {
return GRAPHSON_READER_3_0.readObject(in, clazz);
} catch (IOException e) {
throw new RuntimeException(e);
}
}
public static String writeValueAsString(Object value) {
try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
GRAPHSON_WRITER_3_0.writeObject(out, value);
return out.toString("UTF-8");
} catch (IOException e) {
throw new RuntimeException(e);
}
}
}
As a result I get this exception:
Exception in thread "main" java.lang.IllegalStateException: Could not locate method: GraphTraversalSource.none()
at org.apache.tinkerpop.gremlin.jsr223.JavaTranslator.invokeMethod(JavaTranslator.java:207)
at org.apache.tinkerpop.gremlin.jsr223.JavaTranslator.translate(JavaTranslator.java:89)
at example.GraphExperimental.main(GraphExperimental.java:63)
If you just uncomment defaultGraphTraversal.addV("A").property("mac", "1");, the code works.
So why is there no support for such a deserialization?
Thanks a lot!!!

I would recommend that you not attempt to construct a DefaultGraphTraversal directly and instead spawn it from a GraphTraversalSource (i.e. your g). I suppose it can work, but that class really wasn't designed to be utilized by users directly, so the API could shift out from under you or an initialization behavior could change and cause you trouble.
In a sense, the direct construction is what is causing the problem here as you build traversal bytecode that isn't really valid:
gremlin> defaultGraphTraversal = new DefaultGraphTraversal<>(graph.traversal());[]
gremlin> defaultGraphTraversal.iterate();[]
gremlin> bytecode = defaultGraphTraversal.getBytecode()
==>[[], [none()]]
Since you don't spawn from a GraphTraversalSource you end up constructing a traversal that doesn't have any source instructions (i.e. the first [] in the above bytecode output). Without that, the JavaTranslator heads down a different path than it expects and tries to call none() on a GraphTraversalSource rather than on a Traversal. You obviously solve this problem when you uncomment that line that calls addV() because that is a proper start step that can act as a spawn whereas none() cannot.
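For comparison, here is a minimal sketch of the spawn-based construction (same imports and TinkerGraph setup as the code above; the label and property values are just placeholders):

// Spawn from a GraphTraversalSource so the bytecode carries proper
// source/step instructions and survives the GraphSON round trip.
Graph graph = TinkerGraph.open();
GraphTraversalSource g = graph.traversal();
Traversal.Admin<?, ?> traversal = g.addV("A").property("mac", "1").asAdmin();
Bytecode bytecode = traversal.getBytecode(); // [[], [addV(A), property(mac, 1)]]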

Related

java.io.BufferedReader().map Cannot infer type argument(s) for <T> fromStream(Stream<? extends T>)

Scenario: a Spring WebFlux application triggering CommandLineRunner.run in order to load data into MongoDB for testing purposes.
Goal: when starting the microservice locally, it should read a JSON file and load the documents into MongoDB.
Personal knowledge: "bufferedReader.lines().filter(l -> !l.trim().isEmpty())" reads each JSON node and returns it as a stream. Then I can map it to "l" and access the get methods. I guess I don't have to create a list and then stream it, since I have already loaded it as a stream via "new InputStreamReader(getClass().getClassLoader().getResourceAsStream())", and I assume I can use lines() since each node will result in a string line. Am I in the right direction, or am I messing up some idea?
This is a json sample file:
{
  "Extrato": {
    "description": "credit",
    "value": "R$1.000,00",
    "status": 11
  },
  "Extrato": {
    "description": "debit",
    "value": "R$2.000,00",
    "status": 99
  }
}
Model
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
@Document
public class Extrato {
@Id
private String id;
private String description;
private String value;
private Integer status;
public Extrato(String id, String description, String value, Integer status) {
super();
this.id = id;
this.description = description;
this.value = value;
this.status = status;
}
... getters and setters accordingly
Repository
import org.springframework.data.mongodb.repository.Query;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import com.noblockingcase.demo.model.Extrato;
import reactor.core.publisher.Flux;
import org.springframework.data.domain.Pageable;
public interface ExtratoRepository extends ReactiveCrudRepository<Extrato, String> {
@Query("{ id: { $exists: true }}")
Flux<Extrato> retrieveAllExtratosPaged(final Pageable page);
}
Command for loading from the above JSON file:
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import com.noblockingcase.demo.model.Extrato;
import com.noblockingcase.demo.repository.ExtratoRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Flux;
@Component
public class TestDataLoader implements CommandLineRunner {
private static final Logger log = LoggerFactory.getLogger(TestDataLoader.class);
private ExtratoRepository extratoRepository;
TestDataLoader(final ExtratoRepository extratoRepository) {
this.extratoRepository = extratoRepository;
}
@Override
public void run(final String... args) throws Exception {
if (extratoRepository.count().block() == 0L) {
final LongSupplier longSupplier = new LongSupplier() {
Long l = 0L;
@Override
public long getAsLong() {
return l++;
}
};
BufferedReader bufferedReader = new BufferedReader(
new InputStreamReader(getClass().getClassLoader().getResourceAsStream("carga-teste.txt")));
//*** THE ISSUE IS NEXT LINE
Flux.fromStream(bufferedReader.lines().filter(l -> !l.trim().isEmpty())
.map(l -> extratoRepository.save(new Extrato(String.valueOf(longSupplier.getAsLong()),
l.getDescription(), l.getValue(), l.getStatus()))))
.subscribe(m -> log.info("Carga Teste: {}", m.block()));
}
}
}
Here is the MongoDB config, although I don't think it is relevant:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.mongodb.MongoClientOptions;
@Configuration
public class MongoDbSettings {
@Bean
public MongoClientOptions mongoOptions() {
return MongoClientOptions.builder().socketTimeout(2000).build();
}
}
If I adjust my original code to read a text file, I can successfully read the text file instead of JSON. Obviously it doesn't fit my demand, since I want to read a JSON file, but it may clarify a bit more where I am blocked.
carga-teste.txt (available at https://github.com/jimisdrpc/webflux-worth-scenarious/blob/master/demo/src/main/resources/carga-teste.txt):
crédito de R$1.000,00
débito de R$100,00
Snippet working with a simple text file:
BufferedReader bufferedReader = new BufferedReader(
new InputStreamReader(getClass().getClassLoader().getResourceAsStream("carga-teste.txt")));
Flux.fromStream(bufferedReader.lines().filter(l -> !l.trim().isEmpty())
.map(l -> extratoRepository
.save(new Extrato(String.valueOf(longSupplier.getAsLong()), "Qualquer descrição", l))))
.subscribe(m -> log.info("Carga Teste: {}", m.block()));
Whole project working successfully, reading from the text file: https://github.com/jimisdrpc/webflux-worth-scenarious/tree/master/demo
Docker Compose file for booting MongoDB: https://github.com/jimisdrpc/webflux-worth-scenarious/blob/master/docker-compose.yml
To summarize, my issue is: I can't figure out how to read a JSON file and insert the data into MongoDB during CommandLineRunner.run().
I found an example with Flux::using and Flux::fromStream to be helpful for this purpose. This will read your file into a Flux, which you can then subscribe to and process with .flatMap or similar. From the Javadoc:
using(Callable<? extends D> resourceSupplier, Function<? super D, ? extends Publisher<? extends T>> sourceSupplier, Consumer<? super D> resourceCleanup)
Uses a resource, generated by a supplier for each individual Subscriber, while streaming the values from a Publisher derived from the same resource and makes sure the resource is released if the sequence terminates or the Subscriber cancels.
and the code that I put together:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.stream.BaseStream;
import org.springframework.core.io.ClassPathResource;
import reactor.core.publisher.Flux;

private static Flux<Account> fluxAccounts() {
    return Flux.using(
            // resource supplier: open the file lazily, once per subscriber,
            // and map each line to an Account
            () -> new BufferedReader(new InputStreamReader(
                    new ClassPathResource("data/ExportCSV.csv").getInputStream()))
                    .lines()
                    .map(s -> {
                        String[] sa = s.split(" ");
                        return Account.builder()
                                .firstname(sa[0])
                                .lastname(sa[1])
                                .build();
                    }),
            // source supplier: expose the Stream as a Flux
            Flux::fromStream,
            // cleanup: close the stream (and the reader) on completion or cancel
            BaseStream::close
    );
}
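You would then consume it along these lines (a hedged sketch; accountRepository and log are hypothetical stand-ins for whatever you use to persist and log):

// Hypothetical usage: persist each Account reactively, then log it.
fluxAccounts()
        .flatMap(account -> accountRepository.save(account)) // accountRepository is assumed
        .subscribe(saved -> log.info("Loaded: {}", saved));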
Please note your JSON is invalid. Text data is not the same as JSON; JSON needs special handling, so it is always better to use a library.
carga-teste.json
[
{"description": "credit", "value": "R$1.000,00", "status": 11},
{"description": "debit","value": "R$2.000,00", "status": 99}
]
Credit goes to the article here: https://www.nurkiewicz.com/2017/09/streaming-large-json-file-with-jackson.html.
I've adapted it to use Flux.
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.ObjectMapper;
import reactor.core.publisher.SynchronousSink;

@Override
public void run(final String... args) throws Exception {
BufferedReader bufferedReader = new BufferedReader(
new InputStreamReader(getClass().getClassLoader().getResourceAsStream("carga-teste.json")));
ObjectMapper mapper = new ObjectMapper();
Flux<Extrato> flux = Flux.generate(
() -> parser(bufferedReader, mapper),
this::pullOrComplete,
jsonParser -> {
try {
jsonParser.close();
} catch (IOException e) {}
});
flux.map(l -> extratoRepository.save(l)).subscribe(m -> log.info("Carga Teste: {}", m.block()));
}
private JsonParser parser(Reader reader, ObjectMapper mapper) {
JsonParser parser = null;
try {
parser = mapper.getFactory().createParser(reader);
parser.nextToken();
} catch (IOException e) {}
return parser;
}
private JsonParser pullOrComplete(JsonParser parser, SynchronousSink<Extrato> emitter) {
try {
if (parser.nextToken() != JsonToken.END_ARRAY) {
Extrato extrato = parser.readValueAs(Extrato.class);
emitter.next(extrato);
} else {
emitter.complete();
}
} catch (IOException e) {
emitter.error(e);
}
return parser;
}

Can Flink receive HTTP requests as a data source?

Flink can read a socket stream; can it read HTTP requests? How?
// socket example
DataStream<XXX> socketStream = env
.socketTextStream("localhost", 9999)
.map(...);
There's an open JIRA ticket for creating an HTTP sink connector for Flink, but I've seen no discussion about creating a source connector.
Moreover, it's not clear this is a good idea. Flink's approach to fault tolerance requires sources that can be rewound and replayed, so it works best with input sources that behave like message queues. I would suggest buffering the incoming http requests in a distributed log.
For an example, look at how DriveTribe uses Flink to power their website on the data Artisans blog and on YouTube.
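If you go the distributed-log route, a rough sketch (assuming a Kafka topic named "http-requests" that a separate HTTP front end writes into; the topic name and properties are placeholders, and the Flink Kafka connector must be on the classpath):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

// Kafka is replayable, so it fits Flink's checkpoint/rewind fault-tolerance
// model in a way a raw HTTP endpoint cannot.
Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "http-request-counter");
DataStream<String> requests = env.addSource(
        new FlinkKafkaConsumer<>("http-requests", new SimpleStringSchema(), props));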
I wrote a custom HTTP source; please see OneHourHttpTextStreamFunction below. You need to create a fat jar that includes the Apache HttpServer classes if you want to run my code.
package org.apache.flink.streaming.examples.http;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.examples.socket.SocketWindowWordCount.WordWithCount;
import org.apache.flink.util.Collector;
import org.apache.http.HttpException;
import org.apache.http.HttpRequest;
import org.apache.http.HttpResponse;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.bootstrap.HttpServer;
import org.apache.http.impl.bootstrap.ServerBootstrap;
import org.apache.http.protocol.HttpContext;
import org.apache.http.protocol.HttpRequestHandler;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import static org.apache.flink.util.Preconditions.checkArgument;
import static org.apache.flink.util.Preconditions.checkNotNull;
public class HttpRequestCount {
public static void main(String[] args) throws Exception {
// the URL path pattern and the port to listen on
final String path;
final int port;
try {
final ParameterTool params = ParameterTool.fromArgs(args);
path = params.has("path") ? params.get("path") : "*";
port = params.getInt("port");
} catch (Exception e) {
System.err.println("No port specified. Please run 'SocketWindowWordCount "
+ "--path <hostname> --port <port>', where path (* by default) "
+ "and port is the address of the text server");
System.err.println("To start a simple text server, run 'netcat -l <port>' and "
+ "type the input text into the command line");
return;
}
// get the execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// get input data from the embedded HTTP server source
DataStream<String> text = env.addSource(new OneHourHttpTextStreamFunction(path, port));
// parse the data, group it, window it, and aggregate the counts
DataStream<WordWithCount> windowCounts = text
.flatMap(new FlatMapFunction<String, WordWithCount>() {
@Override
public void flatMap(String value, Collector<WordWithCount> out) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
for (String word : value.split("\\s")) {
out.collect(new WordWithCount(word, 1L));
}
}
})
.keyBy("word").timeWindow(Time.seconds(5))
.reduce(new ReduceFunction<WordWithCount>() {
@Override
public WordWithCount reduce(WordWithCount a, WordWithCount b) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
return new WordWithCount(a.word, a.count + b.count);
}
});
// print the results with a single thread, rather than in parallel
windowCounts.print().setParallelism(1);
env.execute("Http Request Count");
}
}
class OneHourHttpTextStreamFunction implements SourceFunction<String> {
private static final long serialVersionUID = 1L;
private final String path;
private final int port;
private transient HttpServer server;
public OneHourHttpTextStreamFunction(String path, int port) {
checkArgument(port > 0 && port < 65536, "port is out of range");
this.path = checkNotNull(path, "path must not be null");
this.port = port;
}
@Override
public void run(SourceContext<String> ctx) throws Exception {
server = ServerBootstrap.bootstrap().setListenerPort(port).registerHandler(path, new HttpRequestHandler(){
@Override
public void handle(HttpRequest req, HttpResponse rep, HttpContext context) throws HttpException, IOException {
ctx.collect(req.getRequestLine().getUri());
rep.setStatusCode(200);
rep.setEntity(new StringEntity("OK"));
}
}).create();
server.start();
server.awaitTermination(1, TimeUnit.HOURS);
}
@Override
public void cancel() {
server.stop();
}
}
Leave a comment if you want the demo jar.

Californium Framework CoAP and PUT request

I am trying to make a request to a CoAP server (er-rest-example) using Californium.
I successfully do a POST request, but with PUT I get a BAD REQUEST. I tried these URLs:
coap://[aaaa::c30c:0000:0000:0002]:5683/actuators/leds
coap://[aaaa::c30c:0000:0000:0002]:5683/actuators/leds?
coap://[aaaa::c30c:0000:0000:0002]:5683/actuators/leds?color=r
But none of them succeed.
What am I doing wrong?
This is my simple script:
package coap_client;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.Arrays;
import java.util.Timer;
import java.util.TimerTask;
import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapResponse;
import org.eclipse.californium.core.coap.MediaTypeRegistry;
public class cliente {
public static void main(String[] args) throws Exception {
Timer timer;
timer = new Timer();
TimerTask task = new TimerTask(){
@Override
public void run(){
String url="coap://[aaaa::c30c:0000:0000:0002]:5683/actuators/leds";
URI uri= null;
try {
uri = new URI(url);
} catch (URISyntaxException e) {
e.printStackTrace();
}
CoapClient client = new CoapClient(uri);
CoapResponse response = client.put("color=r",MediaTypeRegistry.TEXT_PLAIN);
if (response != null) {
System.out.println(response.isSuccess());
byte[] myresponse = response.getPayload();
String respuesta2 = new String(myresponse);
System.out.println(respuesta2);
}
}
};
timer.schedule(task, 10,10*1000);
}
}
In the Contiki er-rest-example, see the POST/PUT handler for the LED CoAP resource. It expects a mode parameter, without which you will get a BAD_REQUEST as the response. I assume that has to go in the request body.
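Based on that, a sketch worth trying (the exact payload "mode=on" is an assumption drawn from the handler's parameter name; keep the color in the query string):

// Color goes in the URI query; mode goes in the PUT payload.
CoapClient client = new CoapClient(
        "coap://[aaaa::c30c:0000:0000:0002]:5683/actuators/leds?color=r");
CoapResponse response = client.put("mode=on", MediaTypeRegistry.TEXT_PLAIN);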

How to verify the digital signature of a SOAP call?

I wrote an interceptor in Apache CXF and get a SoapMessage. How do I get the raw XML from the SOAP message without changing the data and breaking the verification of the digital signature?
I refer to org.apache.cxf.binding.soap.SoapMessage:
import org.apache.cxf.binding.soap.SoapMessage;
import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
import org.apache.cxf.binding.soap.interceptor.EndpointSelectionInterceptor;
import org.apache.cxf.binding.soap.interceptor.ReadHeadersInterceptor;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.phase.Phase;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
public class XmlSignatureVerifyInInterceptor extends AbstractSoapInterceptor {
private static final Logger log = LogManager.getLogger(XmlSignatureVerifyInInterceptor.class);
public XmlSignatureVerifyInInterceptor() {
super(Phase.READ);
log.entry();
addAfter(ReadHeadersInterceptor.class.getName());
addAfter(EndpointSelectionInterceptor.class.getName());
}
@Override
public void handleMessage(SoapMessage soapMessage) throws Fault {
log.entry(soapMessage);
}
}
Cheers and thank you in advance!
Fireball
If you are referring to a javax.xml.soap.SOAPMessage and you want the XML as a String, use a ByteArrayOutputStream:
SOAPMessage message; // the SOAPMessage you already have
ByteArrayOutputStream out = new ByteArrayOutputStream();
String msg = "";
try {
message.writeTo(out);
msg = out.toString("UTF-8");
} catch (Exception e) {
e.printStackTrace();
}
I use UTF-8 as the encoding; you can change it to any other.
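Since the question involves CXF's org.apache.cxf.binding.soap.SoapMessage rather than SAAJ, one possible bridge (a sketch, assuming CXF's SAAJ support is on the classpath; not verified against a signature-verification setup) is to run the SAAJInInterceptor and then pull the SAAJ model out of the message:

import javax.xml.soap.SOAPMessage;
import org.apache.cxf.binding.soap.saaj.SAAJInInterceptor;

// Inside handleMessage: build the SAAJ view of the CXF message,
// then serialize it with writeTo as shown above.
new SAAJInInterceptor().handleMessage(soapMessage);
SOAPMessage saaj = soapMessage.getContent(SOAPMessage.class);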

Map Reduce Distributed Cache

I am not able to compile my driver class at the job.waitForCompletion(boolean) call: it complains about an unhandled ClassNotFoundException. If I catch the exception, the run method complains that it must return an int value. I am using the MapReduce new API. Could anyone suggest what the issue is?
import java.io.File;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class Dist_Driver extends Configured implements Tool {
public int run(String args[]) throws IOException, InterruptedException {
// Configuration phase
// Configuration conf=new Configuration();
Job job = new Job(new Configuration());
job.setJarByClass(Dist_Driver.class);
// Mapper Reducer InputFormat
job.setInputFormatClass(FileInputFormat.class);
// Mapper and Reducer Class
job.setMapperClass(Dist_Mapper.class);
job.setReducerClass(DistCache_Reducer.class);
job.setOutputFormatClass(TextOutputFormat.class);
job.setInputFormatClass(KeyValueTextInputFormat.class);
// set FileInputOutput
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
// setting number of reduce tasks and submit it
job.setNumReduceTasks(2);
// Lets check if the file exist
File f1 = new File("/home/hdfs/trials_mapreduce_progams/emp_id");
if (f1.exists())
System.out.println("The Files Exists");
else
System.out.println("The File doesnot exist");
URI path1;
try {
path1 = new URI(
"/home/hdfs/trials_mapreduce_progams/emp_lookup.txt");
DistributedCache.addCacheFile(path1, job.getConfiguration());
} catch (URISyntaxException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
if (job.waitForCompletion(true))
return 0;
else
return 1;
}
public static void main(String[] args) throws Exception {
int exitcode = ToolRunner.run(new Dist_Driver(), args);
System.exit(exitcode);
}
}
Just add ClassNotFoundException to the run method signature:
public int run(String args[]) throws IOException,
InterruptedException,
ClassNotFoundException {
The reason you get an error when you try to try/catch it is that if a ClassNotFoundException is thrown during execution, there will be no return value, and the method has to return something.
If you really want to catch it, just return 1 in the catch clause, which is the error exit code.
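A sketch of that variant (the rest of run() unchanged):

// Catch the exception locally and map it to the error exit code.
try {
    if (job.waitForCompletion(true))
        return 0;
    else
        return 1;
} catch (ClassNotFoundException e) {
    e.printStackTrace();
    return 1; // error exit code
}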