TitanGraph slows down on commit() - titan

I am creating a Titan graph through Java, with DynamoDB Local as the backend. My problem is that graph.tx().commit() takes a very long time when I am creating 400+ vertices in the graph: the commit() takes about 10-15 minutes.
My Titan configuration is as follows:
BaseConfiguration conf = new BaseConfiguration();
conf.setProperty("storage.backend", "com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager");
conf.setProperty("storage.dynamodb.client.endpoint", "http://localhost:4567");
conf.setProperty("storage.batch-loading ", "true");
conf.setProperty("storage.buffer-size", "10000");
conf.setProperty("cache.db-cache", "true");
conf.setProperty("index.search.backend", "elasticsearch");
conf.setProperty("index.search.directory", "/tmp/searchindex");
conf.setProperty("index.search.elasticsearch.client-only", "false");
conf.setProperty("index.search.elasticsearch.local-mode", "true");
TitanGraph graph = TitanFactory.open(conf);
Please suggest a solution for improving Titan's performance.
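For illustration only (this is not from the original post): a minimal sketch of committing in smaller batches instead of one large transaction, a common first step when a single commit() over many vertices is slow. It assumes Titan 1.0 / TinkerPop 3 (Vertex is org.apache.tinkerpop.gremlin.structure.Vertex); the batch size of 100, the "item" label, and the "index" property are arbitrary assumptions.
TitanGraph graph = TitanFactory.open(conf);
final int batchSize = 100;                   // arbitrary; tune for the backend
for (int i = 0; i < 400; i++) {
    Vertex v = graph.addVertex("item");      // "item" is a placeholder label
    v.property("index", i);
    if ((i + 1) % batchSize == 0) {
        graph.tx().commit();                 // flush the mutation buffer periodically
    }
}
graph.tx().commit();                         // commit the remainder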

Related

FileSink in Apache Flink not generating logs in output folder

I am using Apache Flink to read data from a Kafka topic and store it in files on the server. I am using FileSink to store the files; it creates the directory structure date- and time-wise, but no log files are getting created.
When I run the program it creates the directory structure as below, but the log files are not stored there.
/flink/testlogs/2021-12-08--07
/flink/testlogs/2021-12-08--06
I want the log files to be written to a new log file every 15 minutes.
Below is the code.
DataStream<String> kafkaTopicData = env.addSource(new FlinkKafkaConsumer<String>("MyTopic", new SimpleStringSchema(), p));

OutputFileConfig config = OutputFileConfig
        .builder()
        .withPartPrefix("prefix")
        .withPartSuffix(".ext")
        .build();

DataStream<Tuple6<String, String, String, String, String, Integer>> newStream = kafkaTopicData.map(new LogParser());

final FileSink<Tuple6<String, String, String, String, String, Integer>> sink = FileSink
        .forRowFormat(new Path("/flink/testlogs"),
                new SimpleStringEncoder<Tuple6<String, String, String, String, String, Integer>>("UTF-8"))
        .withRollingPolicy(DefaultRollingPolicy.builder()
                .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))
                .withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
                .withMaxPartSize(1024 * 1024 * 1024)
                .build())
        .withOutputFileConfig(config)
        .build();

newStream.sinkTo(sink);
env.execute("DataReader");
LogParser returns Tuple6.
When used in streaming mode, Flink's FileSink requires that checkpointing be enabled. To do this, you need to specify where you want checkpoints to be stored, and at what interval you want them to occur.
To configure this in flink-conf.yaml, you would do something like this:
state.checkpoints.dir: s3://checkpoint-bucket
execution.checkpointing.interval: 10s
Or in your application code you can do this:
env.getCheckpointConfig().setCheckpointStorage("s3://checkpoint-bucket");
env.enableCheckpointing(10000L);
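For completeness, a rough sketch of where these calls would sit in the job from the question (the 10-second interval and the s3:// path are only examples; any durable checkpoint directory works):
// checkpointing can be enabled at any point before env.execute()
env.enableCheckpointing(10000L);                                      // e.g. every 10 seconds
env.getCheckpointConfig().setCheckpointStorage("s3://checkpoint-bucket");
// ... build kafkaTopicData, newStream and the FileSink exactly as above ...
env.execute("DataReader");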
Another important detail from the docs:
Given that Flink sinks and UDFs in general do not differentiate between normal job termination (e.g. finite input stream) and termination due to failure, upon normal termination of a job, the last in-progress files will not be transitioned to the “finished” state.

Kafka Streams stops if I use persistentKeyValueStore but works fine with inMemoryKeyValueStore (running in Docker container)

I'm obviously a beginner with Kafka / Kafka Streams. I just need to read given messages from a few topics, given their IDs. While our actual topology is fairly complex, this Streams app just needs to achieve this single, simple goal.
This is how the store is created:
final StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.table(
        topic,
        Materialized.<String, String>as(persistentKeyValueStore(storeNameOf(topic)))
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String())
                .withCachingDisabled()
        // Materialized.<String, String>as(inMemoryKeyValueStore(storeNameOf(topic)))
        //         .withKeySerde(Serdes.String())
        //         .withValueSerde(Serdes.String())
        //         .withCachingDisabled()
);
KafkaStreams kafkaStreams = new KafkaStreams(streamsBuilder.build(), new Properties() {{ /* config items go here */ }});
kafkaStreams.start();
// logic for awaiting kafkaStreams to reach the RUNNING state, as well as InvalidStateStoreException handling (by retrying), is omitted for simplicity:
ReadOnlyKeyValueStore<String, String> replyStore = kafkaStreams.store(storeNameOf(topicName), QueryableStoreTypes.keyValueStore());
So, when using the commented-out inMemoryKeyValueStore materialization, replyStore is successfully created and I can query the values in it without a problem.
With persistentKeyValueStore, the last line fails with java.lang.IllegalStateException: KafkaStreams is not running. State is ERROR. Note that I do check that KafkaStreams is in the RUNNING state before the store call; the ERROR state is somehow reached during the call itself.
Do you think I might have missed something when setting up the persistent store? Debugging hints would also help greatly; I'm quite stuck here, I must confess.
Thanks!
Edit: The execution happens inside a Docker container. This is quite relevant, but I initially omitted to mention it.
As Matthias J. Sax pointed out in a comment, registering an uncaughtExceptionHandler helped greatly in debugging the problem.
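For reference, a minimal sketch of such a registration (this assumes the classic Thread.UncaughtExceptionHandler overload; newer Kafka Streams releases also offer a StreamsUncaughtExceptionHandler variant):
// register before start() so stream-thread failures are surfaced
// instead of the client silently flipping into the ERROR state
kafkaStreams.setUncaughtExceptionHandler((thread, throwable) ->
        System.err.println("Stream thread " + thread.getName() + " died: " + throwable));
kafkaStreams.start();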
The actual issue was an incompatibility between RocksDB and the Docker image I was using (so I changed from openjdk:8-jdk-alpine to anapsix/alpine-java:8).
Related:
https://issues.apache.org/jira/browse/KAFKA-4988
UnsatisfiedLinkError: /tmp/snappy-1.1.4-libsnappyjava.so Error loading shared library ld-linux-x86-64.so.2: No such file or directory

How to run periodic tasks in an Apache Storm topology?

I have an Apache Storm topology and would like to perform a certain action every once in a while. I'm not sure how to approach this in a way which would be natural and elegant.
Should it be a Bolt or a Spout using ScheduledExecutorService, or something else?
Tick tuples are a decent option https://kitmenke.com/blog/2014/08/04/tick-tuples-within-storm/
Edit: Here's the essential code for your bolt
@Override
public Map<String, Object> getComponentConfiguration() {
    // configure how often a tick tuple will be sent to our bolt
    Config conf = new Config();
    conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 300);
    return conf;
}
Then you can use TupleUtils.isTick(tuple) in execute to check whether the received tuple is a tick tuple.
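For example, a minimal sketch of the execute side (doPeriodicWork() is just a placeholder for whatever the periodic action is):
@Override
public void execute(Tuple tuple) {
    if (TupleUtils.isTick(tuple)) {
        // fires roughly every TOPOLOGY_TICK_TUPLE_FREQ_SECS seconds
        doPeriodicWork();          // placeholder for the periodic action
    } else {
        // normal tuple processing goes here
    }
}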
I don't know if this is a correct approach, but it seems to be working fine:
At the end of the prepare method of a Bolt, I added a call to initScheduler(), which contains the following code:
Calendar calendar = Calendar.getInstance();
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(
        new PeriodicAction(),           // class implementing Runnable
        millisToFullHour(calendar),     // initial delay: start at the top of the hour
        60 * 60 * 1000,                 // run every hour
        TimeUnit.MILLISECONDS);
This needs to be used with caution though, because the bolt can have multiple instances depending on your setup.

How to disable auto commit for SimpleConsumer kafka 0.8.1

I want to disable auto commit for the Kafka SimpleConsumer. I am using version 0.8.1. For the high-level consumer, config options can be set and passed via consumerConfig as follows:
kafka.consumer.Consumer.createJavaConsumerConnector(this.consumerConfig);
How can I achieve the same for SimpleConsumer? I mainly want to disable auto commit. I tried setting auto commit to false in consumer.properties and restarted the Kafka server, ZooKeeper, and the producer, but that did not work. I think I need to apply this setting through code, not in consumer.properties.
Can anyone help here?
Here is what my code looks like:
List<TopicAndPartition> topicAndPartitionList = new ArrayList<>();
topicAndPartitionList.add(topicAndPartition);

OffsetFetchResponse offsetFetchResponse = consumer.fetchOffsets(
        new OffsetFetchRequest("testGroup", topicAndPartitionList, (short) 0, correlationId, clientName));

Map<TopicAndPartition, OffsetMetadataAndError> offsets = offsetFetchResponse.offsets();

FetchRequest req = new FetchRequestBuilder()
        .clientId(clientName)
        .addFetch(a_topic, a_partition, offsets.get(topicAndPartition).offset(), 100000)
        .build();

long readOffset = offsets.get(topicAndPartition).offset();

FetchResponse fetchResponse = consumer.fetch(req);
// Consume messages from fetchResponse

Map<TopicAndPartition, OffsetMetadataAndError> requestInfo = new HashMap<>();
requestInfo.put(topicAndPartition, new OffsetMetadataAndError(readOffset, "metadata", (short) 0));
OffsetCommitResponse offsetCommitResponse = consumer.commitOffsets(
        new OffsetCommitRequest("testGroup", requestInfo, (short) 0, correlationId, clientName));
If the above code crashes before committing the offset, I still get the latest offset as the result of offsets.get(topicAndPartition).offset() in the next run, which makes me think that offsets are auto-committed as the code executes.
Using SimpleConsumer just means you want to take care of everything about consuming messages yourself, including offset commits, so no auto commit is supported by the low-level API.

OrientDB query way too slow

I can execute the query below just fine through the web interface. It takes virtually no time at all to finish.
SELECT from Person;
But when I try to do it from my Java application, it takes more than 17 seconds to finish.
The code I'm using is basically these lines:
OrientGraph graph = new OrientGraph("remote:93.x.x.x/test");
OCommandRequest req = graph.command(new OCommandSQL(query));
req.execute();
Could it be that REST requests are so much slower? The web interface uses plocal (I guess), while my Java app uses a remote connection.
Try running the same query from the console as well.
The time spent in the console should be about the same (just a little slower than in Java).
I did a test, inserting 100,000 vertices of class Person. Running the query several times, the response times are:
Studio = 7.72 s, Console = 2.043 s, Java = 1.23 to 1.41 s
If the times you get are very different, perhaps something is wrong on the Java side.
You have shown OCommandSQL; check with OSQLSynchQuery to see if there is a big difference.
String query = "";
Iterable<Vertex> result;
query = "select from Persona";
//query with OSQLSynchQuery
result = g.command(new OSQLSynchQuery<Vertex>(query)).execute();
List<OrientVertex> listVertex = new ArrayList<OrientVertex>();
CollectionUtils.addAll(listVertex, result.iterator());
//query with OCommandSQL
OCommandRequest req = g.command(new OCommandSQL(query));
req.execute();