How to enumerate all partitions and aggregate results - azure-service-fabric

I have a stateful service with multiple partitions. How can I enumerate all of its partitions and aggregate results, using service remoting for communication between the client and the service?

You can enumerate the partitions using FabricClient:
var serviceName = new Uri("fabric:/MyApp/MyService");
using (var client = new FabricClient())
{
    var partitions = await client.QueryManager.GetPartitionListAsync(serviceName);
    foreach (var partition in partitions)
    {
        Debug.Assert(partition.PartitionInformation.Kind == ServicePartitionKind.Int64Range);
        var partitionInformation = (Int64RangePartitionInformation)partition.PartitionInformation;
        var proxy = ServiceProxy.Create<IMyService>(serviceName, new ServicePartitionKey(partitionInformation.LowKey));
        // TODO: call service
    }
}
Note that you should probably cache the results of GetPartitionListAsync since service partitions cannot be changed without recreating the service (you can just keep a list of the LowKey values).
In addition, FabricClient should also be shared as much as possible (see the documentation).
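Putting it together, a minimal sketch of the aggregation could look like the following: collect the LowKey values once, then fan out one remoting call per partition and combine the results. GetCountAsync is a hypothetical method on IMyService, used only for illustration:
var serviceName = new Uri("fabric:/MyApp/MyService");

// Collect the LowKey of every partition once and cache the list.
List<long> lowKeys;
using (var client = new FabricClient())
{
    var partitions = await client.QueryManager.GetPartitionListAsync(serviceName);
    lowKeys = partitions
        .Select(p => ((Int64RangePartitionInformation)p.PartitionInformation).LowKey)
        .ToList();
}

// Query every partition in parallel and aggregate the per-partition results.
var tasks = lowKeys
    .Select(lowKey => ServiceProxy
        .Create<IMyService>(serviceName, new ServicePartitionKey(lowKey))
        .GetCountAsync())
    .ToList();

long total = (await Task.WhenAll(tasks)).Sum();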

Related

Hazelcast AtomicLong data loss when multiple members leave

Hazelcast loses data when multiple members disconnect from the cluster.
My scenario is basic and my configuration has a backup count of 3 (it does not work). I have 4 members in a cluster and I use the AtomicLong API to save my key->value pairs. When all members are alive, everything is perfect. However, some data is lost when I kill 2 members at the same time (without waiting in between). My member count is always 4. Is there any way to avoid this kind of data loss?
Config config = new Config();
NetworkConfig network = config.getNetworkConfig();
network.setPort(DistributedCacheData.getInstance().getPort());
config.getCacheConfig("default").setBackupCount(3);
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().setEnabled(true);
config.setNetworkConfig(network);
config.setInstanceName("member-name-here");
Thanks.
IAtomicLong has a hard-coded 1 sync backup; you cannot configure it to have more than 1 backup. What you are doing is configuring the Cache with 3 backups.
Below is an example that demonstrates multiple members disconnecting, using an IMap:
Config config = new Config();
config.getMapConfig("myMap").setBackupCount(3);
HazelcastInstance[] instances = {
    Hazelcast.newHazelcastInstance(config),
    Hazelcast.newHazelcastInstance(config),
    Hazelcast.newHazelcastInstance(config),
    Hazelcast.newHazelcastInstance(config)
};
IMap<Integer, Integer> myMap = instances[0].getMap("myMap");
for (int i = 0; i < 1000; i++) {
    myMap.set(i, i);
}
System.out.println(myMap.size());
instances[1].getLifecycleService().terminate();
instances[2].getLifecycleService().terminate();
System.out.println(myMap.size());
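With the backup count set to 3, each entry should be replicated to all four members, so both size() calls should print 1000 even though two members are terminated at once.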

Spark Structured Streaming: Running Kafka consumer on separate worker thread

So I have a Spark application that needs to read two streams from two Kafka clusters (Kafka A and Kafka B) using Structured Streaming, and do some joins and filtering on the two streams. Is it possible to have a Spark job that reads the stream from A, and also run a thread (called the consumer) on each worker that reads Kafka B and puts the data into a map? Later, when filtering, we could do something like stream.filter(row => consumer.idNotInMap(row.id)).
I have some questions regarding this approach:
If this approach works, will it cause any problems when the application is run on a cluster?
Will all consumer instances on each worker receive the same data in cluster mode? Or can we even let each consumer listen only to the Kafka partition for that worker node (which is probably controlled by Spark)?
How will the consumer instance get serialized and passed to workers?
Currently they are initialized on the driver node, but are there ways to initialize one per worker node?
I feel like in my case I should use stream joining instead. I've already tried that and it didn't work, which is why I am taking this approach. It didn't work because the stream from Kafka A is append-only, while stream B needs state that can be updated, which makes it update-only, and joining streams in append and update output modes is not supported in Spark.
Here is some pseudo-code:
// SparkJob.scala
val consumer = new Consumer()
val getMetadata = udf((id: Int) => consumer.get(id))
val enrichedDataSet = stream.withColumn("metadata", getMetadata(stream("id")))
// Consumer.java
class Consumer implements Serializable {
    private final ConcurrentHashMap<Integer, String> metadata;

    public Consumer() {
        metadata = new ConcurrentHashMap<>();
        // start reading the stream
        listen();
    }

    // process Kafka data inside this loop
    private void listen() {
        Thread t = new Thread(() -> {
            KafkaConsumer consumer = ...;
            while (consumer.hasNext()) {
                var message = consumer.next();
                // update existing metadata or put in new metadata
                metadata.put(message.id, message);
            }
        });
        t.start();
    }

    public String get(Integer key) {
        return metadata.get(key);
    }
}

Loading millions of rows into a partitioned stateful service

I'm trying to load 20 million rows into a ReliableDictionary in a partitioned stateful service. I partitioned the stateful service into 10 partitions. Based on the MSDN documentation, I understood that I need to use some hashing algorithm to find the correct partition and send the data to it to load into the IReliableDictionary, so I used Hydra to get the partition number based on the value. All I'm storing is a List<long> in the IReliableDictionary.
So I created a stateless service as a wrapper, which will:
fetch the rows from SQL Server (20 million),
get the partition number using Hydra for each row,
group them by partition number, and
call the stateful service for each partition using service remoting. However, I get a "fabric message too large" exception if I send 1 million rows of data per request, so I chunked it into 100,000 per request.
This is taking 74 minutes to complete, which is too long. Below is the code for uploading; please advise.
foreach (var itemKvp in ItemsDictionary)
{
    var dataStoreUri = new Uri("fabric:/TestApp/dataservice");
    // Insert into the correct shard based on the hash algorithm
    var dataService = _serviceProxyFactory.CreateServiceProxy<IDataService>(
        dataStoreUri,
        new ServicePartitionKey(itemKvp.Key), TargetReplicaSelector.PrimaryReplica, "dataServiceRemotingListener");
    var itemsShard = itemKvp.Value;
    // if the total record count is greater than 1,000,000 then send it in chunks
    if (itemsShard.Count > 1_000_000)
    {
        //var tasks = new List<Task>();
        var totalCount = itemsShard.Count;
        var pageSize = 100000;
        var page = 1;
        var skip = 0;
        while (skip < totalCount)
        {
            await dataService.InsertData(itemsShard.Skip(skip).Take(pageSize).ToList());
            page++;
            skip = pageSize * (page - 1);
        }
    }
    else
    {
        // otherwise send all together
        await dataService.InsertData(itemsShard);
    }
}
You can likely save some time here, by uploading to all partitions in parallel.
So create 10 service proxies (one for each partition) and use them simultaneously.
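A rough sketch of what that could look like, reusing the per-partition grouping and the hypothetical IDataService.InsertData call from the question (an outline only, not a tuned implementation):
// Fan out one upload per partition and await them all together.
var uploadTasks = ItemsDictionary.Select(async itemKvp =>
{
    var dataService = _serviceProxyFactory.CreateServiceProxy<IDataService>(
        new Uri("fabric:/TestApp/dataservice"),
        new ServicePartitionKey(itemKvp.Key),
        TargetReplicaSelector.PrimaryReplica,
        "dataServiceRemotingListener");

    const int pageSize = 100_000;
    var itemsShard = itemKvp.Value;

    // Keep the 100,000-row chunks to stay under the Fabric message size limit.
    for (var skip = 0; skip < itemsShard.Count; skip += pageSize)
    {
        await dataService.InsertData(itemsShard.Skip(skip).Take(pageSize).ToList());
    }
}).ToList();

await Task.WhenAll(uploadTasks);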

Parallel.ForEach and BulkCopy

I have a C# library which connects to 59 servers with the same database structure and imports data into the same table in my local DB. At the moment I am retrieving data server by server in a foreach loop:
foreach (var systemDto in systems)
{
    var sourceConnectionString = _systemService.GetConnectionStringAsync(systemDto.Ip).Result;
    var dbConnectionFactory = new DbConnectionFactory(sourceConnectionString,
        "System.Data.SqlClient");
    var dbContext = new DbContext(dbConnectionFactory);
    var storageRepository = new StorageRepository(dbContext);
    var usedStorage = storageRepository.GetUsedStorageForCurrentMonth();
    var dtUsedStorage = new DataTable();
    dtUsedStorage.Load(usedStorage);
    var dcIp = new DataColumn("IP", typeof(string)) { DefaultValue = systemDto.Ip };
    var dcBatchDateTime = new DataColumn("BatchDateTime", typeof(string))
    {
        DefaultValue = batchDateTime
    };
    dtUsedStorage.Columns.Add(dcIp);
    dtUsedStorage.Columns.Add(dcBatchDateTime);
    using (var blkCopy = new SqlBulkCopy(destinationConnectionString))
    {
        blkCopy.DestinationTableName = "dbo.tbl";
        blkCopy.WriteToServer(dtUsedStorage);
    }
}
Because there are many systems to retrieve data from, I wonder if it is possible to use a Parallel.ForEach loop. Will BulkCopy lock the table during WriteToServer, so that the next WriteToServer has to wait until the previous one completes?
-- EDIT 1
I've changed the foreach to Parallel.ForEach, but I face one problem. Inside this loop I call an async method, _systemService.GetConnectionStringAsync(systemDto.Ip),
and this line throws an error:
System.NotSupportedException: A second operation started on this
context before a previous asynchronous operation completed. Use
'await' to ensure that any asynchronous operations have completed
before calling another method on this context. Any instance members
are not guaranteed to be thread safe.
Any ideas how I can handle this?
In general, it will get blocked and will wait until the previous operation completes.
There are several factors that affect whether SqlBulkCopy can be run in parallel or not.
I remember that when I added the parallel feature to my .NET Bulk Operations library, I had a hard time making it work correctly in parallel, and it only worked well when the table had no index (which is almost never the case).
Even when it worked, the performance gain was not that large.
Perhaps you will find more information here: MSDN - Importing Data in Parallel with Table Level Locking
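If you still want to try the parallel route, one way to sidestep the DbContext exception from the edit is to resolve all connection strings sequentially first (the service's context is not safe for concurrent calls) and only then parallelize the per-server extract and bulk copy. This is just a sketch built from the types and helpers shown in the question:
// Resolve the connection strings up front, outside the parallel loop.
var connectionStrings = new Dictionary<string, string>();
foreach (var systemDto in systems)
{
    connectionStrings[systemDto.Ip] =
        await _systemService.GetConnectionStringAsync(systemDto.Ip);
}

Parallel.ForEach(systems, systemDto =>
{
    // Each iteration gets its own context, repository and SqlBulkCopy instance.
    var dbConnectionFactory = new DbConnectionFactory(
        connectionStrings[systemDto.Ip], "System.Data.SqlClient");
    var dbContext = new DbContext(dbConnectionFactory);
    var storageRepository = new StorageRepository(dbContext);

    var dtUsedStorage = new DataTable();
    dtUsedStorage.Load(storageRepository.GetUsedStorageForCurrentMonth());
    dtUsedStorage.Columns.Add(new DataColumn("IP", typeof(string)) { DefaultValue = systemDto.Ip });
    dtUsedStorage.Columns.Add(new DataColumn("BatchDateTime", typeof(string)) { DefaultValue = batchDateTime });

    using (var blkCopy = new SqlBulkCopy(destinationConnectionString))
    {
        blkCopy.DestinationTableName = "dbo.tbl";
        blkCopy.WriteToServer(dtUsedStorage);
    }
});
Whether the parallel bulk copies actually overlap on the SQL Server side still depends on the locking behavior described above.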

Using JMX to monitor a Kafka topic

I am using JMX to monitor a Kafka topic.
val url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker1:9393/jmxrmi");
val jmxc = JMXConnectorFactory.connect(url, null);
val mbsc = jmxc.getMBeanServerConnection();
val messageCountObj = new ObjectName("kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic");
val messagesInPerSec = mbsc.getAttribute(messageCountObj,"MeanRate")
Using this code I can get the MeanRate of "mytopic" on broker1,
but I have 10 brokers. How can I get the MeanRate for "mytopic" from all my brokers?
I have tried "service:jmx:rmi:///jndi/rmi://broker1:9393,broker2:9393,broker3:9393/jmxrmi"
and got an error :(
It would be nice if it were that simple ;)
There's no way to do this as you outlined. You will need to make a separate connection to each broker.
One possible solution would be to use MBeanServer federation, which would register proxies for each of your brokers in one MBeanServer. If you did this on broker1, you could connect to service:jmx:rmi:///jndi/rmi://broker1:9393/jmxrmi and query the stats for all your brokers in one go, but you would need to query 10 different ObjectNames, read the value for each and then compute the mean rate yourself. [Java] Pseudo code:
ObjectName wildcard = new ObjectName("*:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic");
double totalRate = 0d;
int respondingBrokers = 0;
for (ObjectName on : mbsc.queryNames(wildcard, null)) {
    totalRate += (Double) mbsc.getAttribute(on, "MeanRate");
    respondingBrokers++;
}
// Average of the per-broker mean rates: totalRate / respondingBrokers
Note: no exception handling, and I am assuming the rate type is a Double.
You could also create and register a custom MBean that computed the aggregate mean on the federated broker.
If you are Maven-oriented, you can build the OpenDMK from here.