Vert.x - Get deployment ID within currently running verticle

I'm looking for the deployment ID for the currently running verticle.
The goal is to allow a verticle to undeploy itself. I currently pass the deploymentID into the deployed verticle over the event bus to accomplish this, but would prefer some direct means of access.
container.undeployVerticle(deploymentID)

There are two ways you can get the deployment ID. If you have some verticle that starts and handles all the module deployments, you can pass an async result handler to the deploy call and read the deployment ID from the result, or you can get the platform manager out of the container using reflection.
With an async handler it looks like this:
container.deployVerticle("foo.ChildVerticle", new AsyncResultHandler<String>() {
    public void handle(AsyncResult<String> asyncResult) {
        if (asyncResult.succeeded()) {
            System.out.println("The verticle has been deployed, deployment ID is " + asyncResult.result());
        } else {
            asyncResult.cause().printStackTrace();
        }
    }
});
Access the platform manager as follows:
protected final PlatformManagerInternal getManager() {
    try {
        Container container = getContainer();
        Field f = DefaultContainer.class.getDeclaredField("mgr");
        f.setAccessible(true);
        return (PlatformManagerInternal) f.get(container);
    } catch (Exception e) {
        e.printStackTrace();
        throw new ScriptException("Could not access verticle manager");
    }
}

protected final Map<String, Deployment> getDeployments() {
    try {
        PlatformManagerInternal mgr = getManager();
        Field d = DefaultPlatformManager.class.getDeclaredField("deployments");
        d.setAccessible(true);
        return Collections.unmodifiableMap((Map<String, Deployment>) d.get(mgr));
    } catch (Exception e) {
        throw new ScriptException("Could not access deployments");
    }
}
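Stripped of the Vert.x types, the private-field trick above is plain reflection; a minimal stdlib sketch (the class and field names here are illustrative, not Vert.x internals):

```java
import java.lang.reflect.Field;

final class PrivateFieldReader {
    // Reads a private instance field by name from the given object,
    // lifting the access check first.
    static Object read(Object target, String fieldName) throws Exception {
        Field f = target.getClass().getDeclaredField(fieldName);
        f.setAccessible(true); // bypass the private modifier
        return f.get(target);
    }
}
```

As with the code above, this breaks silently if the internal field is ever renamed, which is why both helpers wrap the lookup in a catch-all.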
References:
http://grepcode.com/file/repo1.maven.org/maven2/io.vertx/vertx-platform/2.1.2/org/vertx/java/platform/impl/DefaultPlatformManager.java#DefaultPlatformManager.genDepName%28%29
https://github.com/crashub/mod-shell/blob/master/src/main/java/org/vertx/mods/VertxCommand.java
http://vertx.io/core_manual_java.html#deploying-a-module-programmatically

Somewhere in your desired verticle use this:
context.deploymentID();

Related

@KafkaListener: behavior and tracking processing of events

We are using spring-kafka 2.3.0 in our app and have observed some processing glitches in the scenarios below:
@Service
@EnableScheduling
public class KafkaService {
    public void sendToKafkaProducer(String data) {
        kafkaTemplate.send(configuration.getProducer().getTopicName(), data);
    }

    @KafkaListener(id = "consumer_grpA_id",
            topics = "#{__listener.getEnvironmentConfiguration().getConsumer().getTopicName()}",
            groupId = "consumer_grpA", autoStartup = "false")
    public void onMessage(ConsumerRecord<String, String> data) throws Exception {
        passA(data);
    }

    private void passB(String message) {
        // counter to keep track of retry attempts
        if (counter.containsKey(message.getEventID())) {
            // RETRY_COUNT = 5
            if (counter.get(message.getEventID()) < RETRY_COUNT) {
                retryAgain(message);
            }
        } else {
            firstRetryPass(message);
        }
    }

    private void retryAgain(String message) {
        counter.put(message.getEventID(), counter.get(message.getEventID()) + 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void firstRetryPass(String message) {
        // first-time entry for count and time
        counter.put(message.getEventID(), 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void passA(String message) {
        try {
            passToTarget(message); // call target processor
            LOGGER.info("Message Processed Successfully to the target");
        } catch (Exception e) {
            targetUnavailable = true;
            passB(message);
        }
    }

    private void passToTarget(String message) {
        // processor logic; if the target is not available, retry after 15 mins (passB)
    }

    @Scheduled(cron = "0 0/15 * 1/1 * ?")
    public void scheduledMethod() {
        try {
            if (targetUnavailable) {
                registry.start();
                firstTimeStart = false;
            }
            LOGGER.info(">>>Scheduler Running ?>>>" + registry.isRunning());
        } catch (Exception e) {
            LOGGER.error(e.getMessage());
        }
    }
}
On receipt of the first message after a gap in processing, the consumer doesn't pick up that first message; the subsequent messages are processed.
As we don't have direct access to the Kafka topics, we aren't able to identify which events weren't picked up by the consumer.
How do we track the events that are not picked up, and why does this happen?
We also configured a scheduler whose job is to keep the Kafka listener registry running. Is this scheduler required when we already have a listener configured?
What are the memory and CPU utilization implications of keeping the listener running? That was one of the reasons we used the registry to stop the listener explicitly whenever the target is down, so we need to validate whether this approach is sustainable. My hunch is that it works against the basic design of the listener, whose main job is to keep listening for new events irrespective of the target's status.
You shouldn't stop the registry on the listener thread unless you use stop(Runnable) - otherwise there will be a deadlock and a delay since the container waits for the listener to exit.
Stopping the container (via the registry) won't actually take effect until any remaining records fetched by the last poll have been processed (unless you set max.poll.records=1).
When the listener exits normally, the record's offset will be committed so that record will not be redelivered on the next start.
You can use the ContainerStoppingErrorHandler for this use case. See here.
Throw an exception and the error handler will stop the container for you.
But that will stop the container on the first try.
If you want retries, use a SeekToCurrentErrorHandler and call the ContainerStoppingErrorHandler from the recoverer after retries are exhausted.
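A configuration sketch of that combination, assuming spring-kafka 2.3 and the listener id from the question; the backoff values and bean wiring are illustrative, not prescribed:

```java
// Retries each failed delivery 3 times, 15s apart; once retries are
// exhausted the recoverer stops the container, using the stop(Runnable)
// variant so the listener thread is not blocked waiting for itself.
@Bean
public SeekToCurrentErrorHandler errorHandler(KafkaListenerEndpointRegistry registry) {
    return new SeekToCurrentErrorHandler(
            (record, exception) -> registry.getListenerContainer("consumer_grpA_id")
                    .stop(() -> { /* stopped; the scheduler can restart it */ }),
            new FixedBackOff(15_000L, 3L));
}
```

The handler would then need to be set on the listener container factory (e.g. via factory.setErrorHandler(...)) so the @KafkaListener picks it up.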

Service is not running on some devices and unable to stop a running service

I have a location service implemented in my app that runs every x minutes.
The service is started when the user logs in; whenever the app is killed and reopened, I check in OnStart() to see whether the service is running.
In App.xaml.cs,
Code to stop service:
DependencyService.Get<IDeviceService>().StopLocationService();
Code to check if the service is running:
bool IsLocationServiceRunning = DependencyService.Get<IServiceRunning>().IsServiceRunning();
In DeviceService.cs:
public void StopLocationService()
{
    try
    {
        Android.App.Application.Context.StopService(new Intent(Android.App.Application.Context, typeof(LocationService)));
    }
    catch (Exception ex)
    {
        Utility.LogMessage("Error in StopLocationService(M): " + ex.Message, LogMessageType.error);
    }
}
ServiceRunning.cs:
public bool IsServiceRunning()
{
    try
    {
        ActivityManager manager = (ActivityManager)Forms.Context.GetSystemService(Context.ActivityService);
        Type serviceClass = typeof(LocationService);
        foreach (var service in manager.GetRunningServices(int.MaxValue))
        {
            if (service.Service.ClassName.EndsWith(typeof(LocationService).Name))
            {
                return true;
            }
        }
    }
    catch (Exception ex)
    {
        Utility.LogMessage("Error in IsServiceRunning(M): ", LogMessageType.error);
        Utility.LogError(ex, LogMessageType.error);
    }
    return false;
}
First I stop the service, but when I then check IsServiceRunning, it still returns true.
Any idea how to stop a service?
I need to stop the service and start it again in some scenarios.
Thanks

Occasional UnknownHostException for a service within Kubernetes

I have a Kubernetes cluster set up on AWS. When I make a call to elasticsearch-client.default.svc.cluster.local from a pod, I occasionally get an UnknownHostException. It must have something to do with name resolution, because hitting the service IP directly works fine.
Note: I already have the kube-dns autoscaler enabled, and I manually tried with as many as 6 kube-dns pods, so I don't think it is because of DNS pod scaling.
When I set the kube-dns ConfigMap's upstream nameserver values to the Google nameservers (8.8.8.8 and 8.8.4.4), I don't get the issue. I assume that is because of API rate limiting done by AWS on Route 53, but I don't know why name-resolution requests would go to the AWS nameservers at all.
Here's a good write-up that may be related to your problems, also check this one out by Weaveworks.
Basically there have been a number of issues during the last year created at the GitHub Kubernetes issue tracker that has to do with various DNS latencies/problems from within a cluster.
Worth mentioning, although not a fix for every DNS-related problem: CoreDNS has been generally available since Kubernetes 1.11 and is (or will be) the default DNS add-on, replacing kube-dns.
Here's a couple of issues that might be related to the problem you're experiencing:
#47142
#45976
#56903
Hopefully this may help you moving forward.
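If the failures match the conntrack race described in the Weaveworks write-up above, forcing the resolver to open a fresh socket per lookup sometimes helps; a pod-spec fragment to try (assuming a glibc-based image, since musl ignores this resolver option):

```yaml
spec:
  dnsConfig:
    options:
      - name: single-request-reopen
```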
I also faced a similar issue with my custom Kubernetes cluster, MySQL, and Solr. The kube-dns checks suggested by the tutorial on the official site (https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/) were all fine, and I had to apply the following retry logic for the data source and the Solr client:
...
import org.apache.commons.dbcp.BasicDataSource;
...
public class CommunicationSafeDataSource extends BasicDataSource {
    private static final Logger LOGGER = LoggerFactory.getLogger(CommunicationSafeDataSource.class);

    @Override
    public Connection getConnection() throws SQLException {
        for (int i = 1; i <= 10; i++) {
            try {
                return super.getConnection();
            } catch (Exception e) {
                if ((e instanceof CommunicationsException) || (e.getCause() instanceof CommunicationsException)) {
                    LOGGER.warn("Communication exception occurred, retry " + i);
                    try {
                        Thread.sleep(i * 1000);
                    } catch (InterruptedException ie) {
                        //
                    }
                } else {
                    throw e;
                }
            }
        }
        throw new IllegalStateException("Cannot get connection");
    }
}
...
import org.apache.solr.client.solrj.impl.HttpSolrClient;
...
public class CommunicationSafeSolrClient extends HttpSolrClient {
    private static final Logger LOGGER = LoggerFactory.getLogger(CommunicationSafeSolrClient.class);

    protected CommunicationSafeSolrClient(Builder builder) {
        super(builder);
    }

    @Override
    protected NamedList<Object> executeMethod(HttpRequestBase method, ResponseParser processor, boolean isV2Api)
            throws SolrServerException {
        for (int i = 1; i <= 10; i++) {
            try {
                return super.executeMethod(method, processor, isV2Api);
            } catch (Exception e) {
                if ((e instanceof UnknownHostException) || (e.getCause() instanceof UnknownHostException)
                        || (e instanceof ConnectException) || (e.getCause() instanceof ConnectException)) {
                    LOGGER.warn("Communication exception occurred, retry " + i);
                    try {
                        Thread.sleep(i * 1000);
                    } catch (InterruptedException ie) {
                        //
                    }
                } else {
                    throw e;
                }
            }
        }
        throw new IllegalStateException("Cannot execute method");
    }
}
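The two wrappers above share the same retry loop; factored into a generic helper it might look like this (a stdlib-only sketch; the class and method names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

final class RetryingCall {
    // Runs the action, retrying up to maxAttempts times while the failure
    // (or its cause) matches isTransient; sleeps attempt * delayMillis
    // between tries and rethrows non-transient failures immediately.
    static <T> T run(Callable<T> action, Predicate<Throwable> isTransient,
                     int maxAttempts, long delayMillis) throws Exception {
        Throwable last = null;
        for (int i = 1; i <= maxAttempts; i++) {
            try {
                return action.call();
            } catch (Exception e) {
                Throwable cause = (e.getCause() != null) ? e.getCause() : e;
                if (!isTransient.test(e) && !isTransient.test(cause)) {
                    throw e; // not a communication problem: fail fast
                }
                last = e;
                Thread.sleep((long) i * delayMillis); // linear backoff
            }
        }
        throw new IllegalStateException("Retries exhausted", last);
    }
}
```

With such a helper, getConnection() above could reduce to something like RetryingCall.run(super::getConnection, t -> t instanceof CommunicationsException, 10, 1000L).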

Stopping a Windows Service in the event of a critical error

I have a Windows Service which basically wraps a task:
public partial class Service : ServiceBase
{
    private Task task;
    private CancellationTokenSource cancelToken;

    public Service()
    {
        InitializeComponent();
        this.task = null;
        this.cancelToken = null;
    }

    protected override void OnStart(string[] args)
    {
        var svc = new MyServiceTask();
        this.cancelToken = new CancellationTokenSource();
        this.task = svc.RunAsync(cancelToken.Token);
        this.task.ContinueWith(t => this.OnUnhandledException(t.Exception), TaskContinuationOptions.OnlyOnFaulted);
    }

    protected override void OnStop()
    {
        if (this.task != null)
        {
            this.cancelToken.Cancel();
            this.task.Wait();
        }
    }

    private void OnUnhandledException(Exception ex)
    {
        this.EventLog.WriteEntry(string.Format("Unhandled exception: {0}", ex), EventLogEntryType.Error);
        this.task = null;
        this.Stop();
    }
}
As you can see, the service can catch unhandled exceptions. If this happens, the exception is logged and the service is stopped. This has the effect of writing two messages to the event log - one error stating there was an unhandled exception, and another stating that the service was successfully stopped.
This may sound minor, but I'm hoping to be able to suppress the 'successfully stopped' message. I find it misleading - it suggests that the service stopping was a normal occurrence. Is there another way I can force the service to stop itself without this message being logged?

creating and writing file in worker role

I'm writing the code below to create and write a text file on the C: drive from a worker role.
It works fine on my local machine but fails on the cloud service.
public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.TraceInformation("Trail entry point called", "Information");
    try
    {
        while (true)
        {
            TextWriter tsw = new StreamWriter(@"C:\Hello.txt", true);
            tsw.WriteLine("Hi DA Testesting");
            tsw.Close();
            Thread.Sleep(10000);
            Trace.TraceInformation("Working", "Information");
        }
    }
    catch (Exception ex)
    {
        TextWriter tsw = new StreamWriter(@"C:\Hello.txt", true);
        tsw.WriteLine(ex.Message);
        tsw.Close();
    }
}