Creating and writing a file in a worker role - azure-worker-roles

I'm writing the code below to create and write a text file on the C: drive from a worker role.
It works fine on my local machine but fails on the cloud service.
public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.TraceInformation("Trail entry point called", "Information");
    try
    {
        while (true)
        {
            TextWriter tsw = new StreamWriter(@"C:\Hello.txt", true);
            tsw.WriteLine("Hi DA Testesting");
            tsw.Close();
            Thread.Sleep(10000);
            Trace.TraceInformation("Working", "Information");
        }
    }
    catch (Exception ex)
    {
        TextWriter tsw = new StreamWriter(@"C:\Hello.txt", true);
        tsw.WriteLine(ex.Message);
        tsw.Close();
    }
}

Related

ConnectionFactory throwing errors when shared

I have a very simple application that adds messages to a queue and reads them using a MessageListener.
Edit: I was testing this on a single instance of Artemis that had been set up as part of a two-instance cluster on Docker.
I want to create the ConnectionFactory once and reuse it for all producers and consumers in the application.
I have created the ConnectionFactory and stored it in a static variable (singleton) so it can be accessed from anywhere.
The aim is that clients use this shared connection factory to create a new connection when required.
However, I have noticed that doing this causes a "Failed to create session factory" error when trying to create a new connection.
javax.jms.JMSException: Failed to create session factory
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:886)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:299)
at com.test.artemistest.jms.QueueTest2.getMessagesFromQueue(QueueTest2.java:137)
at com.test.artemistest.jms.QueueTest2.access$000(QueueTest2.java:61)
at com.test.artemistest.jms.QueueTest2$1.run(QueueTest2.java:75)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:690)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:884)
If I create a connection factory per call this error does not occur.
Doing this seems very inefficient.
I've recreated a similar issue below.
If I create the connection factory in the main method the error occurs.
However if created just before use in a method it works as expected.
If I add two listeners the error occurs even though they are in separate threads. Could it be linked to the fact that the connections are not closed in the consumers but are in the producers?
Why is this the case and do you recommend sharing the connection factory?
Thanks
public class QueueTest2 {

    private static boolean shutdown = false;
    private static ConnectionFactory cf;

    public static void main(String[] args) {
        // uncomment below for error to occur
        // QueueTest2.getConnectionFactory("localhost", 61616);
        ExecutorService executor = Executors.newCachedThreadPool();
        executor.execute(new Runnable() {
            @Override
            public void run() {
                getMessagesFromQueue("localhost", 61616);
                while (!shutdown) {
                    try {
                        Thread.sleep(1000L);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                System.out.println("getMessagesFromQueue shutdown");
            }
        });
        addMessagesToQueue("localhost", 61616);
        // uncommenting below also causes the issue
        // executor.execute(new Runnable() {
        //     @Override
        //     public void run() {
        //         getMessagesFromQueue("localhost", 61616);
        //         while (!shutdown) {
        //             try {
        //                 Thread.sleep(1000L);
        //             } catch (InterruptedException e) {
        //                 e.printStackTrace();
        //             }
        //         }
        //         System.out.println("getMessagesFromQueue shutdown");
        //     }
        // });
        addMessagesToQueue("localhost", 61616);
        try {
            Thread.sleep(1000L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        shutdown = true;
        executor.shutdownNow();
    }

    private static void addMessagesToQueue(String host, int port) {
        ConnectionFactory cf2 = getConnectionFactory(host, port);
        Connection connection = null;
        Session sessionQueue = null;
        try {
            connection = cf2.createConnection("artemis", "password");
            connection.setClientID("Producer");
            sessionQueue = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            Queue orderQueue = sessionQueue.createQueue("exampleQueue");
            MessageProducer producerQueue = sessionQueue.createProducer(orderQueue);
            connection.start();
            // send 100 messages
            for (int i = 0; i < 100; i++) {
                TextMessage message = sessionQueue.createTextMessage("This is an order: " + i);
                producerQueue.send(message);
            }
        } catch (JMSException ex) {
            Logger.getLogger(QueueTest2.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            try {
                if (sessionQueue != null) {
                    sessionQueue.close();
                }
            } catch (JMSException ex) {
                Logger.getLogger(QueueTest2.class.getName()).log(Level.SEVERE, null, ex);
            }
            try {
                if (connection != null) {
                    connection.close();
                }
            } catch (JMSException ex) {
                Logger.getLogger(QueueTest2.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }

    private static void getMessagesFromQueue(String host, int port) {
        ConnectionFactory cf2 = getConnectionFactory(host, port);
        Connection connection2 = null;
        Session sessionQueue2;
        try {
            connection2 = cf2.createConnection("artemis", "password");
            connection2.setClientID("Consumer2");
            sessionQueue2 = connection2.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            Queue orderQueue = sessionQueue2.createQueue("exampleQueue");
            MessageConsumer consumerQueue = sessionQueue2.createConsumer(orderQueue);
            consumerQueue.setMessageListener(new MessageHandlerTest2());
            connection2.start();
            Thread.sleep(5000);
        } catch (JMSException ex) {
            Logger.getLogger(QueueTest2.class.getName()).log(Level.SEVERE, null, ex);
        } catch (InterruptedException ex) {
            Logger.getLogger(QueueTest2.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    private static ConnectionFactory getConnectionFactory(String host, int port) {
        if (cf == null) {
            Map<String, Object> connectionParams2 = new HashMap<String, Object>();
            connectionParams2.put(TransportConstants.PORT_PROP_NAME, port);
            connectionParams2.put(TransportConstants.HOST_PROP_NAME, host);
            TransportConfiguration transportConfiguration =
                    new TransportConfiguration(NettyConnectorFactory.class.getName(), connectionParams2);
            cf = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration);
        }
        return cf;
    }
}

class MessageHandlerTest2 implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            System.out.println("new message: " + ((TextMessage) message).getText());
            message.acknowledge();
        } catch (JMSException ex) {
            Logger.getLogger(MessageHandlerTest2.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
I've run your code, but I don't see any errors. My guess is that there may be a timing issue related to concurrency. Try adding synchronized to your getConnectionFactory method since it can theoretically be called concurrently by multiple threads in your application, e.g.:
private static synchronized ConnectionFactory getConnectionFactory(String host, int port)
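The hazard this addresses is the unsynchronized lazy initialization of the static `cf` field: two threads can both observe `cf == null` and create separate factories, and without synchronization the assignment may not even be visible to other threads. A minimal, library-free sketch of the same pattern, using a hypothetical `Factory` placeholder instead of the real Artemis `ConnectionFactory`:

```java
public class LazyFactoryHolder {
    // Hypothetical stand-in for the real ConnectionFactory, for illustration only.
    static final class Factory {
        final String host;
        final int port;
        Factory(String host, int port) { this.host = host; this.port = port; }
    }

    private static Factory factory;

    // synchronized ensures only one thread runs the null-check-and-create
    // sequence at a time, and makes the assignment visible to later callers.
    static synchronized Factory getFactory(String host, int port) {
        if (factory == null) {
            factory = new Factory(host, port);
        }
        return factory;
    }

    public static void main(String[] args) {
        Factory a = getFactory("localhost", 61616);
        Factory b = getFactory("localhost", 61616);
        System.out.println(a == b); // prints "true": every caller gets the same instance
    }
}
```

Because the method is static, `synchronized` here locks on the class object, which is exactly what adding `synchronized` to the question's static `getConnectionFactory` would do.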
I have found a solution that works in a clustered environment and on Docker.
It involves using the "pooled-jms" connection pool, something I had planned to use anyway.
Although it does not explain the issues I was seeing above, it is at least a workaround until I can investigate further.
The "WARN: AMQ212064: Unable to receive cluster topology" mentioned above appears to have been a red herring, as it went away as quickly as it appeared.

Hibernate Search call to SearchFactory optimize does not invoke immediately

I'm trying to call SearchFactory optimize from a scheduled index-maintenance job (compacting segments - the application is write intensive). But it does not seem to run until I shut down Tomcat. My code simply calls it like this:
public synchronized void optimizeIndexes() {
    getFullTextEntityManager().flushToIndexes(); // apply any pending changes before optimizing
    getFullTextEntityManager().getSearchFactory().optimize();
    logger.info("[Lucene] optimization has been performed on all the indexes...");
}
I got it to work by borrowing the IndexWriter from the HSearch backend.
private synchronized void optimizeBareMetal() {
    try {
        LuceneBackendQueueProcessor backend = (LuceneBackendQueueProcessor) getIndexManager().getBackendQueueProcessor();
        LuceneBackendResources resources = backend.getIndexResources();
        AbstractWorkspaceImpl workspace = resources.getWorkspace();
        IndexWriter indexWriter = workspace.getIndexWriter();
        indexWriter.forceMerge(1, true);
        indexWriter.commit();
    } catch (LockObtainFailedException e) {
        e.printStackTrace();
    } catch (CorruptIndexException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

private synchronized DirectoryBasedIndexManager getIndexManager() {
    SearchFactoryImplementor searchFactory = (SearchFactoryImplementor) getFullTextEntityManager().getSearchFactory();
    IndexManagerHolder indexManagerHolder = searchFactory.getIndexManagerHolder();
    return (DirectoryBasedIndexManager) indexManagerHolder.getIndexManager(getEntityClass().getName());
}

CAS consumer not working as expected

I have a CAS consumer AE which is expected to iterate over the CAS objects in a pipeline, serialize them, and add the serialized CASes to an XML file.
public class DataWriter extends JCasConsumer_ImplBase {

    private File outputDirectory;

    public static final String PARAM_OUTPUT_DIRECTORY = "outputDir";

    @ConfigurationParameter(name = PARAM_OUTPUT_DIRECTORY, defaultValue = ".")
    private String outputDir;

    CasToInlineXml cas2xml;

    public void initialize(UimaContext context) throws ResourceInitializationException {
        super.initialize(context);
        ConfigurationParameterInitializer.initialize(this, context);
        outputDirectory = new File(outputDir);
        if (!outputDirectory.exists()) {
            outputDirectory.mkdirs();
        }
    }

    @Override
    public void process(JCas jCas) throws AnalysisEngineProcessException {
        String file = fileCollectionReader.fileName;
        File outFile = new File(outputDirectory, file + ".xmi");
        FileOutputStream out = null;
        try {
            out = new FileOutputStream(outFile);
            String xmlAnnotations = cas2xml.generateXML(jCas.getCas());
            out.write(xmlAnnotations.getBytes("UTF-8"));
            /* XmiCasSerializer ser = new XmiCasSerializer(jCas.getCas().getTypeSystem());
            XMLSerializer xmlSer = new XMLSerializer(out, false);
            ser.serialize(jCas.getCas(), xmlSer.getContentHandler()); */
            if (out != null) {
                out.close();
            }
        } catch (IOException e) {
            throw new AnalysisEngineProcessException(e);
        } catch (CASException e) {
            throw new AnalysisEngineProcessException(e);
        }
    }
}
I am using it inside a pipeline after all my annotators, but it can't read the CAS objects (I am getting a NullPointerException at jCas.getCas()). It looks like I don't understand the proper usage of a CAS consumer. I'd appreciate any suggestions.

Reading a File "changes" my path and FXMLLoader fails

I've been struggling with my code:
private void longStart() {
    Task task = new Task<Void>() {
        @Override
        protected Void call() throws Exception {
            System.out.println("Iniciando");
            IOManager io = new IOManager();
            System.out.println("Buscando archivo Jugadores");
            boolean b = io.BuscarData("Jugadores");
            System.out.println("Armando Grupos");
            if (!b) {
                ServiceManager.CargarGrupos(b);
            } else {
                if (!io.BuscarData("Grupos")) {
                    ServiceManager.CargarGrupos(b);
                } else {
                    Grupo.setaGrupos(io.LeerGrupos());
                }
            }
            System.out.println("Cargando Partidos");
            ServiceManager.CargarPartidos();
            System.out.println("Calculando puntos de Grupos");
            ServiceManager.ActualizarPuntos();
            //ServiceManager.CargarGoleador();
            ready.setValue(Boolean.TRUE);
            notifyPreloader(new StateChangeNotification(StateChangeNotification.Type.BEFORE_START));
            return null;
        }
    };
    new Thread(task).start();
}
To put it simply, what it does is check whether a file exists; if it doesn't, it connects to a web service, does some meaningless object creation, and finally creates the file. After all that hustle I can start my UI like this:
try {
    root = FXMLLoader.load(getClass().getResource("/fxml/Main.fxml"));
} catch (IOException ex) {
    Logger.getLogger(Brasuca.class.getName()).log(Level.SEVERE, null, ex);
}
stage.setTitle("Brasil 2014");
stage.setScene(new Scene(root, 1140, 705));
stage.getIcons().add(new Image("/img/trophy.png"));
stage.setResizable(false);
stage.show();
This works perfectly when the file it's looking for doesn't exist. But when the file does exist, it tries to read it:
public ArrayList<Grupo> LeerGrupos() throws FileNotFoundException, IOException, ClassNotFoundException {
    ArrayList<Grupo> ag;
    try (ObjectInputStream obj = new ObjectInputStream(new FileInputStream("data/Grupos.jug"))) {
        ag = (ArrayList<Grupo>) obj.readObject();
    }
    return ag;
}
That also works fine, but when the FXMLLoader tries to load, it fails and throws this exception:
GRAVE: null
javafx.fxml.LoadException:
file:/C:/Users/Francisco/Documents/NetBeansProjects/Brasuca/dist/run613176200/Brasuca.jar!/fxml/Main.fxm
Also, if I execute the loader like this:
root = FXMLLoader.load(getClass().getResource("fxml/Main.fxml"));
The exception changes into:
Exception in thread "JavaFX Application Thread" java.lang.NullPointerException: Location is required.
Any help would be appreciated.
root = FXMLLoader.load(getClass().getResource("/fxml/Main.fxml"));
Don't forget the leading "/" when you load the FXML file, because it is in another package.
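The two failure modes come from how `Class.getResource` resolves names: with a leading "/" the name is treated as absolute from the classpath root, while without it the name is resolved relative to the package of the class you call it on. A small stdlib-only sketch of this rule, using JDK classes as the resources so nothing project-specific is assumed:

```java
public class ResourcePathDemo {
    public static void main(String[] args) {
        // Absolute: full path from the classpath root, works from any class.
        System.out.println(String.class.getResource("/java/lang/Integer.class") != null);

        // Relative: resolved against String's package (java.lang), so no prefix needed.
        System.out.println(String.class.getResource("Integer.class") != null);

        // Absolute but missing the package path: the root lookup fails and
        // getResource returns null -- passing that null to FXMLLoader.load is
        // what produces "Location is required.".
        System.out.println(String.class.getResource("/Integer.class") == null);
    }
}
```

In the question's code, `getResource("fxml/Main.fxml")` is resolved relative to the package of the calling class, so it only works if that class sits in the parent package of `fxml`; the absolute form `"/fxml/Main.fxml"` works from anywhere.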

Vert.x - Get deployment ID within currently running verticle

I'm looking for the deployment ID for the currently running verticle.
The goal is to allow a verticle to undeploy itself. I currently pass the deploymentID into the deployed verticle over the event bus to accomplish this, but would prefer some direct means of access.
container.undeployVerticle(deploymentID)
There are two ways you can get the deployment ID. If you have a verticle that starts and handles all the module deployments, you can add an async result handler and get the deployment ID that way, or you can get the platform manager from the container using reflection.
The async handler approach is as follows:
container.deployVerticle("foo.ChildVerticle", new AsyncResultHandler<String>() {
    public void handle(AsyncResult<String> asyncResult) {
        if (asyncResult.succeeded()) {
            System.out.println("The verticle has been deployed, deployment ID is " + asyncResult.result());
        } else {
            asyncResult.cause().printStackTrace();
        }
    }
});
Access Platform Manager as follows:
protected final PlatformManagerInternal getManager() {
    try {
        Container container = getContainer();
        Field f = DefaultContainer.class.getDeclaredField("mgr");
        f.setAccessible(true);
        return (PlatformManagerInternal) f.get(container);
    } catch (Exception e) {
        e.printStackTrace();
        throw new ScriptException("Could not access verticle manager");
    }
}

protected final Map<String, Deployment> getDeployments() {
    try {
        PlatformManagerInternal mgr = getManager();
        Field d = DefaultPlatformManager.class.getDeclaredField("deployments");
        d.setAccessible(true);
        return Collections.unmodifiableMap((Map<String, Deployment>) d.get(mgr));
    } catch (Exception e) {
        throw new ScriptException("Could not access deployments");
    }
}
References:
http://grepcode.com/file/repo1.maven.org/maven2/io.vertx/vertx-platform/2.1.2/org/vertx/java/platform/impl/DefaultPlatformManager.java#DefaultPlatformManager.genDepName%28%29
https://github.com/crashub/mod-shell/blob/master/src/main/java/org/vertx/mods/VertxCommand.java
http://vertx.io/core_manual_java.html#deploying-a-module-programmatically
Somewhere in your desired verticle use this:
context.deploymentID();