I am using install4j to install an intranet application which requires an HTTP and an HTTPS port. I would like to test that these ports are available and warn the user/block the installation until they select available ports.
The only avenue I see for this (besides custom code) is to ensure the Windows service fails if the application cannot bind to the needed ports, and to use the failure strategy "Ask user whether to retry or quit on failure". In the web server startup code, I call System.exit(1) if the server cannot bind to the ports. However, this does not appear to register as a failure with the installer - the installation proceeds without invoking the failure strategy.
What is the proper approach for signaling failure to the "Start a service" action? Have other people taken an alternate approach to guaranteeing the installation uses available ports?
A good alternate approach I've since found: add a custom code action:
// Runs inside an install4j "Run script" action; returning false fails the action.
java.util.List<Integer> takenPorts = new java.util.ArrayList<Integer>();
for (int port : java.util.Arrays.asList(80, 443)) {
    java.net.ServerSocket socket = null;
    try {
        // If we can bind the port, it is free.
        socket = new java.net.ServerSocket(port);
    } catch (java.io.IOException e) {
        takenPorts.add(port);
    } finally {
        if (socket != null) socket.close();
    }
}
if (takenPorts.isEmpty()) {
    return true;
} else {
    String msg;
    if (takenPorts.size() == 2) {
        msg = "Ports 80 and 443 must be available for uDiscovery";
    } else {
        msg = "Port " + takenPorts.get(0) + " must be available for uDiscovery";
    }
    // Expose the message to the installer as ${installer:portErrorMessage}.
    context.setVariable("portErrorMessage", msg);
    return false;
}
There is a good explanation of how to wire this up here.
For anyone who is just getting an error dialog saying
"com.install4j.runtime.beans.action.control.RunScriptAction failed"
(I think that is also the one Adam got):
In the scriptlet above, the variable named portErrorMessage was set. I also didn't realise that at first. The trick is quite simple: you have to insert ${installer:portErrorMessage} in the "Error message" field of the action. Doing so, you don't need the Util.showOptionDialog described by Ingo, since that method call opens another, second dialog that has to be acknowledged first; after that the user would also have to acknowledge the dialog from install4j.
Related
I am trying to publish to a Google Pub/Sub topic using the following:
import com.google.api.core.ApiFuture;
import com.google.api.core.ApiFutureCallback;
import com.google.api.core.ApiFutures;
import com.google.api.gax.rpc.ApiException;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.common.util.concurrent.MoreExecutors;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.ProjectTopicName;
import com.google.pubsub.v1.PubsubMessage;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;

ProjectTopicName topicName = ProjectTopicName.of("my-project-id", "my-topic-id");
Publisher publisher = null;
try {
    // Create a publisher instance with default settings bound to the topic
    publisher = Publisher.newBuilder(topicName).build();
    List<String> messages = Arrays.asList("first message", "second message");
    for (final String message : messages) {
        ByteString data = ByteString.copyFromUtf8(message);
        PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(data).build();
        // Once published, returns a server-assigned message id (unique within the topic)
        ApiFuture<String> future = publisher.publish(pubsubMessage);
        // Add an asynchronous callback to handle success / failure
        ApiFutures.addCallback(
            future,
            new ApiFutureCallback<String>() {
                @Override
                public void onFailure(Throwable throwable) {
                    if (throwable instanceof ApiException) {
                        ApiException apiException = ((ApiException) throwable);
                        // details on the API exception
                        System.out.println(apiException.getStatusCode().getCode());
                        System.out.println(apiException.isRetryable());
                    }
                    System.out.println("Error publishing message : " + message);
                }

                @Override
                public void onSuccess(String messageId) {
                    // Once published, returns server-assigned message ids (unique within the topic)
                    System.out.println(messageId);
                }
            },
            MoreExecutors.directExecutor());
    }
} finally {
    if (publisher != null) {
        // When finished with the publisher, shutdown to free up resources.
        publisher.shutdown();
        publisher.awaitTermination(1, TimeUnit.MINUTES);
    }
}
I have changed the default values you see here to the particulars of the account I am hitting.
The environment variable points to the JSON file containing the pub/sub authentication credentials:
GOOGLE_APPLICATION_CREDENTIALS
was set using:
export GOOGLE_APPLICATION_CREDENTIALS=path/to/file.json
and verified with echo $GOOGLE_APPLICATION_CREDENTIALS, even after a reboot.
But I am still encountering:
The Application Default Credentials are not available. They are available
if running in Google Compute Engine. Otherwise, the environment variable
GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining
the credentials. See https://developers.google.com/accounts/docs/application-
default-credentials for more information.
I believe this is related to the default environment the application is running in, or rather what the GCP client library thinks the execution context is (runningOnComputeEngine):
com.google.auth.oauth2.ComputeEngineCredentials runningOnComputeEngine
INFO: Failed to detect whether we are running on Google Compute Engine.
Also, a dialog was displayed:
Unable to launch App Engine Server
Cannot determine server execution context
and there are no Google Cloud Platform settings in the project (Eclipse 2019-3):
This is not an App Engine application.
How do I set the environment that the GCP objects point to, i.e. non App Engine?
For reference:
Server to Server (link in error message)
Publish
Google Cloud Tools for Eclipse
Java 7 application
Mac OS (Sierra)
The file permissions are set so that the app can read the file.
Google's documentation on this is terrible - it does not mention this anywhere.
The answer is to use:
// create a credentials provider from the service account key file
CredentialsProvider credentialsProvider = FixedCredentialsProvider.create(
        ServiceAccountCredentials.fromStream(new FileInputStream(Constants.PUB_SUB_KEY)));
// apply the credentials provider when creating the publisher
publisher = Publisher.newBuilder(topicName)
        .setCredentialsProvider(credentialsProvider)
        .build();
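For completeness, here is a self-contained sketch of the whole flow with explicit credentials; the project id, topic id, and key path are placeholders, and the blocking get() is used only to keep the example short:

import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.ProjectTopicName;
import com.google.pubsub.v1.PubsubMessage;
import java.io.FileInputStream;
import java.util.concurrent.TimeUnit;

public class ExplicitCredentialsPublish {
    public static void main(String[] args) throws Exception {
        ProjectTopicName topicName = ProjectTopicName.of("my-project-id", "my-topic-id");
        // Load the service account key explicitly instead of relying on
        // GOOGLE_APPLICATION_CREDENTIALS being picked up from the environment.
        Publisher publisher = Publisher.newBuilder(topicName)
                .setCredentialsProvider(FixedCredentialsProvider.create(
                        ServiceAccountCredentials.fromStream(
                                new FileInputStream("/path/to/file.json"))))
                .build();
        try {
            PubsubMessage message = PubsubMessage.newBuilder()
                    .setData(ByteString.copyFromUtf8("hello"))
                    .build();
            // Blocks until the server acknowledges the publish and returns the id.
            String messageId = publisher.publish(message).get();
            System.out.println("Published with message id " + messageId);
        } finally {
            publisher.shutdown();
            publisher.awaitTermination(1, TimeUnit.MINUTES);
        }
    }
}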
The environment variable usage is either deprecated, or the documentation is flat out wrong, or I'm missing something, which is entirely possible given the poor documentation.
I am trying to connect to a Solace VMR server and deliver messages from a Java client called Vert.x AMQP Bridge.
I am able to connect to the Solace VMR server, but after connecting I am not able to send messages to it.
I am using the sender code below from the Vert.x client.
import io.vertx.amqpbridge.AmqpBridge;
import io.vertx.amqpbridge.AmqpConstants;
import io.vertx.core.AbstractVerticle;
import io.vertx.core.eventbus.MessageProducer;
import io.vertx.core.json.JsonObject;

public class Sender extends AbstractVerticle {

    private int count = 1;

    // Convenience method so you can run it in your IDE (Runner is a helper
    // from the Vert.x examples project)
    public static void main(String[] args) {
        Runner.runExample(Sender.class);
    }

    @Override
    public void start() throws Exception {
        AmqpBridge bridge = AmqpBridge.create(vertx);
        // Start the bridge, then use the event loop thread to process things thereafter.
        bridge.start("13.229.207.85", 21196, "UserName", "Password", res -> {
            if (!res.succeeded()) {
                System.out.println("Bridge startup failed: " + res.cause());
                return;
            }
            // Set up a producer using the bridge, send a message with it.
            MessageProducer<JsonObject> producer =
                    bridge.createProducer("T/GettingStarted/pubsub");
            // Schedule sending of a message every second
            System.out.println("Producer created, scheduling sends.");
            vertx.setPeriodic(1000, v -> {
                JsonObject amqpMsgPayload = new JsonObject();
                amqpMsgPayload.put(AmqpConstants.BODY, "myStringContent" + count);
                producer.send(amqpMsgPayload);
                System.out.println("Sent message: " + count++);
            });
        });
    }
}
I am getting the error below:
Bridge startup failed: io.vertx.core.impl.NoStackTraceThrowable:
Error{condition=amqp:not-found, description='SMF AD bind response
error', info={solace.response_code=503, solace.response_text=Unknown
Queue}} Apr 27, 2018 3:07:29 PM io.vertx.proton.impl.ProtonSessionImpl
WARNING: Receiver closed with error
io.vertx.core.impl.NoStackTraceThrowable:
Error{condition=amqp:not-found, description='SMF AD bind response
error', info={solace.response_code=503, solace.response_text=Unknown
Queue}}
I have created the queue and also the topic correctly in Solace VMR, but I am not able to send/receive messages. Am I missing any configuration on the Solace VMR server side? Is any code change required in the Vert.x sender code above? I am getting the error trace above when delivering a message. Can someone help with this?
Vert.x AMQP Bridge Java client: https://vertx.io/docs/vertx-amqp-bridge/java/
There are a few different reasons why you may be encountering this error.
It could be that the client is not authorized to publish guaranteed messages. To fix this, you need to enable "guaranteed endpoint create" in the client-profile on the Solace router side.
It may also be that the application is using reply handling. This is not currently supported with the Solace router; support for it will be added in the 8.11 release of the Solace VMR. A workaround would be to set ReplyHandlingSupport to false:
AmqpBridgeOptions options = new AmqpBridgeOptions().setReplyHandlingSupport(false);
AmqpBridge bridge = AmqpBridge.create(vertx, options);
There is also a known issue in the Solace VMR which causes this error when unsubscribing from a durable topic endpoint. A fix for this issue will also be in the 8.11 release of the Solace VMR. A workaround for this is to disconnect the client without first unsubscribing.
I read the following sample code and I am wondering whether anybody can say on which platforms it is possible for connect() to fail with something other than EINPROGRESS or EALREADY.
By fail I mean that the else branch in the sample executes. The comment in the source suggests FreeBSD. Are there any other systems? I was not able to get it to fail on Linux.
if (connect(hostp->sockets[i],
            (struct sockaddr *)res->ai_addr,
            res->ai_addrlen) == -1) {
    /* This is what we expect. */
    if (errno == EINPROGRESS) {
        printf(" connect EINPROGRESS OK (expected)\n");
        FD_SET(hostp->sockets[i], &wrfds);
    } else {
        /*
         * This may happen right here, on localhost
         * for example (immediate connection refused).
         * I can see that happen on FreeBSD but not
         * on Solaris, for example.
         */
        printf(" connect: %s\n", strerror(errno));
        ++n;
    }
[...]
source: http://mff.devnull.cz/pvu/src/tcp/non-blocking-connect.c
There are many reasons why connect might fail. As the comment rightly says, even a non-blocking connect might fail immediately on some platforms when connecting to localhost if no listening server is there. connect will also usually fail immediately if no route to the target can be determined, for example if the interface for the default route is down. And then there are still other ways to fail, such as lack of memory, or permission to connect being denied when running inside a sandbox, or similar.
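If you want to poke at the same behaviour from Java, below is a rough sketch using NIO's non-blocking connect (an analogue of the C experiment, not a literal translation; port 9 on localhost is an assumption, any port with no listener will do). Note that on Linux the refused connection typically surfaces from finishConnect() rather than from the initial connect() call:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class NonBlockingConnect {
    public static void main(String[] args) throws IOException {
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);
        try {
            // Port 9 (discard) is assumed to have no listener on localhost.
            boolean connected = channel.connect(new InetSocketAddress("127.0.0.1", 9));
            if (connected) {
                System.out.println("connected immediately");
            } else {
                // The EINPROGRESS case: poll finishConnect() (a real program
                // would use a Selector with OP_CONNECT instead of spinning).
                while (!channel.finishConnect()) {
                    Thread.yield();
                }
                System.out.println("connected after in-progress phase");
            }
        } catch (IOException e) {
            // An immediately refused or otherwise failed connection ends up here.
            System.out.println("connect failed: " + e);
        } finally {
            channel.close();
        }
    }
}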
I used the stateful actor template in Visual Studio 2015 to create a Service Fabric service. In the same solution I created an MVC app, and in the About controller I attempted to copy the code from the sample client. When I run the web app and execute the About action, it just hangs. I don't get an exception or anything else that indicates why it didn't work. Running the sample client console app where I got the code works just fine. Any suggestions on what may be wrong?
public ActionResult About()
{
    var proxy = ActorProxy.Create<IO365ServiceHealth>(ActorId.NewId(), "fabric:/O365Services");
    try
    {
        int count = 10;
        Console.WriteLine("Setting Count to in Actor {0}: {1}", proxy.GetActorId(), count);
        proxy.SetCountAsync(count).Wait(); /* Hangs here */
        Console.WriteLine("Count from Actor {0}: {1}", proxy.GetActorId(), proxy.GetCountAsync().Result);
    }
    catch (Exception ex)
    {
        Console.WriteLine("{0}", ex.Message);
    }
    ViewBag.Message = "Your application description page.";
    return View();
}
Is the MVC app hosted within Service Fabric? If not then it won't be able to access Service Fabric information unless it's exposed in some way (e.g. through an OwinCommunicationListener on a service).
TLDR:
Lots of TCP connections in CLOSE_WAIT status are shutting down the server
Setup:
riak_1.2.0-1_amd64.deb installed on Ubuntu 12
Spring MVC 3.2.5
riak-client-1.1.0.jar
Tomcat 7.0.51 hosted on Windows Server 2008 R2
JRE 6u45
Full Description:
How do I ensure that the Java RiakClient is properly cleaning up its connections so that I'm not left with an abundance of CLOSE_WAIT TCP connections?
I have a Spring MVC application which uses the Riak java client to connect to the remote instance/cluster.
We are seeing a lot of TCP Connections on the server hosting the Spring MVC application, which continue to build up until the server can no longer connect to anything because there are no ports available.
Restarting the Riak cluster does not clean the connections up.
Restarting the webapp does clean up the extra connections.
We are using the HTTPClientAdapter and the REST API.
When connecting to a relational database, I would normally clean up connections by either explicitly calling close on the connection, or by registering the datasource with a pool and transaction manager and then annotating my services with @Transactional.
But since I am using the HTTPClientAdapter, I would have expected this to work more like an HttpClient.
With an HttpClient, I would consume the response entity with EntityUtils.consume(...) to ensure that everything is properly cleaned up, as sketched below.
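To be concrete, this is the kind of cleanup pattern I mean, as a minimal sketch assuming Apache HttpClient 4.3+ (the Riak /ping URL is just an illustrative endpoint):

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class HttpCleanupSketch {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient client = HttpClients.createDefault();
        CloseableHttpResponse response = client.execute(
                new HttpGet("http://localhost:8098/ping"));
        try {
            HttpEntity entity = response.getEntity();
            // Fully consuming the entity releases the underlying connection
            // back to the pool instead of leaving it half-closed.
            EntityUtils.consume(entity);
        } finally {
            response.close();
            client.close();
        }
    }
}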
The HTTPClientAdapter does have a shutdown method, and I see it being called in the online examples.
But when I traced the method call through to the actual RiakClient, the method is empty.
Also, when I dig through the source code, nowhere does it ever close the stream on the HttpResponse or consume any response entity (as with the standard Apache EntityUtils example).
Here is an example of how the calls are being made.
private RawClient getRiakClientFromUrl(String riakUrl) {
    return new HTTPClientAdapter(riakUrl);
}

public IRiakObject fetchRiakObject(String bucket, String key, boolean useCache) {
    try {
        MethodTimer timer = MethodTimer.start("Fetch Riak Object Operation");
        //logger.debug("Fetching Riak Object {}/{}", bucket, key);
        RiakResponse riakResponse;
        riakResponse = riak.fetch(bucket, key);
        if (!riakResponse.hasValue()) {
            //logger.debug("Object {}/{} not found in riak data store", bucket, key);
            return null;
        }
        IRiakObject[] riakObjects = riakResponse.getRiakObjects();
        if (riakObjects.length > 1) {
            String error = "Got multiple riak objects for " + bucket + "/" + key;
            logger.error(error);
            throw new RuntimeException(error);
        }
        //logger.debug("{}", timer);
        return riakObjects[0];
    }
    catch (Exception e) {
        logger.error("Error fetching " + bucket + "/" + key, e);
        throw new RuntimeException(e);
    }
}
The only option I can think of is to create the RiakClient separately from the adapter so I can access the HttpClient and then the ConnectionManager.
I am currently working on switching over to the PBClientAdapter to see if that might help, but for the purposes of this question (and because the rest of the team may not like me switching for whatever reason), let's assume that I must continue to connect over HTTP.
It's been almost a year, so I thought I would go ahead and post how I solved this problem.
The solution was to change how we construct the HTTPClientAdapter provided by the Java client, passing in configuration for the connection pool, max connections, and timeouts. Here's a code example of how to do it.
First, we are on an older version of Riak, so here's the Maven dependency:
<dependency>
    <groupId>com.basho.riak</groupId>
    <artifactId>riak-client</artifactId>
    <version>1.1.4</version>
</dependency>
And here's the example:
public RawClient riakClient() {
    RiakConfig config = new RiakConfig(riakUrl);
    // httpConnectionsTimetolive is in seconds, but timeout is in milliseconds
    config.setTimeout(30000);
    config.setUrl("http://myriakurl/");
    config.setMaxConnections(100); // or whatever value you need
    RiakClient client = new RiakClient(config);
    return new HTTPClientAdapter(client);
}
I actually broke that up a bit in my implementation and used Spring to inject values; I just wanted to show a simplified example for it.
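For what it's worth, the Spring wiring looked roughly like this; a hedged sketch assuming the riak-client 1.1.x package layout, with illustrative property names rather than the exact ones we used:

import com.basho.riak.client.http.RiakClient;
import com.basho.riak.client.http.RiakConfig;
import com.basho.riak.client.raw.RawClient;
import com.basho.riak.client.raw.http.HTTPClientAdapter;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RiakClientConfiguration {

    // Property names are illustrative placeholders.
    @Value("${riak.url}")
    private String riakUrl;

    @Value("${riak.maxConnections}")
    private int maxConnections;

    @Bean
    public RawClient riakClient() {
        RiakConfig config = new RiakConfig(riakUrl);
        config.setTimeout(30000);                 // milliseconds
        config.setMaxConnections(maxConnections); // cap on pooled connections
        return new HTTPClientAdapter(new RiakClient(config));
    }
}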
By setting the timeout to something less than the standard five minutes, the system will not hang on to the connections for too long (so, five minutes plus whatever you set the timeout to), which causes the connections to enter the CLOSE_WAIT status sooner.
And of course, setting the max connections in the pool prevents the application from opening up tens of thousands of connections.