I faced the following problem: I need to create a messaging queue in the HAL Management Console. In the video tutorial, it looks like this:
But as I understand it, the tutorial author uses an older version of the console. My menu is a bit different and doesn't have a Messaging item. I did find a Mail item:
But as I understand, this is not what I need, as I didn't find any way to create a queue there. Does anyone know how to create a messaging queue? I would appreciate any help. Thanks in advance!
I haven't used the console for this, as it is difficult to script for the next time you need to do it. Instead, I've used the CLI.
To create the topic:
${wildfly.home}/bin/jboss-cli.sh --connect --controller=127.0.0.1:8080 --command="jms-topic add --topic-address=yourTopicName --entries=java:/jms/yourTopicName"
where wildfly.home is the directory where WildFly is installed (the analogous jms-queue command follows the same pattern if you need a queue rather than a topic). To remove the JMS topic, you'd run something like:
${wildfly.home}/bin/jboss-cli.sh --connect --controller=127.0.0.1:8080 --command="jms-topic remove --topic-address=yourTopicName"
My producer code looks like:
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.JMSException;
import javax.jms.ObjectMessage;
import javax.jms.Topic;

@Stateless
public class MyProducer {

    @Resource(lookup = "java:/jms/yourTopicName")
    private Topic topic;

    @Inject
    private JMSContext context;

    public void sendMessage(MyCustomMessage customMessage) {
        try {
            ObjectMessage message = context.createObjectMessage();
            message.setObject(customMessage); // setObject can throw JMSException
            context.createProducer().send(topic, message);
        } catch (JMSException e) {
            // handle error
        }
    }
}
and my listener looks like:
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "java:/jms/yourTopicName"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic") })
public class MyListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // handle the incoming message
    }
}
Remember that to use JMS you need to run with the "full" configuration, i.e.
bin/standalone.sh -c standalone-full.xml
You probably started WildFly in the default mode (without the JMS broker).
If you want to see the Messaging menu in the console, you need to use the alternate configuration named standalone-full.
In a terminal session, go into the "bin" folder of WildFly and type:
./standalone.sh --server-config=standalone-full.xml
(or standalone.bat for Windows)
More info here
Related
I am setting up the logging for a project and was wondering whether it is possible to send an id to the Postgres database. Later I would collect all the logs with Fluentd and use the EFK (Elasticsearch, Fluentd, Kibana) stack to look through them. That is why it would be very helpful if I could set the id in the database logs.
You can have your connection set the application_name to something that includes the id you want, and then configure your logging to include the application_name field. Or you could include the id in an SQL comment, something like select /* I am number 6 */ whatever from wherever. But then the id will only be visible for log messages that include the SQL. (The reason for putting the comment after the select keyword is that some clients will strip out leading comments, so the server never sees them.)
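For the first option, here is a rough JDBC sketch (just a sketch: the URL, credentials, and id value are placeholders, and it assumes the PgJDBC driver, which accepts ApplicationName as a connection property):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class AppNameExample {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "myuser");
        props.setProperty("password", "secret");
        // Shows up as application_name in pg_stat_activity and in log lines
        // when log_line_prefix contains %a
        props.setProperty("ApplicationName", "my-service-correlation-id-123");

        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/mydb", props);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("select 1")) {
            rs.next();
        }
    }
}

Depending on the driver version, you may also be able to change it per request on a pooled connection with conn.setClientInfo("ApplicationName", ...).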
Thank you @jjane, for giving me such a great idea. After some googling I found a page describing how to intercept Hibernate logs.
Then I did some more research on how to add the inspector to Spring Boot, and this is the solution I came up with:
import java.util.regex.Pattern;

import org.hibernate.resource.jdbc.spi.StatementInspector;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Component
public class SqlCommentStatementInspector implements StatementInspector {

    private static final Logger LOGGER =
            LoggerFactory.getLogger(SqlCommentStatementInspector.class);

    private static final Pattern SQL_COMMENT_PATTERN =
            Pattern.compile("\\/\\*.*?\\*\\/\\s*");

    @Override
    public String inspect(String sql) {
        // Log the statement together with the correlation id taken from the MDC
        LOGGER.info("Repo log, Correlation_id: {} {}",
                MDC.get(Slf4jMDCFilter.MDC_UUID_TOKEN_KEY), sql);
        // Prepend the correlation id as a comment and strip any pre-existing comments
        return String.format("/*Correlation_id: %s*/", MDC.get(Slf4jMDCFilter.MDC_UUID_TOKEN_KEY))
                + SQL_COMMENT_PATTERN.matcher(sql).replaceAll("");
    }
}
and its configuration:
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.orm.jpa.HibernatePropertiesCustomizer;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HibernateConfiguration implements HibernatePropertiesCustomizer {

    @Autowired
    private SqlCommentStatementInspector myInspector;

    @Override
    public void customize(Map<String, Object> hibernateProperties) {
        hibernateProperties.put("hibernate.session_factory.statement_inspector", myInspector);
    }
}
Are there any example projects showing how to use Kafka with Micronaut? I am having problems with getting it to work.
I have the following producer:
@KafkaClient
interface AppClient {

    @Topic("topic-name")
    void sendMessage(@KafkaKey String id, Event event)
}
and listener:
@KafkaListener(
        groupId = "group-id",
        offsetReset = OffsetReset.EARLIEST
)
class AppListener {

    @Topic("topic-name")
    void onMessage(Event event) {
        // do stuff
    }
}
My application.yml contains:
kafka:
  bootstrap:
    servers: localhost:2181
and application-test.yml (is this right, and should it be in the same directory as application.yml? Also, I am unsure how the embedded server should be used):
kafka:
#  embedded:
#    enabled: true
#    topics: promo-api-promotions
  bootstrap:
    servers: localhost:9092
My test looks like:
@MicronautTest
class AppSpec extends Specification {

    @Shared
    @AutoCleanup
    EmbeddedServer server = ApplicationContext.run(EmbeddedServer)

    @Shared
    private AppClient appClient =
            server.applicationContext.getBean(AppClient)

    def 'The upload endpoint is called'() {
        // test here
        appClient.sendMessage(id, event)
        // other test stuff
    }
}
The main problems I am having are:
My consumer is not consuming from my topic. I can see the producer creates the topic in Kafka and the client group is created, but the offset stays at 0.
When the test starts up, it looks as if two instances of the client are created, so the MBean registration fails (also, if I try to use the embedded Kafka, I get a different error about port 9092 already being in use, because it tries to start the server twice):
javax.management.InstanceAlreadyExistsException:
kafka.consumer:type=app-info,id=app-kafka-client-app-listener
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
Managed to fix the second problem - the object passed into the listener did not have a @JsonCreator. I found this out by trying to use the Jackson object mapper to construct the object from its JSON while playing around.
If anyone else has the same problem - make sure that the object model works with Jackson before going any further!
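For anyone hitting the same thing, a minimal sketch of the kind of model Jackson is happy with (the fields here are made up; the important part is the constructor annotations):

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public class Event {

    private final String id;
    private final String payload;

    // Tells Jackson which constructor and properties to use when deserializing the message
    @JsonCreator
    public Event(@JsonProperty("id") String id,
                 @JsonProperty("payload") String payload) {
        this.id = id;
        this.payload = payload;
    }

    public String getId() {
        return id;
    }

    public String getPayload() {
        return payload;
    }
}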
You should add the embedded-broker configuration (kafka.embedded.enabled) to a configuration map and pass it to the ApplicationContext.run method.
Map<String, Object> config = Collections.unmodifiableMap(new HashMap<String, Object>() {
    {
        put(AbstractKafkaConfiguration.EMBEDDED, true);
        put(AbstractKafkaConfiguration.EMBEDDED_TOPICS, "test_topic");
    }
});

try (ApplicationContext ctx = ApplicationContext.run(config)) {
    // run the test against the embedded broker here
}
The consumer consumes from Kafka in another thread and you have to wait for a while until your AppListener catches up.
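For example, a small polling helper in plain Java (just a sketch; a getReceivedEvents() accessor on the listener is something you would add yourself, and a library like Awaitility does the same job more cleanly):

import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

final class WaitUtil {

    private WaitUtil() {
    }

    // Polls the condition every 100 ms until it becomes true or the timeout elapses
    static boolean waitFor(BooleanSupplier condition, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            TimeUnit.MILLISECONDS.sleep(100); // give the consumer thread time to poll the topic
        }
        return condition.getAsBoolean();
    }
}

The test would then wait with something like WaitUtil.waitFor(() -> !listener.getReceivedEvents().isEmpty(), 10000) before asserting.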
You can see a short example in KafkaProducerListenerTest
Remember the Kafka dependencies described in the Micronaut doc: Embedding Kafka
I am in the process of converting a home-grown logging system to NLog and am wondering if there is a way to add an event to a Logger, or otherwise support a mechanism where, when I log a message, I get a callback with the final formatted message and the LogLevel. I currently use something like this to send server messages back to a connected client.
Thx
This is an MCVE of what I was talking about in the comments. Create a target that accepts some callback functions:
[Target("MyFirst")]
public sealed class MyFirstTarget : TargetWithLayout
{
private readonly Action<string>[] _callbacks;
public MyFirstTarget(params Action<string>[] callbacks)
{
_callbacks = callbacks;
}
protected override void Write(LogEventInfo logEvent)
{
foreach (var callback in _callbacks)
{
callback(logEvent.FormattedMessage);
}
}
}
Configure NLog to use the target. I do this programmatically since the callbacks are passed in the constructor. You could also configure the target in NLog.config, but then your target would need to be a singleton so you can register the callbacks in code.
using System.Diagnostics;
using NLog;
using NLog.Config;

class Program
{
    public static void Main()
    {
        // Register the target programmatically and route all log levels to it
        var target = new MyFirstTarget(s => Debug.WriteLine(s));
        var config = new LoggingConfiguration();
        config.AddTarget("MyFirst", target);
        config.LoggingRules.Add(new LoggingRule("*", LogLevel.Trace, target));
        LogManager.Configuration = config;
        var logger = LogManager.GetCurrentClassLogger();
        logger.Debug("test");
    }
}
With no other NLog configuration (copy this code into an empty project and add the NLog nuget package), this will emit a message to your debug window.
I have implemented a REST web service (the function is not relevant) using JAX-RS. Now I want to generate its documentation using Swagger. I have followed these steps:
1) In build.gradle I get all the dependencies I need:
compile 'org.glassfish.jersey.media:jersey-media-moxy:2.13'
2) I document my code with Swagger annotations
3) I hook up Swagger in my Application subclass:
public class ApplicationConfig extends ResourceConfig {

    /**
     * Main constructor
     * @param addressBook a provided address book
     */
    public ApplicationConfig(final AddressBook addressBook) {
        register(AddressBookService.class);
        register(MOXyJsonProvider.class);
        register(new AbstractBinder() {
            @Override
            protected void configure() {
                bind(addressBook).to(AddressBook.class);
            }
        });
        register(io.swagger.jaxrs.listing.ApiListingResource.class);
        register(io.swagger.jaxrs.listing.SwaggerSerializers.class);

        BeanConfig beanConfig = new BeanConfig();
        beanConfig.setVersion("1.0.2");
        beanConfig.setSchemes(new String[]{"http"});
        beanConfig.setHost("localhost:8282");
        beanConfig.setBasePath("/");
        beanConfig.setResourcePackage("rest.addressbook");
        beanConfig.setScan(true);
    }
}
However, when going to my service in http://localhost:8282/swagger.json, I get this output.
You can check my public repo here.
It's times like this (when there is no real explanation for the problem) that I throw in an ExceptionMapper<Throwable>. Often with server related exceptions, there are no mappers to handle the exception, so it bubbles up to the container and we get a useless 500 status code and maybe some useless message from the server (as you are seeing from Grizzly).
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
public class DebugMapper implements ExceptionMapper<Throwable> {

    @Override
    public Response toResponse(Throwable exception) {
        exception.printStackTrace();
        if (exception instanceof WebApplicationException) {
            return ((WebApplicationException) exception).getResponse();
        }
        return Response.serverError().entity(exception.getMessage()).build();
    }
}
Then just register with the application
public ApplicationConfig(final AddressBook addressBook) {
    ...
    register(DebugMapper.class);
}
When you run the application again and try to hit the endpoint, you will now see a stacktrace with the cause of the exception
java.lang.NullPointerException
at io.swagger.jaxrs.listing.ApiListingResource.getListingJson(ApiListingResource.java:90)
If you look at the source code for ApiListingResource.java:90, you will see
Swagger swagger = (Swagger) context.getAttribute("swagger");
The only thing here that could cause the NPE is the context, which, if you scroll up, you will see is the ServletContext. Now here's the reason it's null: in order for there even to be a ServletContext, the app needs to be run in a Servlet environment. But look at your setup:
HttpServer server = GrizzlyHttpServerFactory
        .createHttpServer(uri, new ApplicationConfig(ab));
This does not create a Servlet container. It only creates an HTTP server. You have the dependency required to create the Servlet container (jersey-container-grizzly2-servlet), but you just need to make use of it. So instead of the previous configuration, you should do
ServletContainer sc = new ServletContainer(new ApplicationConfig(ab));
HttpServer server = GrizzlyWebContainerFactory.create(uri, sc, null, null);
// you will need to catch IOException or add a throws clause
See the API for GrizzlyWebContainerFactory for other configuration options.
Now if you run it and hit the endpoint again, you will see the Swagger JSON. Do note that the response from the endpoint is only the JSON, it is not the documentation interface. For that you need to use the Swagger UI that can interpret the JSON.
Thanks for the MCVE project BTW.
Swagger fixed this issue in 1.5.7. It was Issue 1103, but the fix was rolled in last February. peeskillet's answer will still work, but so will OP's now.
Does anyone have any experience with getServiceReference returning null for what seems like no reason?
The following bundle registers the service and then proceeds to confirm that it's registered (whether or not this is even a valid test from the same package, I don't know).
package db.connector;
...
public class Activator implements BundleActivator {

    private static ServiceRegistration registration;
    ...

    public void start(BundleContext _context) throws Exception {
        DatabaseConnector dbc = new DatabaseConnectorImpl();
        registration = context.registerService(
                DatabaseConnector.class.getName(),
                dbc, null);
        checkServiceRegistered();
    }
    ...

    public void checkServiceRegistered() {
        System.out.println("Printing all entries:");
        ServiceReference sr = context.getServiceReference(DatabaseConnector.class.getName());
        DatabaseConnector dbc = (DatabaseConnector) context.getService(sr);
        List<Protocol> result = dbc.getAllProtocols();
        for (int i = 0; i < result.size(); i++) {
            Protocol p = result.get(i);
            System.out.println("\t" + p.getId() + ": " + p.getName() + "(" + p.getOwner() + ")");
        }
    }
}
The output runs successfully and everything seems OK. Checking in the Karaf web console, the service seems to be registered correctly:
267 [db.connector.DatabaseConnector] database-connector (144)
The code to get the registered service is as follows:
import db.connector.DatabaseConnector;
...
public List<Protocol> printAllEntries() {
    ServiceReference sr = Activator.getContext().getServiceReference(DatabaseConnector.class.getName());
    DatabaseConnector dbc = (DatabaseConnector) Activator.getContext().getService(sr);
    return dbc.getAllProtocols();
}
...
The DatabaseConnector bundle exports the correct package, and the one using the service imports the same.
What could possibly be going wrong here? I'm at a complete loss.
It looks alright.
What comes to mind: is the ordering OK? Are you sure the registration is done before you check for the reference? The way printAllEntries works, it only checks whether the service is there at that exact moment. As OSGi bundles can come and go, this isn't a reliable way to check. You should use either a ServiceTracker, or better still something like Declarative Services or Blueprint.
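For illustration, a minimal ServiceTracker sketch (just a sketch, reusing the DatabaseConnector interface from the question):

import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

import db.connector.DatabaseConnector;

public class ConnectorClient {

    private final ServiceTracker<DatabaseConnector, DatabaseConnector> tracker;

    public ConnectorClient(BundleContext context) {
        tracker = new ServiceTracker<>(context, DatabaseConnector.class, null);
        tracker.open(); // start tracking DatabaseConnector registrations
    }

    public DatabaseConnector getConnector() throws InterruptedException {
        // Waits up to 5 seconds for the service instead of assuming it is already registered
        return tracker.waitForService(5000);
    }

    public void close() {
        tracker.close();
    }
}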
You could add a ServiceListener to the BundleContext, then you can print out what's happening in what order.
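A ServiceListener along those lines could look roughly like this (sketch only; the filter assumes the DatabaseConnector interface from the question):

import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceEvent;

import db.connector.DatabaseConnector;

public final class DebugServiceListener {

    public static void register(BundleContext context) throws InvalidSyntaxException {
        String filter = "(objectClass=" + DatabaseConnector.class.getName() + ")";
        // Prints REGISTERED / MODIFIED / UNREGISTERING events for the DatabaseConnector service
        context.addServiceListener(event -> {
            String type;
            switch (event.getType()) {
                case ServiceEvent.REGISTERED:
                    type = "REGISTERED";
                    break;
                case ServiceEvent.UNREGISTERING:
                    type = "UNREGISTERING";
                    break;
                default:
                    type = "MODIFIED";
                    break;
            }
            System.out.println(type + ": " + event.getServiceReference());
        }, filter);
    }
}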
Hope this helps.
Turns out it was just that I hadn't refreshed the OSGi bundles. My servlet was pointing to a now-obsolete bundle ID, so of course the service lookup was failing.