I am new to web service development and am currently building a SOAP web service with Spring Boot 2.7.0 on Java 17, along with a client application that communicates with this SOAP service via JMS.
But I do not know how the process is supposed to work.
The way I see it: the client application (producer) sends a message to a queue that lives on the server side (consumer); the queue pops the message when it is ready to be consumed and redirects it to the endpoint handler method, which then sends a response back to the client via a response queue.
However, I don't know how to redirect the JMS message to the endpoint, nor how to send the response back. I have read all of the documentation related to "SOAP over JMS", "CXF SOAP/JMS", "ActiveMQ with Spring", etc. None of it helped me fix this problem.
Using SOAP over HTTP is pretty easy thanks to the WebServiceTemplate provided by the Spring-WS API, but when I tried using it over JMS I encountered several problems, including the following:
What do I do with the JMS message once it is in the destination queue?
How do I route it to my endpoint handler method?
What do I send back to the response destination, and how?
Sample code of my latest attempt:
CLIENT APP
Client Configuration
@Configuration
public class SoapClientConfiguration {

    @Value("${spring.activemq.broker-url}")
    private String activeMqUrl;

    @Value("${spring.activemq.user}")
    private String userName;

    @Value("${spring.activemq.password}")
    private String password;

    @Bean
    Jaxb2Marshaller jaxb2Marshaller() {
        Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
        marshaller.setPackagesToScan("com.mile.soap.client.app.quiz");
        return marshaller;
    }

    @Bean
    WebServiceTemplate template() {
        return new WebServiceTemplate(jaxb2Marshaller());
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(mqConnectionFactory());
        return factory;
    }

    @Bean
    public SingleConnectionFactory mqConnectionFactory() {
        SingleConnectionFactory factory = new SingleConnectionFactory();
        ActiveMQConnectionFactory mqConnectionFactory = new ActiveMQConnectionFactory();
        mqConnectionFactory.setBrokerURL(activeMqUrl);
        mqConnectionFactory.setUserName(userName);
        mqConnectionFactory.setPassword(password);
        factory.setTargetConnectionFactory(mqConnectionFactory);
        return factory;
    }

    @Bean
    public JmsTemplate jmsTemplate() {
        JmsTemplate template = new JmsTemplate();
        template.setConnectionFactory(mqConnectionFactory());
        return template;
    }
}
Client Service
@Service
public class SoapClient extends WebServiceGatewaySupport {

    @Autowired WebServiceTemplate template;
    @Autowired JmsTemplate jmsTemplate;

    public CategoriesResponse getCategories() {
        CategoriesResponse response = new CategoriesResponse();
        try {
            SAAJResult soapRequest = new SAAJResult();
            template.getMarshaller().marshal(new GetCategoriesRequest(), soapRequest);

            Message m = jmsTemplate.sendAndReceive("example.queue", new MessageCreator() {
                @Override
                public Message createMessage(Session session) throws JMSException {
                    return session.createObjectMessage(soapRequest.toString());
                }
            });
            response = m.getBody(CategoriesResponse.class);
        }
        catch (Exception e) {
            e.printStackTrace();
        }
        return response;
    }
}
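For reference, Spring-WS itself ships a JMS transport in the spring-ws-support module, so the SOAP-over-JMS round trip on the client does not have to be hand-rolled with JmsTemplate. A minimal sketch of that wiring, assuming spring-ws-support is on the classpath and reusing the jaxb2Marshaller() and mqConnectionFactory() beans above (the jms: URI syntax comes from the Spring-WS reference documentation; not tested here):

import org.springframework.ws.client.core.WebServiceTemplate;
import org.springframework.ws.transport.jms.JmsMessageSender;

@Bean
public WebServiceTemplate jmsWebServiceTemplate() {
    // Marshals requests/responses with the same JAXB marshaller as above.
    WebServiceTemplate template = new WebServiceTemplate(jaxb2Marshaller());
    // JmsMessageSender sends the SOAP request over JMS and waits on a
    // temporary reply queue for the response, handling correlation itself.
    template.setMessageSender(new JmsMessageSender(mqConnectionFactory()));
    template.setDefaultUri("jms:example.queue?deliveryMode=NON_PERSISTENT");
    return template;
}

With that in place the call collapses to template.marshalSendAndReceive(new GetCategoriesRequest()), and the marshalling, temporary reply queue, and correlation are handled by the transport.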
SERVER SIDE APP
ActiveMQ Configuration
@Configuration
@EnableJms
public class ActiveMqConfig {

    @Value("${spring.activemq.broker-url}")
    private String activeMqUrl;

    @Value("${spring.activemq.user}")
    private String userName;

    @Value("${spring.activemq.password}")
    private String password;

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(mqConnectionFactory());
        return factory;
    }

    @Bean
    public SingleConnectionFactory mqConnectionFactory() {
        SingleConnectionFactory factory = new SingleConnectionFactory();
        ActiveMQConnectionFactory mqConnectionFactory = new ActiveMQConnectionFactory();
        mqConnectionFactory.setBrokerURL(activeMqUrl);
        mqConnectionFactory.setUserName(userName);
        mqConnectionFactory.setPassword(password);
        factory.setTargetConnectionFactory(mqConnectionFactory);
        return factory;
    }
}
Main Configuration (WSDL/SERVLET)
@Configuration
@EnableWs
public class SoapConfiguration extends WsConfigurerAdapter {

    @Bean(name = Bus.DEFAULT_BUS_ID)
    public SpringBus springBus() {
        SpringBus bus = new SpringBus();
        return bus;
    }

    @Bean
    public ServletRegistrationBean<MessageDispatcherServlet> messageDispatcherServlet(
            ApplicationContext applicationContext, SpringBus springBus) {
        MessageDispatcherServlet servlet = new MessageDispatcherServlet();
        servlet.setApplicationContext(applicationContext);
        servlet.setTransformWsdlLocations(true);
        return new ServletRegistrationBean<>(servlet, "/*");
    }

    // wsdl
    @Bean(name = "quiz")
    @SneakyThrows
    public DefaultWsdl11Definition defaultWsdl11Definition(XsdSchema schema) {
        DefaultWsdl11Definition defaultWsdl11Definition = new DefaultWsdl11Definition();
        defaultWsdl11Definition.setPortTypeName("QuizMainEndPoint");
        defaultWsdl11Definition.setLocationUri("/");
        defaultWsdl11Definition.setTargetNamespace("http://www.mile.com/collection/management/soap/Quiz");
        defaultWsdl11Definition.setTransportUri("http://www.openuri.org/2002/04/soap/jms/");
        defaultWsdl11Definition.setSchema(schema);
        return defaultWsdl11Definition;
    }

    @Override
    public void addInterceptors(List<EndpointInterceptor> interceptors) {
        EndpointInterceptor endpointInterceptor = new PayloadRootSmartSoapEndpointInterceptor(
                new QuizMainEndpointInterceptor(), "http://www.mile.com/collection/management/soap/Quiz", "GetCategoriesRequest");
        interceptors.add(endpointInterceptor);
    }

    @Bean
    public XsdSchema schema() {
        return new SimpleXsdSchema(new ClassPathResource("/schemas/QuizSchema/quiz.xsd"));
    }
}
Listener
@Component
public class Listener {

    @JmsListener(destination = "example.queue")
    public void listenRequests(Message message) {
        System.out.println(message.toString());
        /* I RECEIVE THE MESSAGE BUT I HAVE NO IDEA WHAT TO DO WITH IT.
         * HOW DO I CONSUME IT?
         */
    }
}
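The question in that comment is essentially what spring-ws-support answers on the server side: instead of a hand-written @JmsListener, a plain message listener container can hand each incoming JMS message to the Spring-WS dispatcher, which routes it to the matching @Endpoint method by payload root and writes the SOAP response to the message's JMSReplyTo destination. A sketch of that wiring, reusing the mqConnectionFactory() bean from ActiveMqConfig above (untested; whether the dispatcher picks up the @Endpoint mappings this way in a given Boot setup is an assumption to verify):

import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.ws.soap.saaj.SaajSoapMessageFactory;
import org.springframework.ws.soap.server.SoapMessageDispatcher;
import org.springframework.ws.transport.jms.WebServiceMessageListener;

@Bean
public SaajSoapMessageFactory messageFactory() {
    return new SaajSoapMessageFactory();
}

@Bean
public SoapMessageDispatcher messageDispatcher() {
    // Discovers endpoint mappings (e.g. @PayloadRoot) from the application context.
    return new SoapMessageDispatcher();
}

@Bean
public WebServiceMessageListener webServiceMessageListener() {
    // Parses the JMS message into a SOAP message, dispatches it to the matching
    // @Endpoint method, and sends the response back to the JMSReplyTo queue.
    WebServiceMessageListener listener = new WebServiceMessageListener();
    listener.setMessageFactory(messageFactory());
    listener.setMessageReceiver(messageDispatcher());
    return listener;
}

@Bean
public DefaultMessageListenerContainer webServiceListenerContainer() {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(mqConnectionFactory());
    container.setDestinationName("example.queue");
    container.setMessageListener(webServiceMessageListener());
    return container;
}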
Method in a class annotated with @Endpoint

@ResponsePayload
@PayloadRoot(namespace = NAMESPACE, localPart = "GetCategoriesRequest")
public CategoriesResponse getCategories(@RequestPayload GetCategoriesRequest request) {
    CategoriesResponse response = new CategoriesResponse(service.getCategories());
    /*
     * How do I CONVERT my JMS message living in the destination "example.queue" into a SOAP message
     * that is RECOGNISED by this exact method?
     * Do I send a JMS response here or somewhere else?
     * Is it sent by default?
     */
    return response;
}
Thank you for reading thoroughly. I'd appreciate any kind of help.
I'm reading a ton of questions and answers about this topic, but I can't solve my problem.
I initialized a Spring Boot project with Kafka and spring-data-jdbc.
What I'm trying to do is:
Configure a Kafka JDBC Connector in order to push record changes from a PostgreSQL DB into a Kafka topic
Set up a Kafka consumer in order to consume records pushed into the topic by inserting them into another PostgreSQL DB.
Point 1 works fine.
For point 2 I'm having some problems.
This is how the project is organized:
com.migration
- MigrationApplication.java
com.migration.config
- KafkaConsumerConfig.java
com.migration.db
- JDBCConfig.java
- RecordRepository.java
com.migration.listener
- MessageListener.java
com.migration.model
- Record.java
- AbstractRecord.java
- PostgresRecord.java
This is the MessageListener class
@EnableJdbcRepositories("com.migration.db")
@Transactional
@Configuration
public class MessageListener {

    @Autowired
    private RecordRepository repository;

    @KafkaListener(topics = {"author"}, groupId = "migrator", containerFactory = "migratorKafkaListenerContainerFactory")
    public void listenGroupMigrator(Record record) {
        repository.insert(record);
        throw new RuntimeException();
    }
}
I think it's pretty clear: it sets up a Kafka consumer that listens on the "author" topic and consumes each record by inserting it into the DB.
As you can see, inside the listenGroupMigrator() method the record is inserted into the DB and then a RuntimeException is thrown, because I'm checking whether @Transactional works and the rollback is performed.
But no: the rollback is not performed, even though the class is annotated with @Transactional.
For completeness, here are the other classes.
RecordRepository class
@Repository
public class RecordRepository {

    public RecordRepository() {}

    public void insert(Record record) {
        JDBCConfig jdbcConfig = new JDBCConfig();
        SimpleJdbcInsert messageInsert = new SimpleJdbcInsert(jdbcConfig.postgresDataSource());
        messageInsert.withTableName(record.tableName()).execute(record.content());
    }
}
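One detail worth flagging here: insert() builds its own DataSource via new JDBCConfig().postgresDataSource(), so the statement runs on a connection that Spring's transaction infrastructure never sees, and @Transactional has nothing to roll back. A sketch of a repository that joins the Spring-managed transaction instead, assuming a single container-managed DataSource (the postgresDataSource bean from JDBCConfig, shown below) plus a matching DataSourceTransactionManager:

import javax.sql.DataSource;
import org.springframework.jdbc.core.simple.SimpleJdbcInsert;
import org.springframework.stereotype.Repository;

@Repository
public class RecordRepository {

    private final DataSource dataSource;

    // Spring injects the container-managed DataSource bean, so connections are
    // obtained through DataSourceUtils and participate in active transactions.
    public RecordRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void insert(Record record) {
        // Built on the managed DataSource, this insert enlists in the
        // surrounding @Transactional scope and can be rolled back.
        new SimpleJdbcInsert(dataSource)
                .withTableName(record.tableName())
                .execute(record.content());
    }
}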
JDBCConfig class
@Configuration
public class JDBCConfig {

    @Bean
    public DataSource postgresDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.postgresql.Driver");
        dataSource.setUrl("jdbc:postgresql://localhost:5432/db");
        dataSource.setUsername("postgres");
        dataSource.setPassword("root");
        return dataSource;
    }
}
KafkaConsumerConfig class:
@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Value(value = "${kafka.bootstrap-server}")
    private String bootstrapServer;

    private <T extends Record> ConsumerFactory<String, T> consumerFactory(String groupId, Class<T> clazz) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(clazz));
    }

    private <T extends Record> ConcurrentKafkaListenerContainerFactory<String, T> kafkaListenerContainerFactory(String groupId, Class<T> clazz) {
        ConcurrentKafkaListenerContainerFactory<String, T> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory(groupId, clazz));
        return factory;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, PostgresRecord> migratorKafkaListenerContainerFactory() {
        return kafkaListenerContainerFactory("migrator", PostgresRecord.class);
    }
}
MigrationApplication class
@SpringBootApplication
public class MigrationApplication {
    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(MigrationApplication.class, args);
        MessageListener listener = context.getBean(MessageListener.class);
    }
}
How can I make the listenGroupMigrator method transactional?
I am using Spring Kafka transactions for my producer and consumer applications.
The requirement on the producer side is that there are multiple steps: send a message to Kafka and then save to the DB. If the save to the DB fails, I want the message sent to Kafka to be rolled back as well.
So on the consumer side I set the isolation.level to read_committed; then, if the message is rolled back in Kafka, the consumer shouldn't read it.
Code for the producer application:
@Configuration
@EnableKafka
public class KafkaConfiguration {

    @Bean
    public ProducerFactory<String, Customer> producerFactory() {
        DefaultKafkaProducerFactory<String, Customer> pf = new DefaultKafkaProducerFactory<>(producerConfigs());
        pf.setTransactionIdPrefix("customer.txn.tx-");
        return pf;
    }

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // create a minimum Producer configs
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "http://127.0.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
        props.put("schema.registry.url", "http://127.0.0.1:8081");
        // create safe Producer
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5"); // kafka 2.0 >= 1.1 so we can keep this as 5. Use 1 otherwise.
        // high throughput producer (at the expense of a bit of latency and CPU usage)
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32 * 1024)); // 32 KB batch size
        return props;
    }

    @Bean
    public KafkaTemplate<String, Customer> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public KafkaTransactionManager kafkaTransactionManager(ProducerFactory<String, Customer> producerFactory) {
        KafkaTransactionManager<String, Customer> ktm = new KafkaTransactionManager<>(producerFactory);
        ktm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
        return ktm;
    }

    @Bean
    @Primary
    public JpaTransactionManager jpaTransactionManager(EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }

    @Bean(name = "chainedTransactionManager")
    public ChainedTransactionManager chainedTransactionManager(JpaTransactionManager jpaTransactionManager,
            KafkaTransactionManager kafkaTransactionManager) {
        return new ChainedTransactionManager(kafkaTransactionManager, jpaTransactionManager);
    }
}
@Component
@Slf4j
public class KafkaProducerService {

    private KafkaTemplate<String, Customer> kafkaTemplate;
    private CustomerConverter customerConverter;
    private CustomerRepository customerRepository;

    public KafkaProducerService(KafkaTemplate<String, Customer> kafkaTemplate, CustomerConverter customerConverter, CustomerRepository customerRepository) {
        this.kafkaTemplate = kafkaTemplate;
        this.customerConverter = customerConverter;
        this.customerRepository = customerRepository;
    }

    @Transactional(transactionManager = "chainedTransactionManager", rollbackFor = Exception.class)
    public void sendEvents(String topic, CustomerModel customer) {
        log.info("Sending to Kafka: topic: {}, key: {}, customer: {}", topic, customer.getKey(), customer);
        // kafkaTemplate.send(topic, customer.getKey(), customerConverter.convertToAvro(customer));
        kafkaTemplate.executeInTransaction(kt -> kt.send(topic, customer.getKey(), customerConverter.convertToAvro(customer)));
        customerRepository.saveToDb();
    }
}
So I explicitly throw an exception in the saveToDb method, and I can see the exception being thrown. But the consumer application can still see the message.
Code for the consumer:
@Slf4j
@Configuration
@EnableKafka
public class KafkaConfiguration {

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, Customer> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Customer> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<String, Customer>(-1));
        // SeekToCurrentErrorHandler errorHandler =
        //     new SeekToCurrentErrorHandler((record, exception) -> {
        //         // recover after 3 failures - e.g. send to a dead-letter topic
        //         // log.info("***in error handler data, {}", record);
        //         // log.info("***in error handler headers, {}", record.headers());
        //         // log.info("value: {}", new String(record.headers().headers("springDeserializerExceptionValue").iterator().next().value()));
        //     }, 3);
        //
        // factory.setErrorHandler(errorHandler);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, Customer> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
        props.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, KafkaAvroDeserializer.class);
        props.put("schema.registry.url", "http://127.0.0.1:8081");
        props.put("specific.avro.reader", "true");
        props.put("isolation.level", "read_committed");
        // props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // disable auto commit of offsets
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100"); // limit records per poll
        return props;
    }
}
@Component
@Slf4j
public class KafkaConsumerService {

    @KafkaListener(id = "demo-consumer-stream-group", topics = "customer.txn")
    @Transactional
    public void process(ConsumerRecord<String, Customer> record) {
        log.info("Customer key: {} and value: {}", record.key(), record.value());
        log.info("topic: {}, partition: {}, offset: {}", record.topic(), record.partition(), record.offset());
    }
}
Did I miss something here?
executeInTransaction will run in a separate transaction. See the javadocs:
/**
 * Execute some arbitrary operation(s) on the operations and return the result.
 * The operations are invoked within a local transaction and do not participate
 * in a global transaction (if present).
 * @param callback the callback.
 * @param <T> the result type.
 * @return the result.
 * @since 1.1
 */
<T> T executeInTransaction(OperationsCallback<K, V, T> callback);
Just use send() to participate in the existing transaction.
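Applied to the sendEvents method above, the fix is just (a sketch of the changed method only):

@Transactional(transactionManager = "chainedTransactionManager", rollbackFor = Exception.class)
public void sendEvents(String topic, CustomerModel customer) {
    log.info("Sending to Kafka: topic: {}, key: {}, customer: {}", topic, customer.getKey(), customer);
    // send() joins the transaction started by @Transactional, so when
    // saveToDb() throws, the Kafka send is rolled back with it.
    kafkaTemplate.send(topic, customer.getKey(), customerConverter.convertToAvro(customer));
    customerRepository.saveToDb();
}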
I am new to Kafka and learning how to produce and consume messages to and from a Kafka topic.
I am setting up the Kafka configuration using @EnableKafka:
@EnableKafka
@Configuration
public class ConsumerConfig implements ApplicationContextAware {

    @Value("${kafka.servers}")
    private String kafkaServerAddress;

    @Value("${kafka.ca.groupid}")
    private String groupId;

    private ApplicationContext context;

    public DefaultKafkaConsumerFactory<String, Object> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Object> binlogListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        DefaultKafkaConsumerFactory<String, Object> defaultFactory = consumerFactory();
        defaultFactory.setKeyDeserializer(new StringDeserializer());
        defaultFactory.setValueDeserializer(new JsonDeserializer(BinlogMessage.class));
        factory.setConsumerFactory(defaultFactory);
        return factory;
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        context = applicationContext;
    }
}
Got the answer: it can be done by setting the AUTO_OFFSET_RESET_CONFIG property to "latest", as follows:
public DefaultKafkaConsumerFactory<String, Object> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
    return new DefaultKafkaConsumerFactory<>(props);
}
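For completeness, a listener that consumes through the factory above might look like this (a sketch; the topic name "binlog" is made up for illustration):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class BinlogListener {

    // containerFactory must match the binlogListenerContainerFactory bean name.
    @KafkaListener(topics = "binlog", containerFactory = "binlogListenerContainerFactory")
    public void onMessage(BinlogMessage message) {
        System.out.println("Received: " + message);
    }
}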
I wrote a sample Spring AMQP producer that runs against a RabbitMQ server, sending messages that are then consumed by a MessageListener using Spring AMQP. Here, I want to set the queue and message durability to false. Could anyone please help me with how to set the "durable" flag to false using annotations?
Here is the sample code:
@Configuration
public class ProducerConfiguration {

    protected final String queueName = "hello.queue";

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setRoutingKey(this.queueName);
        template.setQueue(this.queueName);
        return template;
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setUsername("guest");
        connectionFactory.setPassword("guest");
        return connectionFactory;
    }
}
public class Producer {
    public static void main(String[] args) throws Exception {
        new Producer().send();
    }

    public void send() {
        ApplicationContext context = new AnnotationConfigApplicationContext(
                ProducerConfiguration.class);
        RabbitTemplate rabbitTemplate = context.getBean(RabbitTemplate.class);
        for (int i = 1; i <= 10; i++) {
            rabbitTemplate.convertAndSend(i);
        }
    }
}
Thanks in Advance.
@Configuration
public class Config {

    @Bean
    public ConnectionFactory connectionFactory() {
        return new CachingConnectionFactory();
    }

    @Bean
    public Queue foo() {
        // The second constructor argument is "durable"; false makes the queue non-durable.
        return new Queue("foo", false);
    }

    @Bean
    public RabbitAdmin rabbitAdmin() {
        return new RabbitAdmin(connectionFactory());
    }
}
The RabbitAdmin will declare the queue the first time the connection is opened. Note that you can't change an existing queue from durable to non-durable; delete it first.
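If you specifically want the annotation style, recent Spring AMQP versions let the listener declare its own non-durable queue; message durability is a separate, per-message setting controlled by the delivery mode when publishing. A sketch under those assumptions (queue name taken from the question; a RabbitAdmin bean must still exist to perform the declaration):

import org.springframework.amqp.rabbit.annotation.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class HelloListener {

    // Declares "hello.queue" as non-durable on startup; note that "durable"
    // is a String attribute in this annotation, not a boolean.
    @RabbitListener(queuesToDeclare = @Queue(value = "hello.queue", durable = "false"))
    public void receive(Integer message) {
        System.out.println("Received: " + message);
    }
}

On the producer side, non-persistent messages can then be sent with a MessagePostProcessor, e.g. rabbitTemplate.convertAndSend("hello.queue", (Object) i, m -> { m.getMessageProperties().setDeliveryMode(MessageDeliveryMode.NON_PERSISTENT); return m; });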