Go: Append Logs to CloudWatch in a Configured Pattern (REST)

I'm moving to Go from the Java Spring Boot world...
Go version: 1.18.3
I'm writing an AWS Lambda in Go. It sits behind AWS API Gateway as a REST endpoint.
All it does is adapt an incoming JSON payload to an external provider's REST API contract and send the request out to that provider.
Now I want to log the following in AWS CloudWatch:
the incoming JSON payload
the adapted JSON sent out to the external API
the response JSON received back from the external API
the response JSON sent back to the caller
In Spring Boot we did this by setting up a Logback CloudWatch appender configuration that adds a pattern to the logs.
Here's a sample pattern:
<appender name="AWS_CLOUDWATCH_LOGGER" class="com.abc.common.logback.CloudwatchAppender">
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<jsonFactoryDecorator class="com.abc.commonlogging.common.LogstashFactoryDecorator"/>
<providers>
<logstashMarkers/>
<pattern>
<pattern>
{"#timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss.SSSZZ}",
"logger": "%logger",
"level": "%level",
"thread": "%thread",
"caller": "%caller",
"class": "%class",
"line_number":"#asLong{%line}",
"method": "%method",
"message": "%message",
"hostname":"${HOSTNAME}",
"transactionId":"%logcxt{t}"
</pattern>
</pattern>
<arguments>
<includeNonStructuredArguments>false</includeNonStructuredArguments>
</arguments>
</providers>
</encoder>
<!-- Log Group Name - be mindful of the name used here as it will create a new group if none is found-->
<logGroupName>BACKEND-SERVICE-LOGS</logGroupName>
<!-- vertical name - be mindful of the name used here as it will create a new stream if none is found-->
<logStreamName>${APPNAME}</logStreamName>
<!-- Set Flush Time -->
<!-- Required to make logging process asynchronous -->
<maxFlushTimeMillis>30000</maxFlushTimeMillis>
<!-- Hardcoded AWS region -->
<!-- So even when running inside an AWS instance in us-west-1, logs will go to ap-southeast-2 -->
<logRegion>ap-southeast-2</logRegion>
<!-- Maximum number of events in each batch (50 is the default) -->
<!-- will flush when the event queue has 50 elements, even if still in quiet time (see maxFlushTimeMillis) -->
<maxBatchLogEvents>50</maxBatchLogEvents>
</appender>
Here's how it's set up at the root log level:
<root level="INFO">
    <appender-ref ref="CONSOLE" />
    <appender-ref ref="FILE" />
    <springProfile name="!LOCALHOST">
        <appender-ref ref="AWS_CLOUDWATCH_LOGGER" />
    </springProfile>
</root>
Here's how it shows up in CloudWatch:
{
    "#timestamp": "2022-08-05T01:47:27.491+0000",
    "logger": "com.abc.logging.TRANSACTION_LOGS",
    "level": "INFO",
    "thread": "SimpleAsyncTaskExecutor-5",
    "caller": "Caller+0\t at com.abc.service.logger.sifting.util.LogUtil.log(LogUtil.java:62)\nCaller+1\t at com.abc.service.logger.sifting.SiftingLoggingService.log(SiftingLoggingService.java:11)\nCaller+2\t at com.abc.service.logger.sifting.SiftingLoggingService.log(SiftingLoggingService.java:7)\nCaller+3\t at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nCaller+4\t at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n",
    "class": "com.abc.service.logger.sifting.util.LogUtil",
    "line_number": 62,
    "method": "log",
    "message": "Service Original Request: \n----------------------------\n{
        "name": "Yomiko",
        "address": {
            "city": "Tokyo",
            "street": "Shibaura St"
        },
        "children": [
            {
                "lastName": "Takayashi"
            }
        ],
        "isEmployed": false
    }\n--------------------------------------",
    "transactionId": "36654",
    "HOSTNAME": "dataeng-apply-deployment-7d8f48cbcb-ddfhf"
}
How can I achieve this using Go? Is there a straightforward way of doing it, or can someone suggest a library that would help?
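There is no direct Logback-appender equivalent in Go, but in a Lambda you generally don't need one: anything the function writes to stdout/stderr is shipped to CloudWatch Logs automatically, so a structured JSON logger writing to stdout reproduces the pattern-style fields. Below is a minimal sketch, assuming the zerolog library (github.com/rs/zerolog) and the official Go runtime (github.com/aws/aws-lambda-go); adaptRequest and callProvider are hypothetical stand-ins for the adapter and the outbound HTTP call:

package main

import (
    "context"
    "os"
    "time"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/rs/zerolog"
)

// Anything written to stdout inside a Lambda lands in CloudWatch Logs,
// so a JSON logger on stdout yields one structured event per line.
var logger zerolog.Logger

func init() {
    zerolog.TimeFieldFormat = time.RFC3339 // ISO-8601-style timestamps, like the Logback pattern
    logger = zerolog.New(os.Stdout).With().Timestamp().Caller().Logger()
}

// adaptRequest and callProvider are hypothetical placeholders for the real
// adapter and the external REST call.
func adaptRequest(body string) []byte    { return []byte(body) }
func callProvider(payload []byte) []byte { return payload }

func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    // Attach a per-request transaction id to every event, like %logcxt{t}.
    log := logger.With().Str("transactionId", req.RequestContext.RequestID).Logger()

    // 1. Incoming JSON payload. RawJSON embeds the body without re-escaping;
    //    it assumes the body is valid JSON.
    log.Info().RawJSON("payload", []byte(req.Body)).Msg("Service Original Request")

    // 2. Adapted JSON sent out to the external API.
    adapted := adaptRequest(req.Body)
    log.Info().RawJSON("payload", adapted).Msg("Provider Request")

    // 3. Response JSON received back from the external API.
    providerResp := callProvider(adapted)
    log.Info().RawJSON("payload", providerResp).Msg("Provider Response")

    // 4. Response JSON sent back to the caller.
    resp := events.APIGatewayProxyResponse{StatusCode: 200, Body: string(providerResp)}
    log.Info().RawJSON("payload", []byte(resp.Body)).Msg("Service Response")
    return resp, nil
}

func main() {
    lambda.Start(handler)
}

Since every event is a single JSON line, CloudWatch Logs Insights can filter on the fields directly (e.g. fields @timestamp, message | filter transactionId = "36654"). Note that unlike the Logback appender, the log group and stream names are fixed by Lambda (/aws/lambda/<function-name>). zap or logrus (with its JSONFormatter) would work the same way if you prefer them.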

Related

Kafka connector error on connecting WSO2 Micro Integrator with Kafka

I'm trying to connect Kafka with WSO2 Micro Integrator by following these instructions. I used WSO2's Integration Studio to develop it. Here is the code:
<?xml version="1.0" encoding="UTF-8"?>
<api context="/create-customer" name="create-customer" xmlns="http://ws.apache.org/ns/synapse">
<resource methods="POST">
<inSequence>
<kafkaTransport.init>
<bootstrapServers>localhost:9092</bootstrapServers>
<keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
<valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
</kafkaTransport.init>
<kafkaTransport.publishMessages>
<topic>customer</topic>
</kafkaTransport.publishMessages>
<respond/>
</inSequence>
<outSequence/>
<faultSequence/>
</resource>
</api>
But when I send the request, I get the following error:
[2023-01-14 21:27:36,570] INFO {KafkaProduceConnector} - {api:create-customer} SEND : send message to Broker lists
[2023-01-14 21:27:36,583] ERROR {KafkaProduceConnector} - {api:create-customer} Kafka producer connector : Error sending the message to broker org.wso2.carbon.connector.exception.InvalidConfigurationException: Connection name is not set.
at org.wso2.carbon.connector.KafkaProduceConnector.getConnectionName(KafkaProduceConnector.java:262)
at org.wso2.carbon.connector.KafkaProduceConnector.publishMessage(KafkaProduceConnector.java:237)
at org.wso2.carbon.connector.KafkaProduceConnector.connect(KafkaProduceConnector.java:138)
at org.wso2.carbon.connector.core.AbstractConnector.mediate(AbstractConnector.java:32)
at org.apache.synapse.mediators.ext.ClassMediator.updateInstancePropertiesAndMediate(ClassMediator.java:178)
at org.apache.synapse.mediators.ext.ClassMediator.mediate(ClassMediator.java:97)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:110)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:72)
at org.apache.synapse.mediators.template.TemplateMediator.mediate(TemplateMediator.java:136)
at org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:170)
at org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:93)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:110)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:72)
at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:158)
at org.apache.synapse.api.Resource.process(Resource.java:342)
at org.apache.synapse.api.API.process(API.java:477)
at org.apache.synapse.api.AbstractApiHandler.apiProcess(AbstractApiHandler.java:93)
at org.apache.synapse.api.AbstractApiHandler.dispatchToAPI(AbstractApiHandler.java:71)
at org.apache.synapse.api.rest.RestRequestHandler.dispatchToAPI(RestRequestHandler.java:90)
at org.apache.synapse.api.rest.RestRequestHandler.process(RestRequestHandler.java:76)
at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:54)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:344)
at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:101)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:376)
at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:435)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
I'm using MI 4.1.0 and Kafka 2.12, as required by the documentation. The Kafka connector used inside Integration Studio is 3.12. As I noticed from this reference, we can set a connection name inside <kafkaTransport.init>, but Integration Studio doesn't allow me to add it: when I check the properties of <kafkaTransport.init>, there is no connection-name field. Can anyone help me get past this?
Indeed, there should be a usable name field in <kafkaTransport.init>; that may be a bug. In the source code it looks like the connection name can be retrieved from the message context. Try declaring it as a property named name, set before <kafkaTransport.publishMessages>, like below:
<property name="name" value="Kafka_Sample" scope="default"/>
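For illustration, the inSequence from the question would then look something like this (Kafka_Sample is just an example connection name):
<inSequence>
    <kafkaTransport.init>
        <bootstrapServers>localhost:9092</bootstrapServers>
        <keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
        <valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
    </kafkaTransport.init>
    <property name="name" value="Kafka_Sample" scope="default"/>
    <kafkaTransport.publishMessages>
        <topic>customer</topic>
    </kafkaTransport.publishMessages>
    <respond/>
</inSequence>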

Add a hook to Sentry Logback to scrub data

I'm using the Logback SDK for Java to send events to Sentry as described in the documentation.
Snippet:
<conversionRule conversionWord="CUSTOM_CONVERSION_RULE"
                converterClass="clazz..." />
...
<property scope="context" name="myEnc" value="%d{ISO8601,UTC} | %-5level | %-50thread | %-55logger{55} | %CUSTOM_CONVERSION_RULE" />
...
<appender name="SENTRY" class="io.sentry.logback.SentryAppender">
    <dsn>...</dsn>
    <encoder>${myEnc}</encoder>
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>ERROR</level>
    </filter>
</appender>
...
The initial problem was that events sent to Sentry were not converted by my custom conversion rule. All the other appenders, such as Console, that use the myEnc property containing the conversion rule parse the data as expected, but io.sentry.logback.SentryAppender in combination with the encoder somehow doesn't. The filter property works, as does dsn, so I do get the errors in Sentry, just not with my custom parsing.
The version I use for io.sentry.sentry-logback (and transitively sentry) is 1.7.24.
I then read about the before-send hook in the Sentry docs, which is what I want in order to control what data is sent to Sentry, and I had to upgrade to the latest version for that, which is 3.1.3 at the time of writing.
The Logback XML config changed a bit:
<appender name="SENTRY" class="io.sentry.logback.SentryAppender">
<options>
<dsn>...</dsn>
<beforeSend>????</beforeSend>
</options>
...
</appender>
From what I can see, the before-send hook is exactly what I need to scrub data when required, because I don't want some info to be sent to Sentry (docs).
Now, the second issue is that I don't know how to reference a method here. In the Java config there is a BiFunction that takes the event and can alter it. But I want to apply this hook to all my log events, which is why the only place it is configured is in the Logback SDK.
In Spring Boot, for example, there is a starter for Sentry and, of course, a bean that you can inject in the auto-configuration. But I'm using Scala, with no Spring Boot.
Also, the project is already in prod, so I cannot change lots of things; I'm looking for the smallest change that will allow me to add a hook to Logback's Sentry SDK.
Here is the appender, and it looks like (I'm not sure how it works) the options can be populated from XML and then passed to init, which takes them all into account, including my before-send hook.
I don't know if it's accepted to have two questions with only one referenced in the title, but I didn't find a nicer way to ask/explain the problem, because one thing led to another.
To summarize the questions:
1. Why is the custom rule not working with Sentry's Logback appender?
2. How can I let the appender know about my hook and use it?
Thanks in advance!
You can configure Sentry independently from the appender configuration in logback.xml. For example:
import io.sentry.Sentry;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {
    private static final Logger LOGGER = LoggerFactory.getLogger(Main.class);

    static {
        Sentry.init(options -> {
            options.setDsn("PUT YOUR DSN HERE");
            options.setBeforeSend((sentryEvent, o) -> {
                sentryEvent.setTag("custom", "tag");
                return sentryEvent;
            });
        });
    }

    public static void main(String[] args) {
        LOGGER.error("oops");
    }
}
<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="sentry" class="io.sentry.logback.SentryAppender" />
    <root level="debug">
        <appender-ref ref="console"/>
        <appender-ref ref="sentry"/>
    </root>
</configuration>
Check the complete code sample in the GitHub repo: https://github.com/maciej-scratches/sentry-logback-custom-config

Mule Persistent storage

I am trying to use persistent queue storage to recover from unexpected failures. My Mule version is 3.3.1.
I pick up messages from a queue and enter an "until successful" loop. If Mule stops for some reason, I would like the messages to be persistent.
Here is my relevant code:
<spring:bean id="outboundStore" class="org.mule.util.store.QueuePersistenceObjectStore" />
<until-successful objectStore-ref="outboundStore"
I do not see the messages in the .mule directory. What am I doing wrong?
Sorry if the question is not clear.
Adding the flows as requested:
<flow name="InitialFlow" processingStrategy="synchronous">
    <inbound-endpoint ref="firstQueue"/>
    <until-successful objectStore-ref="outboundStore" maxRetries="6" secondsBetweenRetries="5" deadLetterQueue-ref="secondQueue" failureExpression="groovy:message.getInvocationProperty('soapResponse') == 'BAD'">
        <flow-ref name="someSubFlow" />
    </until-successful>
</flow>
<sub-flow name="someSubFlow">
    <http:outbound-endpoint ref="someEndpoint" exchange-pattern="request-response" method="GET" />
</sub-flow>
Please do let me know if you need more information.
Using a configuration very similar to yours, I can totally see messages pending delivery written to the .mule/queuestore/queuestore directory.
The only thing I can think of is an issue with the expression groovy:message.getInvocationProperty('soapResponse') == 'BAD', which could somehow mess with the processing.
Is this expression correct? Why not use MEL?
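For reference, a MEL version of that failure expression might look like this (assuming soapResponse is an invocation-scoped property, which MEL exposes as flowVars):
<until-successful objectStore-ref="outboundStore" maxRetries="6" secondsBetweenRetries="5" deadLetterQueue-ref="secondQueue" failureExpression="#[flowVars['soapResponse'] == 'BAD']">
    <flow-ref name="someSubFlow" />
</until-successful>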

Transaction commit delay when routing message from one jms queue to another

We are trying to build a simple transacted JMS-to-JMS router using Mule ESB and JBoss Messaging. When we run Mule ESB with the application configured as below, we observe strange behaviour:
1. Approximately 10 messages are routed from queue test1 to test2.
2. Nothing happens for ~40 seconds.
3. Go to 1.
Queue test1 is filled with around 500 messages when we start the test. We use Mule 3.2 and JBoss 5.1.
If I remove the transactions from the code below, everything works fine and all messages are sent to queue test2 instantly. Everything is also fine if I change the transactions from XA to JMS, by replacing the xa-transaction tags with jms:transaction.
I don't know what causes this pause in message processing on the ESB; probably the transaction commit is delayed.
My question is: what should I do to get XA transactions working correctly?
I'll provide more details if needed. I asked this question on the Mule ESB forum before with no answer: http://forum.mulesoft.org/mulesoft/topics/transaction_commit_delay_when_routing_message_from_one_jms_queue_to_another
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:jms="http://www.mulesoft.org/schema/mule/jms" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:core="http://www.mulesoft.org/schema/mule/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jbossts="http://www.mulesoft.org/schema/mule/jbossts" version="CE-3.2.1" xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/jms http://www.mulesoft.org/schema/mule/jms/current/mule-jms.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/jbossts http://www.mulesoft.org/schema/mule/jbossts/current/mule-jbossts.xsd ">
    <jbossts:transaction-manager> </jbossts:transaction-manager>
    <configuration>
        <default-threading-profile maxThreadsActive="30" maxThreadsIdle="5"/>
        <default-receiver-threading-profile maxThreadsActive="10" maxThreadsIdle="5"/>
    </configuration>
    <spring:beans>
        <spring:bean id="jmsJndiTemplate" class="org.springframework.jndi.JndiTemplate" doc:name="Bean">
            <spring:property name="environment">
                <spring:props>
                    <spring:prop key="java.naming.factory.url.pkgs">org.jboss.naming:org.jnp.interfaces</spring:prop>
                    <spring:prop key="jnp.disableDiscovery">true</spring:prop>
                    <spring:prop key="java.naming.factory.initial">org.jnp.interfaces.NamingContextFactory</spring:prop>
                    <spring:prop key="java.naming.provider.url">localhost:1099</spring:prop>
                </spring:props>
            </spring:property>
        </spring:bean>
        <spring:bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean" doc:name="Bean">
            <spring:property name="jndiTemplate">
                <spring:ref bean="jmsJndiTemplate"/>
            </spring:property>
            <spring:property name="jndiName">
                <spring:value>XAConnectionFactory</spring:value>
            </spring:property>
        </spring:bean>
    </spring:beans>
    <jms:connector name="JMS" specification="1.1" numberOfConsumers="10" connectionFactory-ref="jmsConnectionFactory" doc:name="JMS"/>
    <flow name="flow" doc:name="flow">
        <jms:inbound-endpoint queue="test1" connector-ref="JMS" doc:name="qt1">
            <xa-transaction action="ALWAYS_BEGIN"/>
        </jms:inbound-endpoint>
        <echo-component doc:name="Echo"/>
        <jms:outbound-endpoint queue="test2" connector-ref="JMS" doc:name="qt2">
            <xa-transaction action="ALWAYS_JOIN"/>
        </jms:outbound-endpoint>
        <echo-component doc:name="Echo"/>
    </flow>
</mule>
Here you can find a log fragment for a single-message interaction. Please note that in this case there was no delay.
And here is a log fragment for 11 messages. All of them were in queue test1 when the app started; as you can see, 10 messages are routed instantly and one is delayed by 1 minute.
I've found the root of my problem: my queues were defined with the following attribute:
<attribute name="RedeliveryDelay">60000</attribute>
Removing it or setting a low value solves my problem with the delays. The problem is, I don't know why :)
I always thought the redelivery delay was used when delivery fails, which was not the case in my app.

jboss-esb fs-listener jbm message queue overflow

We have a JBoss ESB server which reads files from the file system on a schedule (every 20 seconds), converts them into ESB messages, and then parses them.
There are some other providers/listeners (JMS) and services configured on the ESB server. When there is an error in one of the services, it affects the above process: the file system provider (gateway) works fine, but the jms-listener that takes the gateway messages does not, and lots of messages accumulate in the JBM queue (the jbm_msg Oracle DB table).
Here is the problem: when the server is restarted, messages in the JBM queue are parsed by the ESB for just 20 seconds (the scheduled frequency of the fs-provider), then never again, and CPU usage goes up to 100% and stays there. We believe the fs-provider somehow interrupts the jms-provider.
Is there any configuration we have been missing?
Here are the configuration files that we have:
jboss-esb.xml
<?xml version="1.0" encoding="UTF-8"?>
<jbossesb xmlns="http://anonsvn.labs.jboss.com/labs/jbossesb/trunk/product/etc/schemas/xml/jbossesb-1.0.1.xsd" parameterReloadSecs="5">
    <providers>
        <fs-provider name="SitaIstProvider">
            <fs-bus busid="gw_sita_ist">
                <fs-message-filter
                    directory="/ikarussita/IST/IN"
                    input-suffix=".RCV"
                    work-suffix=".lck"
                    post-delete="false"
                    post-directory="/ikarussita/IST/OK"
                    post-suffix=".ok"
                    error-delete="false"
                    error-directory="/ikarussita/IST/ERR"
                    error-suffix=".err"/>
            </fs-bus>
        </fs-provider>
        <jms-provider name="SitaESBQueue" connection-factory="ConnectionFactory">
            <jms-bus busid="esb_sita_queue">
                <jms-message-filter dest-type="QUEUE" dest-name="queue/esb_sita_queue"/>
            </jms-bus>
        </jms-provider>
    </providers>
    <services>
        <service category="SITA" name="SITA_IST" description="SITA Daemon For ISTCOXH">
            <listeners>
                <fs-listener name="Sita_Ist_Gateway" busidref="gw_sita_ist" is-gateway="true" schedule-frequency="20" />
                <jms-listener name="Jms_Sita_EsbAware" busidref="esb_sita_queue" />
            </listeners>
            <actions mep="OneWay">
                <action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage" />
                <action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender" />
            </actions>
        </service>
    </services>
</jbossesb>
jbm-queue-service.xml
<?xml version="1.0" encoding="UTF-8"?>
<server>
<mbean code="org.jboss.jms.server.destination.QueueService"
name="jboss.messaging.destination:service=Queue,name=esb_sita_queue"
xmbean-dd="xmdesc/Queue-xmbean.xml">
<depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
<depends>jboss.messaging:service=PostOffice</depends>
</mbean>
<server>
deployment.xml
<jbossesb-deployment>
    <depends>jboss.messaging.destination:service=Queue,name=esb_sita_queue</depends>
</jbossesb-deployment>
Thanks
Split the service into two separate services: one handling the JMS queue, the other the file poller. Specify the same action pipeline in both; that way you get the same functionality but without the threading issue. Also use the max-threads attribute on the listener to specify the number of reading threads.