Log not being recorded for quickfix c++ Session - quickfix

I am currently running a FIX initiator and successfully connecting to an acceptor. The problem is that no log is being recorded. The application's .cfg is shown below.
[DEFAULT]
ConnectionType=initiator
ReconnectInterval=2
ResetOnLogon=Y
FileStorePath=store
FileLogPath=logging
StartTime=00:00:00
EndTime=00:00:00
UseDataDictionary=Y
# standard config elements
[SESSION]
# inherit ConnectionType, ReconnectInterval and SenderCompID from default
BeginString=FIX.4.4
SenderCompID=INIT
TargetCompID=ACCEPT
SocketConnectHost=xxx
SocketConnectPort=xxx
HeartBtInt=30
DataDictionary=FIX44MD.xml
[SESSION]
BeginString=FIX.4.4
SenderCompID=INIT
TargetCompID=ACCEPT1
SocketConnectHost=xxx
SocketConnectPort=xxx
HeartBtInt=30
DataDictionary=FIX44OMS.xml
The excerpt of the code that initiates the connection is shown below:
std::string file = argv[ 1 ];
FIX::SessionSettings settings( file );
Application application;
FIX::FileStoreFactory storeFactory( settings );
FIX::ScreenLogFactory logFactory( settings );
FIX::SocketInitiator initiator( application, storeFactory, settings, logFactory);
initiator.start();
application.run();
initiator.stop();
I'm pretty sure the problem isn't related to write permissions, as I am running the app as an administrator.

That's because you're using a ScreenLogFactory, which, as its name implies, only logs to the screen (i.e. your terminal).
Change it to a FileLogFactory and you should be in business.
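For reference, a minimal sketch of the same setup with the factory swapped out (this assumes the QuickFIX C++ library and the Application class from the question; FileLogFactory writes session logs under the FileLogPath directory from the .cfg):

```cpp
// Same initiator setup as above, but with FileLogFactory so session
// logs are written under FileLogPath ("logging" in the .cfg above).
std::string file = argv[ 1 ];
FIX::SessionSettings settings( file );
Application application;
FIX::FileStoreFactory storeFactory( settings );
FIX::FileLogFactory logFactory( settings );   // was FIX::ScreenLogFactory
FIX::SocketInitiator initiator( application, storeFactory, settings, logFactory );
initiator.start();
application.run();
initiator.stop();
```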

Related

QuickFix rejecting messages with error "Out of order repeating group members"

We are using QuickFIX to extract trades and orders from the ICE exchange. The current .NET application, which fetches the security definition requests, runs fine. Our task is to migrate the application to Scala. While submitting the security definition request, QuickFIX rejects the messages with an "Out of order repeating group members" error.
Request: 8=FIX.4.2_9=51_35=c_48=0_167=FUT_320=XXX_Security_Request 0_321=3_10=175
QuickFIX rejects it with: 8=FIX.4.2_9=132_35=3_34=12_49=xxxx_50=xxxxxx_52=20200805-10:49:01.193_56=ICE_45=12_58=Out of order repeating group members, field=305_371=305_372=d_10=097_
Session config settings:
[DEFAULT]
ConnectionType=initiator
StartTime=00:00:00
EndTime=23:59:59
FileLogPath=log
FileStorePath=store
SocketConnectPort=*****
SocketConnectHost=*****
ResetOnLogon=Y
ResetOnDisconnect=Y
AllowUnknownMsgFields=Y
ReconnectInterval=8
ValidateIncomingMessages=N
FileLogPath=.\Log\FixLog
FileLogBackupPath=.\Log\Backup
[SESSION]
BeginString=FIX.4.2
SenderCompID=****
TargetCompID=ICE
HeartBtInt=30
SenderSubID=******
UseDataDictionary=Y
DataDictionary=.\Config\FIX42.xml
ReconnectInterval=8
ValidateUserDefinedFields=N
AllowUnknownMsgFields=Y
ValidateFieldsOutOfOrder=N
We are using the same FIX dictionary (FIX42.xml) as the currently running .NET application, so we are not sure how this is happening. Could someone please help us resolve this?
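One thing worth checking, since the dictionary contents aren't shown: QuickFIX parses repeating-group members strictly in the order the DataDictionary declares them, with the first declared field acting as the group delimiter, and ValidateFieldsOutOfOrder=N does not relax ordering inside groups. A hypothetical FIX42.xml excerpt (field names and order here are illustrative, not ICE's actual layout):

```xml
<!-- Hypothetical excerpt; field names/order are illustrative only.
     Group members must be declared in the exact order the counterparty
     sends them, and the first field is the group delimiter. If ICE
     sends tag 305 in a position other than the declared one, QuickFIX
     rejects with "Out of order repeating group members, field=305". -->
<group name="NoRelatedSym" required="N">
  <field name="UnderlyingSymbol" required="N"/>           <!-- tag 311 -->
  <field name="UnderlyingSecurityIDSource" required="N"/> <!-- tag 305 -->
</group>
```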

Kafka Server - Could not find a 'KafkaServer' in JAAS

I have a standalone kafka broker that I'm trying to configure SASL for. Configurations are below. I'm trying to set up SASL_PLAIN authentication on the broker.
My understanding is that with the listener.name... configuration in the server.properties, I shouldn't need the jaas file. But I've experimented with one to see if that might be a better approach.
I have experimented with each of these commands, but both result in the same exception.
sudo bin/kafka-server-start etc/kafka/server.properties
sudo -Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf bin/kafka-server-start etc/kafka/server.properties
The exception displayed is:
Fatal error during KafkaServer startup. Prepare to shutdown... Could
not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the
JAAS configuration. System property 'java.security.auth.login.config'
is not set
server.properties:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map: SASL_PLAINTEXT:SASL_PLAINTEXT
listener.name.SASL_PLAINTEXT.plain.sasl.jaas.config:
org.apache.kafka.common.security.plain.PlainLoginModule required /
username="username" /
password="Password" /
user_username="Password";
advertised.listeners=SASL_PLAINTEXT://[ipaddress]:9092
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
secutiy.inter.broker.protocol=SASL_PLAINTEXT
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="username"
password="Password"
user_username="Password";
};
I've spent a day looking at this already - has anyone else had experience with this problem?
You need to export a variable rather than pass the config inline to kafka-server-start (or to sudo).
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
bin/kafka-server-start /path/to/server.properties
Ref. Confluent's sections on Kafka security
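To see why the export works where the inline attempt doesn't: kafka-server-start is a shell wrapper that reads KAFKA_OPTS from its environment, and an exported variable is inherited by every child process the shell starts. A minimal sketch (the file path is a placeholder):

```shell
# Exported variables are inherited by child processes; kafka-server-start
# is a shell wrapper that picks up KAFKA_OPTS and passes it to the JVM.
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
# Any child shell (standing in for kafka-server-start) sees the value:
sh -c 'echo "$KAFKA_OPTS"'
```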
Putting my mistakes here for posterity:
Don't run your startup commands from the CLI; put them in a .sh file and run them from there:
For example, something like this:
zkstart:
export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/zookeeper_jaas.conf"
bin/zookeeper-server-start etc/kafka/zookeeper.properties &
kafkastart:
export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf"
bin/kafka-server-start etc/kafka/server.properties
If you still encounter an error related to the configs, check your _jaas files to ensure all the configuration sections named in the error messages are present. If they are, the format is likely not quite right: check for the two semicolons in each section, and if that fails, try recreating the file entirely from scratch (or from a copy-and-paste from the documentation).
Edit:
The final solution for me was to add the export.... lines to the beginning of the corresponding kafka-server-start and zookeeper-server-start files. It took me a while before 'everything is a file' finally clicked and I realized the script files were the actual basis for the services.

How can I set the log format in Dancer2?

I am trying to change the logging format to include the line number of the file for a Dancer2 app. The default does not seem to do this. If I add the line
log_format: "[%f--%l] %m"
(which seems correct based on the Dancer2::Core::Role::Logger documentation) nothing changes.
log_format isn't a global configuration directive. It's specific to the particular logging engine you're using, so you have to put it in the configuration section for that engine.
For example, if you're using the Dancer2::Logger::File engine:
logger: "File"
engines:
logger:
File:
log_format: "[%f--%l] %m"
Thanks @ThisSuitIsBlackNot. I've also discovered that if there are other engines (such as one for sessions), they need to be nested in the same "engines" section. I have a session engine, and it appears that it needs to be done as follows:
logger: Console
session: YAML
engines:
logger:
Console:
log_level: debug
log_format: "[%f----%l] %m"
session:
YAML:
session_dir: /tmp/dancer-sessions
I already had the session engine information in place, and it appeared that the console engine information was being overwritten.

hornetq restart overwrites the log files

A HornetQ restart overwrites the log files, although log file rotation works fine. I am using the following config, running HornetQ in standalone clustered mode:
# File handler configuration
handler.FILE=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler
handler.FILE.level=DEBUG
handler.FILE.properties=autoFlush,fileName,suffix,append
handler.FILE.autoFlush=true
handler.FILE.fileName=../logs/hornetq.log
handler.FILE.suffix=.yyyy-MM-dd
handler.FILE.append=true
handler.FILE.formatter=PATTERN
Found the issue: the order of the properties matters!
# File handler configuration
handler.FILE=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler
handler.FILE.level=DEBUG
handler.FILE.properties=autoFlush,append,fileName,suffix
handler.FILE.autoFlush=true
handler.FILE.append=true
handler.FILE.fileName=../logs/hornetq.log
handler.FILE.suffix=.yyyy-MM-dd
handler.FILE.formatter=PATTERN
https://community.jboss.org/message/742699

How can I set the logger level with Quartz Scheduler and Cocoon?

I have a project with an old version of Cocoon. There are two cron jobs.
The project has the following log4j config:
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.conversionPattern=%d %-5p - %-26.26c{1} - %m\n
log4j.rootLogger=WARN,CONSOLE
In the logs folder there is a cron.log file, but it contains some INFO entries. How can I set up the log level for this?
You can try adding the following line to set the log level of the org.quartz package:
log4j.logger.org.quartz=WARN,CONSOLE
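A caveat on that line (this is general log4j 1.x behavior, not something stated in the question): attaching CONSOLE to org.quartz while the root logger also writes to CONSOLE duplicates each Quartz message unless additivity is disabled. A sketch:

```properties
# Raising only the level; org.quartz still inherits CONSOLE from root:
log4j.logger.org.quartz=WARN
# If you do attach CONSOLE explicitly, disable additivity to avoid
# each message being printed twice (once here, once via the root):
# log4j.logger.org.quartz=WARN,CONSOLE
# log4j.additivity.org.quartz=false
```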
BTW, you probably have something else that configures this file appender (cron.log), because by default Quartz (2.x) does not provide such configuration.
HTH