I'm trying to control the minimum level of logs written to Seq using remote level control in Serilog. However, it seems that this ignores the overrides I have set for individual log sources.
This is the config I have:
var logLevelSwitch = new LoggingLevelSwitch(LogEventLevel.Warning);

var logConfig = new LoggerConfiguration()
    .MinimumLevel.ControlledBy(logLevelSwitch)
    .MinimumLevel.Override("MyApplication", LogEventLevel.Debug)
    .MinimumLevel.Override("Microsoft.Hosting.Lifetime", LogEventLevel.Information)
    .MinimumLevel.Override("Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker", LogEventLevel.Information)
    .Enrich.FromLogContext()
    .Enrich.WithProperty("Source", applicationName)
    .Enrich.WithMachineName()
    .WriteTo.Seq(
        serverUrl: seqUrl,
        apiKey: seqApiKey,
        compact: true,
        eventBodyLimitBytes: 262144,
        batchPostingLimit: 40,
        controlLevelSwitch: logLevelSwitch);
If the level in Seq is set to Information or Warning, the Debug logs from MyApplication aren't written. Is this expected?
For me it would be easier to say: "Only log Warning, but do log Debug/Info for these few specific sources."
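For illustration, MinimumLevel.Override also has an overload that accepts a LoggingLevelSwitch, so "Warning by default, Debug for a few sources" can be expressed with per-source switches that stay adjustable at runtime. The following is only a sketch of that idea; whether it plays well with the Seq sink's controlLevelSwitch (which may still filter what the sink ships to the server) is exactly what would need testing:

var defaultSwitch = new LoggingLevelSwitch(LogEventLevel.Warning); // controlled by Seq
var myAppSwitch = new LoggingLevelSwitch(LogEventLevel.Debug);     // local, per-source

var logConfig = new LoggerConfiguration()
    .MinimumLevel.ControlledBy(defaultSwitch)
    .MinimumLevel.Override("MyApplication", myAppSwitch)
    .WriteTo.Seq(
        serverUrl: seqUrl,
        apiKey: seqApiKey,
        controlLevelSwitch: defaultSwitch);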
In one of my projects (.NET Core 3.1), I need a way to redirect System.Diagnostics.Trace.WriteLine to a Serilog file sink. I found the SerilogTraceListener package, which seems to be the right candidate.
Unfortunately, so far I haven't been able to make it work.
To reproduce it:
1) Create a .NET Core console project
2) Add the following NuGet packages: Serilog, SerilogTraceListener, Serilog.Sinks.Console, Serilog.Sinks.File
3) Replace the Program class with the following:
using System.Diagnostics;
using Serilog;

class Program
{
    static void Main(string[] args)
    {
        // Works fine
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .WriteTo.File("log.txt")
            .CreateLogger();
        Log.Logger.Information("A string written using Logger.Information");

        // Trace is written to the console but not to the file
        Trace.Listeners.Add(new ConsoleTraceListener());
        Trace.Listeners.Add(new global::SerilogTraceListener.SerilogTraceListener());
        System.Diagnostics.Trace.WriteLine("A string written using Trace.WriteLine");
    }
}
What am I doing wrong?
TL;DR: You need to set the minimum level to Debug or Verbose in order to see the Trace.WriteLine messages.
SerilogTraceListener maps System.Diagnostics.TraceEventType to Serilog.LogEventLevel, and a Trace.WriteLine call is mapped to the Debug log event level.
That means Serilog's logger receives a message at LogEventLevel.Debug.
Serilog's default minimum level is Information, which means Debug messages are suppressed.
You have to configure the minimum level as Debug (or Verbose) in order to see the Trace.WriteLine messages:
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug() // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    .WriteTo.Console()
    .WriteTo.File("log.txt")
    .CreateLogger();
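If you want the Trace output in the file without making the console noisier, per-sink minimum levels are another option; a sketch using the sinks' restrictedToMinimumLevel parameter:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.Console(restrictedToMinimumLevel: LogEventLevel.Information)
    .WriteTo.File("log.txt") // receives Debug and above
    .CreateLogger();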
I am trying to convert documents using the Bluemix Document Conversion service from a Node.js application. I am getting nothing but errors in my app, even though the test document I'm using converts fine on the demo page. Below is a minimal app that demonstrates the problem. (Note that, while this app converts a PDF from disk, the "real" app can't do that, hence the Buffer object.)
'use strict';
var fs = require('fs');
var DocumentConversionV1 = require('watson-developer-cloud/document-conversion/v1');
var bluemix=require('./my_bluemix');
var extend=require('util')._extend; //Node.js' built-in object extend function
var dcCredentials = extend({
url: '<url>',
version: 'v1',
username: '<username>',
password: '<password>'
}, bluemix.getServiceCreds('document_conversion')); // VCAP_SERVICES
var document_conversion = new DocumentConversionV1(dcCredentials);
var contents = fs.readFileSync('./testdoc.pdf', 'utf8');
var parms={
file: new Buffer(contents,'utf8'),
conversion_target: 'ANSWER_UNITS', // (JSON) ANSWER_UNITS, NORMALIZED_HTML, or NORMALIZED_TEXT
content_type:'application/pdf',
contentType:'application/pdf', //don't know which of these two works, seems to be inconsistent so I include both
html_to_answer_units: {selectors: [ 'h1', 'h2','h3', 'h4']},
};
console.log('First 100 chars of file:\n******************\n'+contents.substr(0,100)+'\n******************\n');
document_conversion.convert(parms, function(err,answerUnits)
{
if (!err)
console.log('Returned '+answerUnits.length);
else
console.log('Error: '+JSON.stringify(err));
});
The result of running this program against the test PDF (782 KB) is:
$ node test.js
[DocumentConversion] WARNING: No version_date specified. Using a (possibly old) default. e.g. watson.document_conversion({ version_date: "2015-12-15" })
[DocumentConversion] WARNING: No version_date specified. Using a (possibly old) default. e.g. watson.document_conversion({ version_date: "2015-12-15" })
First 100 chars of file:
******************
%PDF-1.5
%����
1 0 obj
<</Type/Catalog/Pages 2 0 R/Lang(en-US) /StructTreeRoot 105 0 R/MarkInfo<<
******************
Error: {"code":400,"error":"Could not push back 82801 bytes in order to reparse stream. Try increasing push back buffer using system property org.apache.pdfbox.baseParser.pushBackSize"}
$
Can someone tell me:
1) How to get rid of the warning messages?
2) Why the document is not getting converted?
3) How do I "increase the push back buffer"?
Other documents give different errors, but I'm hoping if I can make this one work then the other errors will go away too.
1) You can get rid of the warning message by specifying a version date in your configuration. See the service's tests for an example.
2) If the document converts through the demo but fails to convert from your application, it is likely an error in how the binary data is passed to the service (for example, it's getting corrupted or truncated). You can look at the Node.js source code for the demo; it may help you find the mistake or suggest a different approach to loading and sending the file.
3) That error comes from one of the underlying libraries used by the service. Unfortunately, it's not something a caller can adjust at this point.
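Regarding point 2, one concrete thing to check (a sketch; untested against this service): fs.readFileSync with a 'utf8' encoding argument decodes the bytes as text, which is lossy for binary formats like PDF. Reading without an encoding returns a Buffer and keeps the bytes intact:

// Read the PDF as binary; no encoding argument means a Buffer is returned.
var fileBuffer = fs.readFileSync('./testdoc.pdf');

var parms = {
    file: fileBuffer,
    conversion_target: 'ANSWER_UNITS',
    content_type: 'application/pdf'
};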
I am trying to speed up "SELECT * FROM <table> WHERE name = ?"-style queries in a Play! + Scala app. I am using Play 2.4 + Scala 2.11 + the play-slick-1.1.1 package, which uses Slick 3.1.
My hypothesis was that Slick generates prepared statements from DBIO actions and executes them, so I tried to cache them by turning on the flag cachePrepStmts=true.
However, I still see "Preparing statement..." messages in the log, which means the prepared statements are not getting cached! How should one instruct Slick to cache them?
If I run the following code, shouldn't the prepared statement be cached at some point?
for (i <- 1 until 100) {
    Await.result(db.run(doctorsTable.filter(_.userName === name).result), 10 seconds)
}
Slick config is as follows:
slick.dbs.default {
    driver = "slick.driver.MySQLDriver$"
    db {
        driver = "com.mysql.jdbc.Driver"
        url = "jdbc:mysql://localhost:3306/staging_db?useSSL=false&cachePrepStmts=true"
        user = "user"
        password = "passwd"
        numThreads = 1 // For now, just one thread in HikariCP
        properties = {
            cachePrepStmts = true
            prepStmtCacheSize = 250
            prepStmtCacheSqlLimit = 2048
        }
    }
}
Update 1
I tried the following, as per @pawel's suggestion of using compiled queries:
val compiledQuery = Compiled { name: Rep[String] =>
    doctorsTable.filter(_.userName === name)
}

val stTime = TimeUtil.getUtcTime
for (i <- 1 until 100) {
    FutureUtils.blockFuture(db.run(compiledQuery(name).result), 10)
}
val endTime = TimeUtil.getUtcTime - stTime
Logger.info(s"Time Taken HERE $endTime")
In my logs I still see statements like:
2017-01-16 21:34:00,510 DEBUG [db-1] s.j.J.statement [?:?] Preparing statement: select ...
The timing also remains the same. What is the desired output? Should I no longer see these statements? How can I verify that prepared statements are indeed reused?
You need to use compiled queries; they do exactly what you want.
Just change the above code to:
val compiledQuery = Compiled { name: Rep[String] =>
    doctorsTable.filter(_.userName === name)
}

for (i <- 1 until 100) {
    Await.result(db.run(compiledQuery(name).result), 10 seconds)
}
I extracted name as a parameter above (because you usually want to vary some parameters in your prepared statements), but that's entirely optional.
For further information you can refer to: http://slick.lightbend.com/doc/3.1.0/queries.html#compiled-queries
For MySQL you need to set an additional JDBC flag, useServerPrepStmts=true.
HikariCP's MySQL configuration page links to a quite useful document that provides some simple performance-tuning options for the MySQL JDBC driver.
Here are a few that I've found useful; you'll need to append the options not exposed by Hikari's API to the JDBC URL with & (an example combined URL follows the list). Be sure to read through the linked document and/or the MySQL documentation for each option, but they should be mostly safe to use:
zeroDateTimeBehavior=convertToNull&characterEncoding=UTF-8
rewriteBatchedStatements=true
maintainTimeStats=false
cacheServerConfiguration=true
avoidCheckOnDuplicateKeyUpdateInSQL=true
dontTrackOpenResources=true
useLocalSessionState=true
cachePrepStmts=true
useServerPrepStmts=true
prepStmtCacheSize=500
prepStmtCacheSqlLimit=2048
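As an illustration, here is the JDBC URL from the question with these flags appended (which flags you include is your call):

jdbc:mysql://localhost:3306/staging_db?useSSL=false&cachePrepStmts=true&useServerPrepStmts=true&prepStmtCacheSize=500&prepStmtCacheSqlLimit=2048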
Also, note that statements are cached per connection; depending on what you set for Hikari's connection maxLifetime and what the server load is, memory usage will increase accordingly on both server and client (e.g. if you set the connection max lifetime to just under MySQL's default of 8 hours, both server and client will keep N prepared statements alive in memory for the life of each connection).
P.S. I'm curious whether the bottleneck is indeed statement caching or something specific to Slick.
Edit:
To log statements, enable the query log. On MySQL 5.7 you would add the following to your my.cnf:
general-log=1
general-log-file=/var/log/mysqlgeneral.log
Then sudo touch /var/log/mysqlgeneral.log and restart mysqld. To turn query logging off, comment out the config lines above and restart again.
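Alternatively, assuming you have the necessary privileges, the general log can be toggled at runtime without a restart:

SET GLOBAL general_log_file = '/var/log/mysqlgeneral.log';
SET GLOBAL general_log = 'ON';
-- ... and 'OFF' to turn it back off.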
I want Administrators to enable/disable logging at runtime by changing the enabled property of the LogEnabledFilter in the config.
There are several threads on SO that explain workarounds, but I want it this way.
I tried to change the Logging Enabled Filter like this:
private static void FileConfigurationSourceChanged(object sender, ConfigurationSourceChangedEventArgs e)
{
    var fcs = sender as FileConfigurationSource;
    System.Diagnostics.Debug.WriteLine("----------- FileConfigurationSourceChanged called --------");

    LoggingSettings currentLogSettings = e.ConfigurationSource.GetSection("loggingConfiguration") as LoggingSettings;
    var fdtl = currentLogSettings.TraceListeners.Where(tld => tld is FormattedDatabaseTraceListenerData).FirstOrDefault();
    var currentLogFileFilter = currentLogSettings.LogFilters.Where(lfd => { return lfd.Name == "Logging Enabled Filter"; }).FirstOrDefault();
    var filterNewValue = (bool)currentLogFileFilter.ElementInformation.Properties["enabled"].Value;

    var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
    runtimeFilter.Enabled = filterNewValue;

    var test = Logger.Writer.IsLoggingEnabled();
}
But test always reflects the initially loaded config value; it does not change.
I thought that when changing the value in the config file, the change would be propagated automatically to the runtime configuration. But this isn't the case!
Setting it programmatically, as shown in the code above, doesn't work either.
It's time to rebuild Enterprise Library or shut it down.
You are right that the code you posted does not work. That code is using a config file (FileConfigurationSource) as the method to configure Enterprise Library.
Let's dig a bit deeper and see if programmatic configuration will work.
We will use the Fluent API since it is the preferred method for programmatic configuration:
var builder = new ConfigurationSourceBuilder();
builder.ConfigureLogging()
    .WithOptions
        .DoNotRevertImpersonation()
    .FilterEnableOrDisable("EnableOrDisable").Enable()
    .LogToCategoryNamed("General")
        .WithOptions.SetAsDefaultCategory()
        .SendTo.FlatFile("FlatFile")
            .ToFile(@"fluent.log");

var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);

var defaultWriter = new LogWriterFactory(configSource).Create();
defaultWriter.Write("Test1", "General");

var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;
defaultWriter.Write("Test2", "General");
If you try this code the filter will not be updated -- so another failure.
Let's try to use the "old school" programmatic configuration by using the classes directly:
var flatFileTraceListener = new FlatFileTraceListener(
    @"program.log",
    "----------------------------------------",
    "----------------------------------------"
);

LogEnabledFilter enabledFilter = new LogEnabledFilter("Logging Enabled Filter", true);

// Build configuration
var config = new LoggingConfiguration();
config.AddLogSource("General", SourceLevels.All, true)
    .AddTraceListener(flatFileTraceListener);
config.Filters.Add(enabledFilter);

LogWriter defaultWriter = new LogWriter(config);
defaultWriter.Write("Test1", "General");

var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;
defaultWriter.Write("Test2", "General");
Success! The second ("Test2") message was not logged.
So, what is going on here? If we instantiate the filter ourselves and add it to the configuration it works but when relying on the Enterprise Library configuration the filter value is not updated.
This leads to a hypothesis: when using Enterprise Library configuration new filter instances are being returned each time which is why changing the value has no effect on the internal instance being used by Enterprise Library.
If we dig into the Enterprise Library code, we (eventually) hit the LoggingSettings class and its BuildLogWriter method, which is used to create the LogWriter. Here's where the filters are created:
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter());
So this line uses the configured LogFilterData and calls its BuildFilter method to instantiate the applicable filter. In this case, the BuildFilter method of the configuration class LogEnabledFilterData returns an instance of LogEnabledFilter:
return new LogEnabledFilter(this.Name, this.Enabled);
The issue with this code is that this.LogFilters.Select returns a lazily evaluated enumeration, and it is this enumeration that is passed into the LogWriter to be used for all filter manipulation. Every time the filters are referenced, the enumeration is re-evaluated and a new filter instance is created! This confirms the original hypothesis.
To make it explicit: every time LogWriter.Write() is called, a new LogEnabledFilter is created based on the original configuration. When the filters are queried by calling GetFilter(), a new LogEnabledFilter is created based on the original configuration. Any changes to the object returned by GetFilter() have no effect on the internal configuration, since it's a new object instance; and internally, Enterprise Library will create yet another new instance on the next Write() call anyway.
Not only is this plainly wrong, it is also inefficient: new objects are created on every call to Write(), which may be invoked many times.
An easy fix for this issue is to evaluate the LogFilters enumeration by calling ToList():
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter()).ToList();
This evaluates the enumeration only once, ensuring that only one filter instance is created. The GetFilter()-and-update approach posted in the question will then work.
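The underlying pitfall is easy to see in isolation; a standalone sketch (not Enterprise Library code):

using System;
using System.Linq;

class LazySelectDemo
{
    static void Main()
    {
        var data = new[] { "a", "b" };

        // Lazy: the Select projection runs again on every enumeration,
        // so each access produces fresh instances.
        var lazy = data.Select(s => new object());
        Console.WriteLine(ReferenceEquals(lazy.First(), lazy.First()));                 // False

        // Materialized: the projection runs once, so the same instances
        // are returned on every access.
        var materialized = data.Select(s => new object()).ToList();
        Console.WriteLine(ReferenceEquals(materialized.First(), materialized.First())); // True
    }
}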
Update:
Randy Levy provided a fix in his answer above: implement the fix and recompile Enterprise Library.
Here is the answer from Randy Levy:
Yes, you can disable logging by setting the LogEnabledFilter. The main way to do this would be to manually edit the configuration file -- this is the main intention of that functionality (the developer's guide references administrators tweaking this setting). Other similar approaches to setting the filter are to programmatically modify the original file-based configuration (which is essentially a reconfiguration of the block), or to reconfigure the block programmatically (e.g. using the fluent interface). None of the programmatic approaches are what I would call simple. – Randy Levy
If you try to get the filter and disable it, I don't think it has any effect without a reconfiguration. So the following code still ends up logging: var enabledFilter = logWriter.GetFilter<LogEnabledFilter>(); enabledFilter.Enabled = false; logWriter.Write("TEST"); One non-EntLib approach would be just to manage the enable/disable yourself with a bool property and a helper class. But I think the priority approach is a pretty straightforward alternative. – Randy Levy
Conclusion:
In your custom Logger class, implement an IsLoggingEnabled property and change/check it at runtime.
This won't work:
var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
runtimeFilter.Enabled = false/true;
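By contrast, a minimal sketch of the non-EntLib wrapper approach mentioned in the comments (all names here are illustrative, not part of Enterprise Library):

public static class AppLogger
{
    // Flip this at runtime, e.g. from an admin page or a config-watcher callback.
    public static volatile bool IsLoggingEnabled = true;

    public static void Write(string message, string category)
    {
        if (!IsLoggingEnabled) return;
        Logger.Write(message, category); // Enterprise Library's static facade
    }
}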
I wanted to use the --in-source-map option available in UglifyJS2, which consumes an existing source map file and outputs a new one along with the compressed JS, thereby performing multi-level source mapping. I am using r.js to minify the JavaScript files, and I saw there is a way to define config values that are passed on to UglifyJS2. Here is a sample showing how it is done in the r.js build file:
uglify2: {
    // Example of a specialized config. If you are fine
    // with the default options, no need to specify
    // any of these properties.
    output: {
        beautify: true
    },
    compress: {
        sequences: false,
        global_defs: {
            DEBUG: false
        }
    },
    warnings: true,
    mangle: false
},
Obviously r.js has its own way of structuring the config options, and I was not able to figure out how to set the --in-source-map option within this structure. I tried putting the following statement in the output element, in the compress element, and even outside, next to the warnings option:
in-source-map: sample.map
I have also tried putting quotes around the file name. Unfortunately, none of these attempts worked. Can anyone help me figure out this problem? Could it be that this option is simply not supported by r.js?
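For what it's worth, even at the plain JavaScript level the attempted line would not parse inside the build file: a hyphenated key must be quoted, and the value must be a string. A syntactically valid placement would look like the sketch below; whether r.js actually forwards this key to UglifyJS2 is exactly the open question:

uglify2: {
    output: {
        beautify: true
    },
    warnings: true,
    mangle: false,
    "in-source-map": "sample.map" // hypothetical key; quoting is required for hyphenated names
},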