I am using the Fluent API to handle various configuration options for Logging using EntLib.
I am building up the loggingConfiguration section manually in code. It seems to work great, except that the RollingFlatFileTraceListener doesn't actually roll the file. It respects the size limit and caps the amount of data it writes to the file, but it doesn't create a new file and continue logging.
I've tested the same configuration with a sample app and an app.config, and there it works. So I'm guessing I am missing something, although every config option that seems to be needed is there.
Here are the basics of the code (with hard-coded values to show a config that doesn't seem to be working):
//Create the config builder for the Fluent API
var configBuilder = new ConfigurationSourceBuilder();

//Start building the logging config section
var loggingConfigurationSection = new LoggingSettings("loggingConfiguration", true, "General");
loggingConfigurationSection.RevertImpersonation = false;

var rollingFileListener = new RollingFlatFileTraceListenerData(
    "Rolling Flat File Trace Listener", "C:\\tracelog.log", "----------------------", "",
    10, "MM/dd/yyyy", RollFileExistsBehavior.Increment,
    RollInterval.Day, TraceOptions.None,
    "Text Formatter", SourceLevels.All);
rollingFileListener.MaxArchivedFiles = 2;

//Add trace listener to current config
loggingConfigurationSection.TraceListeners.Add(rollingFileListener);

//Configure the category source section of config for the flat file
var rollingFileCategorySource = new TraceSourceData("General", SourceLevels.All);

//Must be named exactly the same as the rolling flat file trace listener above.
rollingFileCategorySource.TraceListeners.Add(new TraceListenerReferenceData("Rolling Flat File Trace Listener"));

//Add category source information to current config
loggingConfigurationSection.TraceSources.Add(rollingFileCategorySource);

//Add the loggingConfiguration section to the config
configBuilder.AddSection("loggingConfiguration", loggingConfigurationSection);

//Required code to update the EntLib configuration with the settings set above
var configSource = new DictionaryConfigurationSource();
configBuilder.UpdateConfigurationWithReplace(configSource);

//Set the Enterprise Library container for the inner workings of EntLib to use when logging
EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
Any help would be appreciated!
Your timestamp pattern is wrong. It should be yyyy-MM-dd instead of MM/dd/yyyy; the '/' character is not supported, because the timestamp pattern becomes part of the archived file name and '/' is not valid in a file name.
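For the code in the question, that means changing only the timestamp-pattern argument; a minimal sketch, reusing the constructor arguments from the question:
//Same listener as in the question, but with a file-name-safe timestamp pattern.
var rollingFileListener = new RollingFlatFileTraceListenerData(
    "Rolling Flat File Trace Listener", "C:\\tracelog.log", "----------------------", "",
    10, "yyyy-MM-dd", //was "MM/dd/yyyy"; '/' cannot appear in the archived file name
    RollFileExistsBehavior.Increment,
    RollInterval.Day, TraceOptions.None,
    "Text Formatter", SourceLevels.All);
rollingFileListener.MaxArchivedFiles = 2;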
Also, you could accomplish your objective much more easily by using the fluent configuration interface. Here's how:
ConfigurationSourceBuilder builder = new ConfigurationSourceBuilder();
builder.ConfigureLogging()
    .LogToCategoryNamed("General")
        .SendTo.RollingFile("Rolling Flat File Trace Listener")
            .CleanUpArchivedFilesWhenMoreThan(2)
            .WhenRollFileExists(RollFileExistsBehavior.Increment)
            .WithTraceOptions(TraceOptions.None)
            .RollEvery(RollInterval.Minute)
            .RollAfterSize(10)
            .UseTimeStampPattern("yyyy-MM-dd")
            .ToFile("C:\\logs\\Trace.log")
            .FormatWith(new FormatterBuilder().TextFormatterNamed("textFormatter"));
var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
var writer = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
DateTime stopWritingTime = DateTime.Now.AddMinutes(10);
while (DateTime.Now < stopWritingTime)
{
writer.Write("test", "General");
}
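Because the listener is set to roll every minute (and after 10 KB) while the loop writes for ten minutes, several rolled files should appear under C:\logs as it runs; since the date-only yyyy-MM-dd timestamp doesn't change within a day, the Increment behavior should distinguish the archives by appending a sequence number.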
New to Mirth, not new to engines... and I'm finding it a bit challenging to do a basic source-to-destination HL7v2 transformation.
I've set up my Channel to read from a file as the source, and spit out the destination to a file as well. My output template is ${message.encodedData}. The channel seems to be reading the source correctly, and generating an output. But what I'm struggling with is how cumbersome this is.
I'm playing with an HL7 SIU message; my source has a lot more fields than the destination wants to receive, and I just need a simple way to map the few fields that are required.
I inserted the source system message template into the Destination Transformer's Inbound Message Templates, and then I'm doing the following, which seems to work:
//MSH Segment
if (msg['MSH'][0]) {
    var MSH1 = msg['MSH']['MSH.1'];
    var MSH2 = msg['MSH']['MSH.2'];
    var MSH7 = msg['MSH']['MSH.7'];
    var MSH9 = msg['MSH']['MSH.9'];
    msg['MSH'] = '';
    msg['MSH']['MSH.1'] = MSH1;
    msg['MSH']['MSH.2'] = MSH2;
    msg['MSH']['MSH.7'] = MSH7;
    msg['MSH']['MSH.9'] = MSH9;
}
Rinse and repeat for the segments that I need; this seems very painful to me.
On a second destination, I'm trying to leverage the Inbound and Outbound Message Templates. I inserted the source system template as above, and inserted the destination system template in the Outbound Message Template.
My Javascript for that one looks something like this:
//MSH Segment
if (msg['MSH'][0]) {
    tmp['MSH'] = "";
    tmp['MSH']['MSH.1'] = msg['MSH']['MSH.1'];
    tmp['MSH']['MSH.2'] = msg['MSH']['MSH.2'];
    tmp['MSH']['MSH.7'] = msg['MSH']['MSH.7'];
    tmp['MSH']['MSH.9'] = msg['MSH']['MSH.9'];
}
It's cleaner, but it doesn't seem to work properly: some of my source messages don't have a PV1 segment, yet the output still contains the sample PV1 segment from the Outbound Message Template. Do I need an initial statement like tmp = '';?
There has to be an easier way to accomplish what I'm trying here; any advice is appreciated!
I eventually figured out a different route: I removed the outbound template entirely and built the outbound message from scratch. Here's a snapshot of what it looks like.
var output = <HL7Message/>;
//MSH Segment
createSegment('MSH',output);
output.MSH['MSH.1'] = msg['MSH']['MSH.1'];
output.MSH['MSH.2'] = msg['MSH']['MSH.2'];
output.MSH['MSH.7'] = msg['MSH']['MSH.7'];
output.MSH['MSH.9'] = msg['MSH']['MSH.9'];
//SCH Segment
if (msg['SCH'][0]) {
    createSegment('SCH', output);
    output.SCH['SCH.1'] = msg['SCH']['SCH.1'];
    output.SCH['SCH.2'] = msg['SCH']['SCH.2'];
    output.SCH['SCH.6'] = msg['SCH']['SCH.6'];
    output.SCH['SCH.7'] = msg['SCH']['SCH.7'];
    output.SCH['SCH.8'] = msg['SCH']['SCH.8'];
    output.SCH['SCH.11'] = msg['SCH']['SCH.11'];
    output.SCH['SCH.12'] = msg['SCH']['SCH.12'];
    output.SCH['SCH.16'] = msg['SCH']['SCH.16'];
    output.SCH['SCH.25'] = msg['SCH']['SCH.25'];
}
var message = SerializerFactory.getSerializer('HL7V2').fromXML(output);
channelMap.put('outmsg',message);
And then in my destination, I use ${outmsg} for the Template.
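A side benefit of building the outbound message from scratch like this is that a segment is only ever created when the corresponding source segment exists, so the leftover sample PV1 segment from the outbound-template approach can't show up in the output.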
I use the following configuration for my logger, in the conf file:
log4cplus.appender.TestLogAppender = log4cplus::TimeBasedRollingFileAppender
log4cplus.appender.TestLogAppender.FilenamePattern = %d{yyyyMMdd}.log
log4cplus.appender.TestLogAppender.MaxHistory = 365
log4cplus.appender.TestLogAppender.Schedule = DAILY
log4cplus.appender.TestLogAppender.RollOnClose = false
log4cplus.appender.TestLogAppender.layout = log4cplus::PatternLayout
log4cplus.appender.TestLogAppender.layout.ConversionPattern = %m%n
And in my code I have the following logger-initialization function, in which I first load the configuration file and then want to set the 'FilenamePattern' property to a new value, so that when I run multiple applications, each application writes to its own log file:
void InitLogger()
{
    //Configure from the file first...
    log4cplus::PropertyConfigurator::doConfigure(L"LogConf.conf");

    //...then try to override the filename pattern.
    log4cplus::helpers::Properties props(L"LogConf.conf");
    props.setProperty(L"log4cplus.appender.TestLogAppender.FilenamePattern",
                      L"Log/AppLogName.log.%d{yyyy-MM-dd}");
}
The problem is that even when I run a single application, the log messages are written to the file named by the original configuration file's 'FilenamePattern' property.
It seems 'setProperty' didn't set the new value I gave it.
Is there a problem with my logger-initialization function?
Have I used the setProperty method wrong?
You are obviously changing the properties after you have already configured the system, so your changes will be ignored. Do this instead:
log4cplus::helpers::Properties props(L"LogConf.conf");
props.setProperty(L"log4cplus.appender.TestLogAppender.FilenamePattern",
                  L"Log/AppLogName.log.%d{yyyy-MM-dd}");
log4cplus::PropertyConfigurator propConf(props);
propConf.configure();
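With this order, the FilenamePattern override is already present in the Properties object when the configurator runs, so each application can set its own pattern (for example, one embedding the application's name) before any appender is created.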
We have a lot of content that needs to be imported into AEM.
What is the best way to import it? Is there any possibility to import it from an Excel file?
Check this example: the Bulk Editor at /etc/importers/bulkeditor.html (on a local instance, http://localhost:4502/etc/importers/bulkeditor.html). It is also a good example for exporting: you can export a file with the "Properties / Columns" you define, scoped to a Root Path.
I tried this package, but it does not contain what I'd like: https://helpx.adobe.com/experience-manager/using/creating-custom-excel-service-experience.html
See also the Bulk Editor documentation.
I just made a test following the instructions on the link above and it worked.
My test:
Root path = /content/myapp-path/rootpage
Query parameters = "jcr:title": title of the pages that I want to include in the search
Content mode = unchecked
Properties / Columns = sling:resourceType and jcr:title
Custom properties / Columns = landingTags
Clicked Search... and it worked.
Importing data into AEM can be done in lots of ways. What the "right" way is for you depends on your specific requirements:
Is this a one-time import, or are you rather planning to write a reusable import tool?
Is the import done by programmers or admins, or rather by editors?
Do you need fault tolerance or a rollback? Are you working on production instances?
...
Here are a few of the more or less common ways (ordered from cheap/fast to extensive/comfortable), along with links to documentation or examples:
Write a bash script and post values with cURL requests (1).
Upload the data to the DAM, write an EventHandler (2), (7) or a workflow (3) with a workflow launcher (4), parse the data, and then change the repository.
Upload the data as a separate file in your own component with a file upload section (5), parse the data, and change the repository.
For parsing Excel data I would suggest Apache POI (6), which is already included in AEM. But using formats like CSV, JSON, or XML may save you a lot of parsing effort.
(1): http://www.aemcq5tutorials.com/tutorials/adobe-cq5-aem-curl-commands/
(2): https://osgi.org/javadoc/r4v42/index.html?org/osgi/service/event/EventHandler.html
(3): https://docs.adobe.com/docs/en/aem/6-1/develop/extending/workflows/wf-extending.html
(4): https://docs.adobe.com/docs/en/aem/6-1/administer/operations/workflows/wf-start.html
(5): https://helpx.adobe.com/experience-manager/using/uploading-files-aem1.html
(6): https://poi.apache.org/spreadsheet/index.html
(7): code example
@Service
@Component(immediate = true, policy = ConfigurationPolicy.OPTIONAL,
        description = "Listen to page modification events and track them.")
@Properties(value = {
        @Property(name = "event.topics", value = { PageEvent.EVENT_TOPIC, DamEvent.EVENT_TOPIC }, propertyPrivate = true),
        @Property(name = JobConsumer.PROPERTY_TOPICS, value = ModificationEventHandler.JOB_TOPICS, propertyPrivate = true) })
public class ModificationEventHandler implements EventHandler, JobConsumer {

    @Override
    public void handleEvent(Event event) {
        logger.trace("Checking event.");
        PageEvent pageEvent = PageEvent.fromEvent(event);
        DamEvent damEvent = DamEvent.fromEvent(event);
        Map<String, Object> properties = new HashMap<>();
        if (damEvent != null) {
            //DamEvent is not serializable, so we cannot add the complete event to the map.
            logger.trace("Event on {} is a dam event ({}).", damEvent.getAssetPath(), damEvent.getType().name());
            properties.put(DAM_EVENT_ASSET_PATH, damEvent.getAssetPath());
        }
        if (pageEvent != null) {
            logger.trace("Event is a page event.");
            properties.put(PAGE_EVENT, pageEvent);
        }
        logger.trace("Adding new job.");
        jobManager.addJob(JOB_TOPICS, properties);
    }

    //(process(Job) from JobConsumer and the logger/jobManager fields are omitted here)
}
Content can be imported via the SlingPostServlet with :operation=import: https://sling.apache.org/documentation/bundles/manipulating-content-the-slingpostservlet-servlets-post.html#importing-content-structures. Here is an example adapted from the page:
curl -u admin:admin http://localhost:4502/content/mysite/mypage \
  -F":operation=import" \
  -F":contentType=json" \
  -F":name=sample" \
  -F":content={ 'jcr:primaryType': 'nt:unstructured', 'propOne' : 'propOneValue', 'childOne' : { 'childPropOne' : true } }"
Another author-friendly option is the CSV Asset Importer included with the ACS AEM Tools package. You can save an Excel file as CSV, so this should be the easiest option.
Is there a way to configure Apache Tika so that it only extracts the metadata properties from a file and does not access the file's content? We need a way to do this to avoid reading the entire content of larger files.
The code we are using to extract is as follows:
var tikaConfig = TikaConfig.getDefaultConfig();
var metadata = new Metadata();
AutoDetectParser parser = new AutoDetectParser(tikaConfig);
BodyContentHandler handler = new BodyContentHandler();
using (TikaInputStream stream = TikaInputStream.get(new File(filename), metadata))
{
parser.parse(stream, handler, metadata, new ParseContext());
Array metadataKeys = metadata.names();
Array.Sort(metadataKeys);
}
With the above code sample, when we try to extract the metadata, even the content is read. We need a way to avoid that.
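One possibility (an untested sketch, assuming the same IKVM-style Tika bindings as the code above): pass a no-op SAX handler such as org.xml.sax.helpers.DefaultHandler instead of a BodyContentHandler, so the extracted text is discarded rather than buffered in memory. Note that most parsers still read through the stream to reach the metadata, so this mainly saves memory, not necessarily I/O:
var metadata = new Metadata();
var parser = new AutoDetectParser(TikaConfig.getDefaultConfig());
using (TikaInputStream stream = TikaInputStream.get(new File(filename), metadata))
{
    //DefaultHandler ignores all content events; the Metadata object is still populated.
    parser.parse(stream, new DefaultHandler(), metadata, new ParseContext());
}
Array metadataKeys = metadata.names();
Array.Sort(metadataKeys);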
I have a custom template in which I'd like to control (as best I can) the types of content that can exist in a document. To that end, I disable controls, and I also intercept pastes to remove some of those content types, e.g. charts. I am aware that this content can also be drag-and-dropped, so I also check for it later, but I'd prefer to stop or warn the user as soon as possible.
I have tried a few strategies:
RTF manipulation
Open XML manipulation
RTF manipulation is so far working fairly well, but I'd really prefer to use Open XML as I expect it to be more useful in the future. I just can't get it working.
Open XML Manipulation
The wonderfully-undocumented (as far as I can tell) "Embed Source" appears to contain a compound document object, which I can use to modify the copied content using the Open XML SDK. But I have been unable to put the modified content back into an object that lets it be pasted correctly.
The modification part seems to work fine. I can see, if I save the modified content to a temporary .docx file, that the changes are being made correctly. It's the return to the clipboard that seems to be giving me trouble.
I have tried assigning just the Embed Source object back to the clipboard (so that the other types such as RTF get wiped out), and in this case nothing at all gets pasted. I've also tried re-assigning the Embed Source object back to the clipboard's data object, so that the remaining data types are still there (but with mismatched content, probably), which results in an empty embedded document getting pasted.
Here's a sample of what I'm doing with Open XML:
using System.IO;
using Forms = System.Windows.Forms;
using OpenMcdf;
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;
...
var dataObj = Forms.Clipboard.GetDataObject();
object embedSrcObj = dataObj.GetData("Embed Source");
if (embedSrcObj is Stream)
{
// read it with OpenMCDF
Stream stream = embedSrcObj as Stream;
CompoundFile cf = new CompoundFile(stream);
CFStream cfs = cf.RootStorage.GetStream("Package"); //note: must match the name used when re-adding the stream below
byte[] bytes = cfs.GetData();
string savedDoc = Path.GetTempFileName() + ".docx";
File.WriteAllBytes(savedDoc, bytes);
// And then use the OpenXML SDK to read/edit the document:
using (WordprocessingDocument openDoc = WordprocessingDocument.Open(savedDoc, true))
{
OpenXmlElement body = openDoc.MainDocumentPart.RootElement.ChildElements[0];
foreach (OpenXmlElement ele in body.ChildElements)
{
if (ele is Paragraph)
{
Paragraph para = (Paragraph)ele;
if (para.ParagraphProperties != null && para.ParagraphProperties.ParagraphStyleId != null)
{
string styleName = para.ParagraphProperties.ParagraphStyleId.Val;
Run run = para.LastChild as Run; // I know I'm assuming things here but it's sufficient for a test case
run.RunProperties = new RunProperties();
run.AppendChild(new DocumentFormat.OpenXml.Wordprocessing.Text("test")); //Text belongs to the Run itself, not to RunProperties
}
}
// etc.
}
openDoc.MainDocumentPart.Document.Save(); // I think this is redundant in later versions than what I'm using
}
// repackage the document
bytes = File.ReadAllBytes(savedDoc);
cf.RootStorage.Delete("Package");
cfs = cf.RootStorage.AddStream("Package");
cfs.Append(bytes);
MemoryStream ms = new MemoryStream();
cf.Save(ms);
ms.Position = 0;
dataObj.SetData("Embed Source", ms);
// or,
// Clipboard.SetData("Embed Source", ms);
}
Question
What am I doing wrong? Is this just a bad/unworkable approach?