Import content to AEM

We have a lot of content that needs to be imported into AEM.
What is the best way to import it? Are there any possibilities to import from an Excel file?
Check this example: a good one for exporting is the Bulk Editor at /etc/importers/bulkeditor.html, where we can export a file with single "Properties / Columns" entries after defining the Root Path and Properties.
I tried this package, but it does not contain what I am looking for: https://helpx.adobe.com/experience-manager/using/creating-custom-excel-service-experience.html

http://localhost:4502/etc/importers/bulkeditor.html
I just ran a test following the instructions in the documentation linked above and it worked.
My test:
Root path = /content/myapp-path/rootpage
Query parameters = "jcr:title": title of the pages that I want to include in the search
Content mode = unchecked
Properties / Columns =
sling:resourceType and
jcr:title
Custom properties / Columns =
landingTags
Clicked Search... and it worked.

Importing data into AEM can be done in many ways.
Which way is "right" for you depends on your specific requirements:
Is this a one-time import, or are you planning to write a reusable import tool?
Is the import done by programmers or admins, or rather by editors?
Do you need fault tolerance or a rollback? Are you working on production instances?
...
Here are a few of the more or less common ways (ordered from cheap/fast to extensive/comfortable) along with links to the documentation or examples:
Write a bash-script and post values with cURL requests (1)
Upload data in DAM, write EventHandler (2), (7) or a workflow (3) with a workflow launcher (4) and parse the data, afterwards change the repository.
Upload data in separate file in an own component with a file upload section (5), parse the data and change the repository.
For parsing Excel data I would suggest Apache POI (6), which is already included in AEM (a minimal reading sketch follows the code example below). Using formats like CSV, JSON, or XML may save you a lot of parsing effort.
(1): http://www.aemcq5tutorials.com/tutorials/adobe-cq5-aem-curl-commands/
(2): https://osgi.org/javadoc/r4v42/index.html?org/osgi/service/event/EventHandler.html
(3): https://docs.adobe.com/docs/en/aem/6-1/develop/extending/workflows/wf-extending.html
(4): https://docs.adobe.com/docs/en/aem/6-1/administer/operations/workflows/wf-start.html
(5): https://helpx.adobe.com/experience-manager/using/uploading-files-aem1.html
(6): https://poi.apache.org/spreadsheet/index.html
(7): code example (below):
@Service
@Component(immediate = true, policy = ConfigurationPolicy.OPTIONAL, description = "Listen to page modification events and track them.")
@Properties(value = {
        @Property(name = "event.topics", value = { PageEvent.EVENT_TOPIC, DamEvent.EVENT_TOPIC }, propertyPrivate = true),
        @Property(name = JobConsumer.PROPERTY_TOPICS, value = ModificationEventHandler.JOB_TOPICS, propertyPrivate = true) })
public class ModificationEventHandler implements EventHandler, JobConsumer {
    @Override
    public void handleEvent(Event event) {
        logger.trace("Checking event.");
        PageEvent pageEvent = PageEvent.fromEvent(event);
        DamEvent damEvent = DamEvent.fromEvent(event);
        Map<String, Object> properties = new HashMap<>();
        if (damEvent != null) {
            // DamEvent is not serializable, so we cannot add the complete event to the map.
            logger.trace("Event on {} is a dam event ({}).", damEvent.getAssetPath(), damEvent.getType().name());
            properties.put(DAM_EVENT_ASSET_PATH, damEvent.getAssetPath());
        }
        if (pageEvent != null) {
            logger.trace("Event is a page event.");
            properties.put(PAGE_EVENT, pageEvent);
        }
        logger.trace("Adding new job.");
        jobManager.addJob(JOB_TOPICS, properties);
    }
}

Content can be imported via the SlingPostServlet with :operation=import: https://sling.apache.org/documentation/bundles/manipulating-content-the-slingpostservlet-servlets-post.html#importing-content-structures. Here is an example adapted from the page:
curl -u admin:admin http://localhost:4502/content/mysite/mypage \
-F":operation=import" \
-F":contentType=json"
-F":name=sample" \
-F":content={ 'jcr:primaryType': 'nt:unstructured', 'propOne' : 'propOneValue', 'childOne' : { 'childPropOne' : true } }"
Another author-friendly option is the CSV Asset Importer included in the ACS AEM Tools package. Since you can save an Excel file as CSV, this may be the easiest option.

Related

Metadata-Extractor -- Missing List of Tags?

I'm using metadata-extractor to retrieve metadata from video files. I have it successfully retrieving the directories. Now I need to query the directories for specific info -- duration, height, etc.
The metadata-extractor docs give this example of how to query for a specific tag value:
// obtain the Exif directory
ExifSubIFDDirectory directory = metadata.getFirstDirectoryOfType(ExifSubIFDDirectory.class);
// query the tag's value
Date date = directory.getDate(ExifSubIFDDirectory.TAG_DATETIME_ORIGINAL);
So it appears I need to get a list of the relevant tags, such as TAG_DATETIME_ORIGINAL, for duration, height, etc.
This page in the metadata-extractor docs contains a link titled "the various tag values", but the page it goes to lists tags for still images only, not for video files.
Googling for "Metadata-Extractor complete list of all tags" does not seem to bring one up either.
Are the metadata-extractor docs really missing a list of tags, or am I approaching this the wrong way somehow?
I found a list of tags at:
https://developer.tizen.org/dev-guide/2.3.1/org.tizen.guides/html/native/multimedia/metadata_extractor_n.htm
However, those constants don't seem to be what's needed in actual code. Here's Java code that works:
import com.drew.imaging.ImageMetadataReader;
import com.drew.metadata.Directory;
import com.drew.metadata.Metadata;
import com.drew.metadata.Tag;
import com.drew.metadata.file.FileTypeDirectory;
import com.drew.metadata.mp4.Mp4Directory;
import com.drew.metadata.mp4.media.Mp4SoundDirectory;
import com.drew.metadata.mp4.media.Mp4VideoDirectory;
[.....]
Metadata theMetadata = null;
try {
    InputStream stream = new URL(theVideoInfo.getLinkToVideo()).openStream();
    theMetadata = ImageMetadataReader.readMetadata(stream);
} catch (java.lang.Exception exception) {
    exception.printStackTrace();
}
Mp4SoundDirectory soundDirectory = theMetadata.getFirstDirectoryOfType(Mp4SoundDirectory.class);
Mp4VideoDirectory videoDirectory = theMetadata.getFirstDirectoryOfType(Mp4VideoDirectory.class);
Mp4Directory mp4Directory = theMetadata.getFirstDirectoryOfType(Mp4Directory.class);
FileTypeDirectory fileTypeDirectory = theMetadata.getFirstDirectoryOfType(FileTypeDirectory.class);
String numberOfAudioChannels = soundDirectory.getString(Mp4SoundDirectory.TAG_NUMBER_OF_CHANNELS);
String duration = mp4Directory.getString(Mp4Directory.TAG_DURATION);
String frameRate = videoDirectory.getString(Mp4VideoDirectory.TAG_FRAME_RATE);
String height = videoDirectory.getString(Mp4VideoDirectory.TAG_HEIGHT);
String width = videoDirectory.getString(Mp4VideoDirectory.TAG_WIDTH);
String type = fileTypeDirectory.getString(FileTypeDirectory.TAG_DETECTED_FILE_MIME_TYPE);
I found the constants (TAG_HEIGHT, TAG_WIDTH, etc.) by directly examining the metadata-extractor objects in the debugger. For example, I'd type:
Mp4VideoDirectory.WIDTH
...and the debugger (IntelliJ) would auto-complete the available constants that had the text "WIDTH" in them.
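If you'd rather not hunt for constants in the debugger, a small sketch like the following (using the Directory and Tag types already imported above, and the theMetadata object from the working code) prints every directory and tag that metadata-extractor actually found, which doubles as the missing tag list for your particular file:

// Enumerate all extracted directories and tags with their resolved values,
// e.g. "MP4 Video / Width = 1280 pixels"
for (Directory directory : theMetadata.getDirectories()) {
    for (Tag tag : directory.getTags()) {
        System.out.println(directory.getName() + " / " + tag.getTagName() + " = " + tag.getDescription());
    }
}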

Apache Tika - Parsing and extracting only metadata without reading content

Is there a way to configure Apache Tika so that it only extracts the metadata properties from a file and does not access its content? We need this to avoid reading the entire content of larger files.
The code to extract we are using is as follows:
var tikaConfig = TikaConfig.getDefaultConfig();
var metadata = new Metadata();
AutoDetectParser parser = new AutoDetectParser(tikaConfig);
BodyContentHandler handler = new BodyContentHandler();
using (TikaInputStream stream = TikaInputStream.get(new File(filename), metadata))
{
    parser.parse(stream, handler, metadata, new ParseContext());
    Array metadataKeys = metadata.names();
    Array.Sort(metadataKeys);
}
With the above code, the content is read (and buffered by the BodyContentHandler) along with the metadata. We need a way to avoid that.
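One option, sketched below against the Java Tika API (your .NET port should mirror these calls): pass a handler that discards content events instead of buffering them. Note the caveat that Tika still has to stream through the file to locate metadata; this sketch only avoids accumulating the content in memory:

import java.io.File;
import org.apache.tika.io.TikaInputStream;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.xml.sax.helpers.DefaultHandler;

public class MetadataOnly {
    public static void main(String[] args) throws Exception {
        AutoDetectParser parser = new AutoDetectParser();
        Metadata metadata = new Metadata();
        try (TikaInputStream stream = TikaInputStream.get(new File(args[0]), metadata)) {
            // DefaultHandler ignores all SAX content events, so no text is kept;
            // only the Metadata object is populated.
            parser.parse(stream, new DefaultHandler(), metadata, new ParseContext());
        }
        for (String name : metadata.names()) {
            System.out.println(name + " = " + metadata.get(name));
        }
    }
}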

How to edit pasted content using the Open XML SDK

I have a custom template in which I'd like to control (as best I can) the types of content that can exist in a document. To that end, I disable controls, and I also intercept pastes to remove some of those content types, e.g. charts. I am aware that this content can also be drag-and-dropped, so I also check for it later, but I'd prefer to stop or warn the user as soon as possible.
I have tried a few strategies:
RTF manipulation
Open XML manipulation
RTF manipulation is so far working fairly well, but I'd really prefer to use Open XML as I expect it to be more useful in the future. I just can't get it working.
Open XML Manipulation
The wonderfully-undocumented (as far as I can tell) "Embed Source" appears to contain a compound document object, which I can use to modify the copied content using the Open XML SDK. But I have been unable to put the modified content back into an object that lets it be pasted correctly.
The modification part seems to work fine. I can see, if I save the modified content to a temporary .docx file, that the changes are being made correctly. It's the return to the clipboard that seems to be giving me trouble.
I have tried assigning just the Embed Source object back to the clipboard (so that the other types such as RTF get wiped out), and in this case nothing at all gets pasted. I've also tried re-assigning the Embed Source object back to the clipboard's data object, so that the remaining data types are still there (but with mismatched content, probably), which results in an empty embedded document getting pasted.
Here's a sample of what I'm doing with Open XML:
using OpenMcdf;
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;
...
Forms.IDataObject dataObj = Forms.Clipboard.GetDataObject();
object embedSrcObj = dataObj.GetData("Embed Source");
if (embedSrcObj is Stream)
{
    // read it with OpenMCDF
    Stream stream = embedSrcObj as Stream;
    CompoundFile cf = new CompoundFile(stream);
    CFStream cfs = cf.RootStorage.GetStream("package");
    byte[] bytes = cfs.GetData();
    string savedDoc = Path.GetTempFileName() + ".docx";
    File.WriteAllBytes(savedDoc, bytes);
    // And then use the OpenXML SDK to read/edit the document:
    using (WordprocessingDocument openDoc = WordprocessingDocument.Open(savedDoc, true))
    {
        OpenXmlElement body = openDoc.MainDocumentPart.RootElement.ChildElements[0];
        foreach (OpenXmlElement ele in body.ChildElements)
        {
            if (ele is Paragraph)
            {
                Paragraph para = (Paragraph)ele;
                if (para.ParagraphProperties != null && para.ParagraphProperties.ParagraphStyleId != null)
                {
                    string styleName = para.ParagraphProperties.ParagraphStyleId.Val;
                    Run run = para.LastChild as Run; // I know I'm assuming things here but it's sufficient for a test case
                    run.RunProperties = new RunProperties();
                    run.RunProperties.AppendChild(new DocumentFormat.OpenXml.Wordprocessing.Text("test"));
                }
            }
            // etc.
        }
        openDoc.MainDocumentPart.Document.Save(); // I think this is redundant in later versions than what I'm using
    }
    // repackage the document
    bytes = File.ReadAllBytes(savedDoc);
    cf.RootStorage.Delete("Package");
    cfs = cf.RootStorage.AddStream("Package");
    cfs.Append(bytes);
    MemoryStream ms = new MemoryStream();
    cf.Save(ms);
    ms.Position = 0;
    dataObj.SetData("Embed Source", ms);
    // or,
    // Clipboard.SetData("Embed Source", ms);
}
Question
What am I doing wrong? Is this just a bad/unworkable approach?

How to store and compare annotation (with Gold Standard) in GATE

I am very comfortable with UIMA, but my new work requires me to use GATE, so I started learning it. My question is about how to calculate the performance of my (Java-based) tagging engines.
With UIMA, I generally dump all my system annotations into an XMI file and then use Java code to compare them against human-annotated (gold standard) annotations to calculate precision, recall, and F-score.
I am still struggling to find something similar in GATE. After going through GATE Annotation-Diff and other info on that page, I feel there has to be an easy way to do it in Java, but I am not able to figure out how. I thought I would post this question here, as someone might have already figured it out.
How do I store system annotations in XMI or any other file format programmatically?
How do I create one-time gold standard data (i.e., human-annotated data) for performance calculation?
Let me know if you need more specific details.
This code seems helpful for writing the annotations to an XML file:
http://gate.ac.uk/wiki/code-repository/src/sheffield/examples/BatchProcessApp.java
String docXMLString = null;
// if we want to just write out specific annotation types, we must
// extract the annotations into a Set
if (annotTypesToWrite != null) {
    // Create a temporary Set to hold the annotations we wish to write out
    Set annotationsToWrite = new HashSet();
    // we only extract annotations from the default (unnamed) AnnotationSet
    // in this example
    AnnotationSet defaultAnnots = doc.getAnnotations();
    Iterator annotTypesIt = annotTypesToWrite.iterator();
    while (annotTypesIt.hasNext()) {
        // extract all the annotations of each requested type and add them to
        // the temporary set
        AnnotationSet annotsOfThisType = defaultAnnots.get((String) annotTypesIt.next());
        if (annotsOfThisType != null) {
            annotationsToWrite.addAll(annotsOfThisType);
        }
    }
    // create the XML string using these annotations
    docXMLString = doc.toXml(annotationsToWrite);
}
// otherwise, just write out the whole document as GateXML
else {
    docXMLString = doc.toXml();
}
// Release the document, as it is no longer needed
Factory.deleteResource(doc);
// output the XML to <inputFile>.out.xml
String outputFileName = docFile.getName() + ".out.xml";
File outputFile = new File(docFile.getParentFile(), outputFileName);
// Write output files using the same encoding as the original
FileOutputStream fos = new FileOutputStream(outputFile);
BufferedOutputStream bos = new BufferedOutputStream(fos);
OutputStreamWriter out;
if (encoding == null) {
    out = new OutputStreamWriter(bos);
} else {
    out = new OutputStreamWriter(bos, encoding);
}
out.write(docXMLString);
out.close();
System.out.println("done");
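For the comparison step itself, GATE's gate.util.AnnotationDiffer (the class behind the Annotation Diff GUI) can be driven directly from Java. Here is a minimal sketch; the annotation type "Person" and the "Key" set holding the gold annotations are illustrative assumptions, not part of the question:

// compare system output (default set) against a gold standard ("Key" set)
AnnotationSet gold = doc.getAnnotations("Key").get("Person");
AnnotationSet system = doc.getAnnotations().get("Person");
AnnotationDiffer differ = new AnnotationDiffer();
differ.setSignificantFeaturesSet(new HashSet<String>()); // empty set: compare spans only
differ.calculateDiff(gold, system); // key set first, response set second
System.out.println("Precision: " + differ.getPrecisionStrict());
System.out.println("Recall:    " + differ.getRecallStrict());
System.out.println("F1:        " + differ.getFMeasureStrict(1.0));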

Enterprise Library Fluent API and Rolling Log Files Not Rolling

I am using the Fluent API to handle various configuration options for logging with EntLib.
I am building up the loggingConfiguration section manually in code. It seems to work great, except that the RollingFlatFileTraceListener doesn't actually roll the file. It respects the size limit and caps the amount of data it writes to the file, but it does not create a new file and continue logging.
I've tested it with a sample app and the app.config, and that works. So I'm guessing that I am missing something, although every config option it seems to need is there.
Here is the basics of the code (with hard-coded values to show a config that doesn't seem to be working):
//Create the config builder for the Fluent API
var configBuilder = new ConfigurationSourceBuilder();
//Start building the logging config section
var loggingConfigurationSection = new LoggingSettings("loggingConfiguration", true, "General");
loggingConfigurationSection.RevertImpersonation = false;
var _rollingFileListener = new RollingFlatFileTraceListenerData("Rolling Flat File Trace Listener", "C:\\tracelog.log", "----------------------", "",
    10, "MM/dd/yyyy", RollFileExistsBehavior.Increment,
    RollInterval.Day, TraceOptions.None,
    "Text Formatter", SourceLevels.All);
_rollingFileListener.MaxArchivedFiles = 2;
//Add trace listener to current config
loggingConfigurationSection.TraceListeners.Add(_rollingFileListener);
//Configure the category source section of config for flat file
var _rollingFileCategorySource = new TraceSourceData("General", SourceLevels.All);
//Must be named exactly the same as the flat file trace listener above.
_rollingFileCategorySource.TraceListeners.Add(new TraceListenerReferenceData("Rolling Flat File Trace Listener"));
//Add category source information to current config
loggingConfigurationSection.TraceSources.Add(_rollingFileCategorySource);
//Add the loggingConfiguration section to the config.
configBuilder.AddSection("loggingConfiguration", loggingConfigurationSection);
//Required code to update the EntLib configuration with the settings set above.
var configSource = new DictionaryConfigurationSource();
configBuilder.UpdateConfigurationWithReplace(configSource);
//Set the Enterprise Library container for the inner workings of EntLib to use when logging
EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
Any help would be appreciated!
Your timestamp pattern is wrong. It should be yyyy-MM-dd instead of MM/dd/yyyy; the '/' character is not supported.
Also, you could accomplish your objective much more easily using the fluent configuration interface. Here's how:
ConfigurationSourceBuilder builder = new ConfigurationSourceBuilder();
builder.ConfigureLogging()
    .LogToCategoryNamed("General")
    .SendTo.RollingFile("Rolling Flat File Trace Listener")
        .CleanUpArchivedFilesWhenMoreThan(2)
        .WhenRollFileExists(RollFileExistsBehavior.Increment)
        .WithTraceOptions(TraceOptions.None)
        .RollEvery(RollInterval.Minute)
        .RollAfterSize(10)
        .UseTimeStampPattern("yyyy-MM-dd")
        .ToFile("C:\\logs\\Trace.log")
        .FormatWith(new FormatterBuilder().TextFormatterNamed("textFormatter"));
var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
var writer = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
DateTime stopWritingTime = DateTime.Now.AddMinutes(10);
while (DateTime.Now < stopWritingTime)
{
    writer.Write("test", "General");
}