I'm developing a simple JScript script to be run by Windows Script Host.
This script needs to read some data from the Task Scheduler. I have no clue how to get started.
I've already implemented similar functionality in C++ using the Task Scheduler 2.0 interfaces.
Can I use those interfaces from JScript somehow?
No, you can't use the Task Scheduler 2.0 interfaces from JScript.
What you can do, however, is read the XML files that the Task Scheduler creates. They contain all properties of all defined tasks.
They reside in %windir%\system32\tasks (you need Administrator permissions to read this directory and its contents).
Here is an example of such a file; it's very straightforward XML:
<Task version="1.1" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
    <RegistrationInfo>
        <Author>SYSTEM</Author>
        <Description>Some text here...</Description>
    </RegistrationInfo>
    <Triggers>
        <LogonTrigger>
            <Enabled>true</Enabled>
        </LogonTrigger>
        <CalendarTrigger>
            <Enabled>true</Enabled>
            <StartBoundary>2015-07-16T05:32:00</StartBoundary>
            <ScheduleByDay>
                <DaysInterval>1</DaysInterval>
            </ScheduleByDay>
        </CalendarTrigger>
    </Triggers>
    <Settings>
        <Enabled>true</Enabled>
        <ExecutionTimeLimit>PT0S</ExecutionTimeLimit>
        <Hidden>false</Hidden>
        <WakeToRun>false</WakeToRun>
        <DisallowStartIfOnBatteries>false</DisallowStartIfOnBatteries>
        <StopIfGoingOnBatteries>false</StopIfGoingOnBatteries>
        <RunOnlyIfIdle>false</RunOnlyIfIdle>
        <Priority>5</Priority>
        <IdleSettings>
            <Duration>PT600S</Duration>
            <WaitTimeout>PT3600S</WaitTimeout>
            <StopOnIdleEnd>false</StopOnIdleEnd>
            <RestartOnIdle>false</RestartOnIdle>
        </IdleSettings>
    </Settings>
    <Principals>
        <Principal id="Author">
            <UserId>System</UserId>
            <RunLevel>HighestAvailable</RunLevel>
            <LogonType>InteractiveTokenOrPassword</LogonType>
        </Principal>
    </Principals>
    <Actions Context="Author">
        <Exec>
            <Command>C:\path\to\executable.exe</Command>
            <Arguments>/args</Arguments>
        </Exec>
    </Actions>
</Task>
List of things to find out:
How to run a script with elevated permissions.
How to navigate a directory structure using the FileSystemObject.
How to open XML files using the MSXML2 COM objects.
How to use XPath to navigate those XML documents.
How to deal with a default XML namespace (this is more important than it sounds - you won't get any results from XPath until you do this part correctly).
If necessary for your task, find out how ISO 8601 time period notation works so you can decode values like PT600S.
Luckily, for all of those things there are any number of examples available (on this site and elsewhere) to get you started; a small sketch tying them together follows.
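To make those pieces concrete, here is a minimal JScript sketch, assuming the task XML layout shown above (the prefix t and the property echoed are illustrative choices; the namespace URI is the one the task files actually use). Run it with cscript.exe from an elevated prompt; to self-elevate, one common trick is relaunching the script through Shell.Application's ShellExecute with the "runas" verb.

// List the executable of every defined task (run elevated with cscript.exe).
var fso = new ActiveXObject("Scripting.FileSystemObject");
var shell = new ActiveXObject("WScript.Shell");
var tasksDir = shell.ExpandEnvironmentStrings("%windir%") + "\\system32\\tasks";

var doc = new ActiveXObject("MSXML2.DOMDocument.6.0");
doc.async = false;
doc.setProperty("SelectionLanguage", "XPath");
// Bind the default namespace to a prefix; without this, XPath returns nothing.
doc.setProperty("SelectionNamespaces",
    'xmlns:t="http://schemas.microsoft.com/windows/2004/02/mit/task"');

function walk(folder) {
    var files = new Enumerator(folder.Files);
    for (; !files.atEnd(); files.moveNext()) {
        var file = files.item();
        if (!doc.load(file.Path)) continue; // skip anything that isn't task XML
        var cmd = doc.selectSingleNode("/t:Task/t:Actions/t:Exec/t:Command");
        if (cmd) WScript.Echo(file.Name + ": " + cmd.text);
    }
    var subs = new Enumerator(folder.SubFolders);
    for (; !subs.atEnd(); subs.moveNext()) walk(subs.item());
}

walk(fso.GetFolder(tasksDir));

And for the last point, a small helper for the simple ISO 8601 periods the task files use (time components only; a full parser would also handle date components):

function periodToSeconds(p) {
    var m = /^PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$/.exec(p);
    if (!m) return null;
    return (m[1] || 0) * 3600 + (m[2] || 0) * 60 + (m[3] || 0) * 1;
}
// periodToSeconds("PT600S") === 600, periodToSeconds("PT3600S") === 3600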
The server.log says "signature_status": "UNVERIFIED". Is this a certificate issue?
Also, what are the best ways to read the PingFederate logs on a Windows machine?
That sounds like an issue with signature verification, which could be the cert itself but is more likely a configuration issue. More information is really needed to know which it is.
I assume the issue you are having with reading logs on Windows machines is that the files are large or are moving quickly. If the files are too big, you can modify the log4j2.xml config file at appdir/pingfed*/pingfed*/server/default/conf/log4j2.xml to reduce the log size to something easier to read in Notepad. Here is an example rolling file appender that should leave you with easily manageable files.
<RollingFile name="FILE" fileName="${sys:pf.log.dir}/server.log"
             filePattern="${sys:pf.log.dir}/server.log.%i" ignoreExceptions="false">
    <PatternLayout>
        <!-- Uncomment this if you want to use UTF-8 encoding instead
             of system's default encoding.
        <charset>UTF-8</charset> -->
        <pattern>%d %X{trackingid} %-5p [%c] %m%n</pattern>
    </PatternLayout>
    <Policies>
        <SizeBasedTriggeringPolicy size="20000 KB" />
    </Policies>
    <DefaultRolloverStrategy max="5" />
</RollingFile>
If your issue is that the files are moving too fast to read, then you might consider using something like BareTail, or Get-Content in PowerShell now that it has a tail switch (e.g. Get-Content server.log -Wait -Tail 50).
Following is my log4j2 configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="trace" name="MyApp" packages="com.swimap.base.launcher.log">
    <Appenders>
        <RollingFile name="RollingFile" fileName="logs/app-${date:MM-dd-yyyy-HH-mm-ss-SSS}.log"
                     filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
            <PatternLayout>
                <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
            </PatternLayout>
            <Policies>
                <SizeBasedTriggeringPolicy size="1 KB"/>
            </Policies>
            <DefaultRolloverStrategy max="3"/>
        </RollingFile>
    </Appenders>
    <Loggers>
        <Root level="trace">
            <AppenderRef ref="RollingFile"/>
        </Root>
    </Loggers>
</Configuration>
The issue is that each time I start up my service, a new log file is created even if the old one has not reached the specified size. If the program restarts frequently, I end up with many files ending in '.log' that never get compressed.
The logs I get look like this:
/log4j2/logs
/log4j2/logs/2017-07
/log4j2/logs/2017-07/app-07-18-2017-1.log.gz
/log4j2/logs/2017-07/app-07-18-2017-2.log.gz
/log4j2/logs/2017-07/app-07-18-2017-3.log.gz
/log4j2/logs/app-07-18-2017-20-42-06-173.log
/log4j2/logs/app-07-18-2017-20-42-12-284.log
/log4j2/logs/app-07-18-2017-20-42-16-797.log
/log4j2/logs/app-07-18-2017-20-42-21-269.log
Can someone tell me how I can append to the existing log file when I start up my program? Many thanks for anything that brings me closer to the answer!
I suppose your problem is that you have fileName="logs/app-${date:MM-dd-yyyy-HH-mm-ss-SSS}.log" in your log4j2 configuration file.
This fileName template means that log4j2 will create a log file whose name contains the current date + hours + minutes + seconds + milliseconds.
You should probably remove the HH-mm-ss-SSS section; this will give you a daily rolling file and will stop a new file being created on every app restart.
You can play with the template and choose the format that you need.
If you want only one log file forever, then use a constant fileName, like fileName="app.log".
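For example, here is a sketch of the appender above with only the fileName made static (assuming one current app.log that rolls by size suits you); restarts then append to app.log instead of creating a new file:

<RollingFile name="RollingFile" fileName="logs/app.log"
             filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
    <PatternLayout>
        <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
    </PatternLayout>
    <Policies>
        <SizeBasedTriggeringPolicy size="1 KB"/>
    </Policies>
    <DefaultRolloverStrategy max="3"/>
</RollingFile>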
It's not hard to implement this yourself. There is an interface DirectFileRolloverStrategy; implement the method below:
public String getCurrentFileName(RollingFileManager manager)
Maybe someone who has met the same problem will find this helpful.
I am implementing a UIMA pipeline with a CASMultiplier and UIMA AS. I have a Segmenter Analysis Engine (a CASMultiplier) and an Analysis Engine (Annotator A). I created an Aggregate Analysis Engine from the Segmenter and Annotator A, and then I created a UIMA AS deployment descriptor file with the intention that the Segmenter produces CASes and Annotator A processes those CASes concurrently. The contents of the aggregate analysis engine descriptor file and the deployment descriptor file are as follows:
AAE descriptor file:
<analysisEngineDescription xmlns="http://uima.apache.org/resourceSpecifier">
    <frameworkImplementation>org.apache.uima.java</frameworkImplementation>
    <primitive>false</primitive>
    <delegateAnalysisEngineSpecifiers>
        <delegateAnalysisEngine key="Segmenter">
            <import location="../cas_multiplier/SimpleTextSegmenter.xml"/>
        </delegateAnalysisEngine>
        <delegateAnalysisEngine key="AnnotatorA">
            <import location="AnnotatorA.xml"/>
        </delegateAnalysisEngine>
    </delegateAnalysisEngineSpecifiers>
    <analysisEngineMetaData>
        <name>Segmenter and AnnotatorA</name>
        <description>Splits a document into pieces and runs Annotator on each
            piece independently. All segments are output.</description>
        <configurationParameters/>
        <configurationParameterSettings/>
        <flowConstraints>
            <fixedFlow>
                <node>Segmenter</node>
                <node>AnnotatorA</node>
            </fixedFlow>
        </flowConstraints>
        <capabilities>
            <capability>
                <inputs/>
                <outputs>
                    <type allAnnotatorFeatures="true">com.trang.uima.types.Target</type>
                    <type allAnnotatorFeatures="true">com.eg.uima.types.IntermediateResult</type>
                </outputs>
                <languagesSupported/>
            </capability>
        </capabilities>
        <operationalProperties>
            <modifiesCas>true</modifiesCas>
            <multipleDeploymentAllowed>true</multipleDeploymentAllowed>
            <outputsNewCASes>true</outputsNewCASes>
        </operationalProperties>
    </analysisEngineMetaData>
</analysisEngineDescription>
Deployment descriptor file:
<?xml version="1.0" encoding="UTF-8"?>
<analysisEngineDeploymentDescription xmlns="http://uima.apache.org/resourceSpecifier">
    <name>SegmenterAndBackTranstion</name>
    <description>Deploys Segmenter and BackTranskation with 3 instances of BackTransation</description>
    <version/>
    <vendor/>
    <deployment protocol="jms" provider="activemq">
        <casPool numberOfCASes="5" initialFsHeapSize="2000000"/>
        <service>
            <inputQueue endpoint="SegmentAnBackTranslationQueue" brokerURL="tcp://localhost:61616" prefetch="0"/>
            <topDescriptor>
                <import location="../../descriptors/langrid_uima/SegmenterAndBackTranslationAE.xml"/>
            </topDescriptor>
            <analysisEngine async="false">
                <scaleout numberOfInstances="5"/>
                <casMultiplier poolSize="8" initialFsHeapSize="2000000" processParentLast="false"/>
                <asyncPrimitiveErrorConfiguration>
                    <processCasErrors thresholdCount="0" thresholdWindow="0" thresholdAction="terminate"/>
                    <collectionProcessCompleteErrors timeout="0" additionalErrorAction="terminate"/>
                </asyncPrimitiveErrorConfiguration>
            </analysisEngine>
        </service>
    </deployment>
</analysisEngineDeploymentDescription>
With this setup I run the pipeline; however, it seems the CASes are processed synchronously, one at a time.
Could anyone tell me what I am doing wrong? Is there a way to process the CASes produced by the CASMultiplier concurrently?
Thank you very much!
I'm trying to use the RollingFlatFileTraceListener to provide rolling logs in my app, alongside the XmlLogFormatter so that the logs are in an XML format; however, the app no longer seems to be logging anything.
<listeners>
<clear />
<add name="Rolling Flat File Trace Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.505.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.505.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
fileName="C:\Inetpub\logs\rolling.log" rollFileExistsBehavior="Increment" header="~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" footer="~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" formatter="Error Formatter"
rollSizeKB="12499" maxArchivedFiles="3200" traceOutputOptions="None" timeStampPattern="yyyy-MM-dd" rollInterval="Midnight" />
</listeners>
<formatters>
<clear />
<add type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.XmlLogFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.505.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" template="Timestamp: {timestamp(local)}
Message: {message}
Category: {category}
Severity: {severity}
Title:{title}
Machine: {machine}
Extended Properties: {dictionary({key} - {value}
)}" name="Error Formatter" />
</formatters>
Since the application isn't hard faulting, I can't see any errors to diagnose, yet I know that I should see some trace logs by now.
Update:
Current progress is that I've been able to use the RollingFlatFileTraceListenerData with the TextFormatter, making use of the template to specify XML. The two items that don't work yet are:
The file has no XML declaration
The file has no root element; instead it has many root elements
Any thoughts on how to tack that on to the start and end of the file?
The out-of-the-box trace listeners do not support a file header or file footer concept. As you've seen, they basically just append to the file. Even if you used the .NET Framework System.Diagnostics.XmlWriterTraceListener, it only writes XML fragments and not well-formed XML documents.
One way to achieve what you want would be to create a separate process that modifies the archived files to be well-formed after they have been rolled. The downside of that approach is that the active log file is not well-formed.
If that is a concern, then you will probably have to create a custom trace listener to do what you want. Instead of simply appending to the log file, it could overwrite the XML document's closing tag (e.g. </logfile>, to match the example below) with the latest LogEntry followed by the closing tag.
Another interesting approach, from the article Efficient Techniques for Modifying Large XML Files, is to create a well-formed XML document that pulls in the fragment file as an external entity. E.g.
<?xml version="1.0"?>
<!DOCTYPE logfile [
    <!ENTITY events SYSTEM "XMLFile1.xml">
]>
<logfile>
    &events;
</logfile>
We have a JBoss ESB server which reads files from the file system on a schedule (frequency of 20 sec), converts them into ESB messages, and then parses the messages.
There are some other providers/listeners (JMS) and services configured on the ESB server. When there is an error in one of the services, it affects the process above. The file system provider (gateway) works fine, but the jms-listener that takes the gateway messages does not, and lots of messages accumulate in the JBM queue (the jbm_msg Oracle DB table).
Here is the problem: when the server is restarted, messages in the JBM queue are parsed by the ESB for just 20 seconds (the schedule frequency of the fs-provider), then it never processes messages again and CPU usage goes up to 100% and stays there. We believe the fs-provider somehow interrupts the jms-provider.
Is there any configuration we have been missing?
Here are the configuration files that we have:
jboss-esb.xml
<?xml version="1.0" encoding="UTF-8"?>
<jbossesb xmlns="http://anonsvn.labs.jboss.com/labs/jbossesb/trunk/product/etc/schemas/xml/jbossesb-1.0.1.xsd" parameterReloadSecs="5">
    <providers>
        <fs-provider name="SitaIstProvider">
            <fs-bus busid="gw_sita_ist">
                <fs-message-filter
                    directory="/ikarussita/IST/IN"
                    input-suffix=".RCV"
                    work-suffix=".lck"
                    post-delete="false"
                    post-directory="/ikarussita/IST/OK"
                    post-suffix=".ok"
                    error-delete="false"
                    error-directory="/ikarussita/IST/ERR"
                    error-suffix=".err"/>
            </fs-bus>
        </fs-provider>
        <jms-provider name="SitaESBQueue" connection-factory="ConnectionFactory">
            <jms-bus busid="esb_sita_queue">
                <jms-message-filter dest-type="QUEUE" dest-name="queue/esb_sita_queue"/>
            </jms-bus>
        </jms-provider>
    </providers>
    <services>
        <service category="SITA" name="SITA_IST" description="SITA Daemon For ISTCOXH">
            <listeners>
                <fs-listener name="Sita_Ist_Gateway" busidref="gw_sita_ist" is-gateway="true" schedule-frequency="20"/>
                <jms-listener name="Jms_Sita_EsbAware" busidref="esb_sita_queue"/>
            </listeners>
            <actions mep="OneWay">
                <action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage"/>
                <action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender"/>
            </actions>
        </service>
    </services>
</jbossesb>
jbm-queue-service.xml
<?xml version="1.0" encoding="UTF-8"?>
<server>
    <mbean code="org.jboss.jms.server.destination.QueueService"
           name="jboss.messaging.destination:service=Queue,name=esb_sita_queue"
           xmbean-dd="xmdesc/Queue-xmbean.xml">
        <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
        <depends>jboss.messaging:service=PostOffice</depends>
    </mbean>
</server>
deployment.xml
<jbossesb-deployment>
    <depends>jboss.messaging.destination:service=Queue,name=esb_sita_queue</depends>
</jbossesb-deployment>
Thanks
Split the service into two separate services, one handling the JMS queue and the other the file poller, and specify the same action pipeline in both. That way you get the same functionality but without the threading issue; a sketch based on the jboss-esb.xml above follows. Also, use the max-threads attr on the listener to specify the number of reading threads.
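A hedged sketch of that split, reusing the names from the configuration above (the new service names and the thread count are illustrative; verify the exact thread-count attribute spelling against your ESB schema version):

<services>
    <service category="SITA" name="SITA_IST_FILE" description="SITA file poller for ISTCOXH">
        <listeners>
            <fs-listener name="Sita_Ist_Gateway" busidref="gw_sita_ist" is-gateway="true" schedule-frequency="20"/>
        </listeners>
        <actions mep="OneWay">
            <action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage"/>
            <action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender"/>
        </actions>
    </service>
    <service category="SITA" name="SITA_IST_JMS" description="SITA JMS consumer for ISTCOXH">
        <listeners>
            <!-- thread-count attribute per the note above; illustrative value -->
            <jms-listener name="Jms_Sita_EsbAware" busidref="esb_sita_queue" maxThreads="5"/>
        </listeners>
        <actions mep="OneWay">
            <action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage"/>
            <action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender"/>
        </actions>
    </service>
</services>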