Logback fileNamePattern for rolling every x days - Scala

I found some predefined fileNamePatterns for TimeBasedRollingPolicy.
Here's one that does every minute.
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>logfile.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>logfile.%d{yyyy-MM-dd_HH-mm}.log</fileNamePattern>
</rollingPolicy>
</appender>
Does anyone know how I can do this for every x days?
Could I extend RollingFileAppender? I am doing this in Scala.

Regardless of whether you use Scala or Java, the answer is easy for x == 1 and x == 7.
For daily rollover, use
<fileNamePattern>logfile.%d{yyyy-MM-dd}.log</fileNamePattern>
and for weekly, write
<fileNamePattern>logfile.%d{yyyy-ww}.log</fileNamePattern>
(actually it rolls over at the start of the week depending on your locale, and not every seven days).
If you want a more general solution, you have to implement a custom rolling or triggering policy, but I don't know why you would need that. If your real problem is the size of the log, note that the log can also be rolled over when a specific size is reached. There are plenty of examples at http://logback.qos.ch/manual/appenders.html
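If you really do need a rollover every x days, one possible route (a minimal, untested sketch, not logback's own solution) is to pair a FixedWindowRollingPolicy with a custom TriggeringPolicy that fires once the configured number of days has elapsed. Only TriggeringPolicyBase and isTriggeringEvent below come from logback; the class name, the days property and the bookkeeping are my own placeholders. Since logback only sees a compiled class, you can just as well write the same thing in Scala.

import java.io.File;

import ch.qos.logback.core.rolling.TriggeringPolicyBase;

// Hypothetical sketch: trigger a rollover once every 'days' days.
// Pair it with a rolling policy (e.g. FixedWindowRollingPolicy) that does the renaming.
public class EveryXDaysTriggeringPolicy<E> extends TriggeringPolicyBase<E> {

    private static final long MS_PER_DAY = 24L * 60 * 60 * 1000;

    private int days = 3;           // assumed to be set from logback.xml via <days>3</days>
    private long nextRolloverAt;

    public void setDays(int days) {
        this.days = days;
    }

    @Override
    public void start() {
        nextRolloverAt = System.currentTimeMillis() + days * MS_PER_DAY;
        super.start();
    }

    @Override
    public boolean isTriggeringEvent(File activeFile, E event) {
        long now = System.currentTimeMillis();
        if (now < nextRolloverAt) {
            return false;
        }
        nextRolloverAt = now + days * MS_PER_DAY;
        return true;
    }
}

You would wire it into logback.xml with a <triggeringPolicy class="..."> element next to the <rollingPolicy>. Note that this sketch keeps its state in memory, so after a restart the next rollover happens x days after the application starts rather than x days after the previous rollover.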

Related

Veins: Get Tripinfo and emissions in output

I'm using Veins 4.6 with OMNeT++ 5.1.1 and I'm trying to output the trip info of vehicles using the following configuration in the .sumocfg file:
<input>
  <net-file value="erlangen.net.xml"/>
  <route-files value="erlangen.rou.xml"/>
  <additional-files value="erlangen.poly.xml"/>
</input>
<time>
  <begin value="0"/>
  <end value="300"/>
  <step-length value="0.1"/>
</time>
<report>
  <no-step-log value="true"/>
</report>
<gui_only>
  <start value="true"/>
</gui_only>
<emissions>
  <device.emissions.probability value="1"/>
</emissions>
<output>
  <tripinfo-output value="erlangen.trip_info.xml"/>
  <fcd-output value="erlangen.fcd.xml"/>
</output>
I have generated 30 random trips for the example network and set the emissionClass="HBEFA3/LDV_G_EU4" attribute on the vType element. When I run the simulation directly in SUMO, it generates the required trip-info file on successful completion:
<tripinfo id="0" depart="0.00" departLane="4006674#0_0" departPos="5.10" departSpeed="0.00" departDelay="0.00" arrival="202.40" arrivalLane="-4006726#0_0" arrivalPos="281.67" arrivalSpeed="13.76" duration="202.40" routeLength="2214.00" waitSteps="0" timeLoss="28.90" rerouteNo="0" devices="tripinfo_0 emissions_0" vType="passenger" speedFactor="1.00" vaporized="">
<emissions CO_abs="16453.885943" CO2_abs="591255.824603" HC_abs="76.174970" PMx_abs="24.476562" NOx_abs="123.285735" fuel_abs="254.203634" electricity_abs="0"/>
</tripinfo>
...
<tripinfo id="29" depart="29.00" departLane="29900564#4_0" departPos="5.10" departSpeed="0.00" departDelay="0.00" arrival="226.10" arrivalLane="-31241838#0_0" arrivalPos="18.39" arrivalSpeed="22.13" duration="197.10" routeLength="2353.60" waitSteps="0" timeLoss="23.99" rerouteNo="0" devices="tripinfo_29 emissions_29" vType="passenger" speedFactor="1.00" vaporized="">
<emissions CO_abs="16826.605518" CO2_abs="612826.831847" HC_abs="78.478455" PMx_abs="25.328690" NOx_abs="126.946877" fuel_abs="263.477812" electricity_abs="0"/>
</tripinfo>
But when I run the same scenario as an OMNeT++ simulation, it finishes with the notification shown in the screenshot and the trip-info file is not generated.
I set the simulation time to 300 s in both the .sumocfg and omnetpp.ini (sim-time-limit = 300s); the screenshot shows that all departed vehicles had arrived by 285.900 s, at which point the simulation stopped with the notification. I have observed this issue multiple times, changing the number of random trips and the simulation time again and again, but all in vain.
Here it is clearly stated that:
The information is generated for each vehicle as soon as the vehicle arrived at its destination and is removed from the network.
But that is not the case for me. Please advise what I'm doing wrong. Thanks.
You most likely ran SUMO via sumo-launchd.py, which creates a temporary copy of your scenario (in /tmp). After the scenario has run, the copy is deleted. This means that if your output files are written to the directory the SUMO simulation is executing in, they will be deleted along with the temporary copy.
There are three ways of preventing that:
Run sumo-launchd.py with a command line switch that disables deletion of the temporary directory, or
Configure SUMO to store the output somewhere else (for example, by giving tripinfo-output an absolute path outside the temporary directory), or
Use a different way of launching SUMO (manually, or using the TraCIScenarioManagerForker).

Create scheduled task using Task Scheduler Managed Wrapper with "Synchronize across time zones" option disabled

Does anybody know how to create a scheduled task using the Task Scheduler Managed Wrapper or schtasks.exe with "Synchronize across time zones" unchecked?
You can do this with schtasks.exe, but it's tricky. Essentially, you have to use the /xml switch and pass an XML file that has the trigger formatted properly.
The basics of the XML file can be determined by getting as much of the required config as possible done in the Task Scheduler GUI on your dev machine, then using Export... from the context menu, saving the file, and chopping out the irrelevant bits.
Given a basic XML structure:
<?xml version="1.0" encoding="UTF-16"?>
<Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
<Triggers>
<CalendarTrigger>
<StartBoundary>2018-03-28T18:00:00Z</StartBoundary>
<Enabled>true</Enabled>
<ScheduleByDay>
<DaysInterval>1</DaysInterval>
</ScheduleByDay>
</CalendarTrigger>
</Triggers>
<Actions Context="Author">
<Exec>
<Command>C:\Windows\System32\cmd.exe</Command>
<Arguments>/c dir</Arguments>
</Exec>
</Actions>
</Task>
The important element here is <StartBoundary/> – it defines both the date/time from which tasks should start running and (in the case of time-based triggers) the time at which it should run each day, week, etc.
If you want Synchronize across timezones to be unchecked:
You must use a start boundary value expressed in the local time at which you want the task to run, and it must not end with the GMT+0/UTC+0/Zulu-time indicator Z:
<StartBoundary>2018-03-28T18:00:00</StartBoundary>
The above should run every day, at 18:00 local time.
If you want Synchronize across timezones to be checked:
You must calculate the GMT+0/UTC+0/Zulu time of the desired start time yourself, based on your local timezone and taking into account any daylight-saving rules it uses, then use this time value and include the Z indicator at the end:
<StartBoundary>2018-03-28T18:00:00Z</StartBoundary>
The above should run every day, at 18:00 UTC, regardless of local time.
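The fiddly part of the checked case is doing that local-to-UTC conversion (including daylight saving) yourself. Purely as an illustration of the calculation (the class name and the choice of the system default timezone are mine, and any language with timezone-aware dates will do), a minimal sketch:

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Hypothetical helper: build the two StartBoundary variants for 18:00 local time on 2018-03-28.
public class StartBoundaryExample {

    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss");

    public static void main(String[] args) {
        LocalDateTime localStart = LocalDateTime.of(2018, 3, 28, 18, 0, 0);

        // "Synchronize across time zones" unchecked: local wall-clock time, no Z suffix.
        String localBoundary = localStart.format(FMT);            // 2018-03-28T18:00:00

        // "Synchronize across time zones" checked: convert the local time to UTC
        // (the zone rules handle daylight saving) and append the Z suffix.
        String utcBoundary = localStart.atZone(ZoneId.systemDefault())
                .withZoneSameInstant(ZoneOffset.UTC)
                .format(FMT) + "Z";

        System.out.println(localBoundary);
        System.out.println(utcBoundary);
    }
}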
To register the above task:
From the command prompt, you would issue:
schtasks.exe /create /tn "My Task Name" /xml x:\pathto\taskdefinition.xml
(You do not need to keep the task definition file after you have registered it; the settings are copied to the created task.)
The difficulty here is probably in creating the XML file — it can be a little finicky around the encoding of the file (you may need to experiment with byte-order markers), and there are some combinations of settings that I have never been able to get to run properly (they register OK, but the running task instantly fails with a strange return code). Your mileage may vary.
I've never tried with the Managed Wrapper, but the source suggests it also generates the XML in the background. However, it appears to use XmlDateTimeSerializationMode.RoundtripKind as its serialization method, which (rightly, for round-tripping) includes the timezone as part of the serialization.
This leads me to think that it will never create a task that has Synchronize across timezones unchecked. In fact, it may mean that, if you can determine the correct timezone suffix for your start time, you might not need to do that Z-based calculation above, yourself.
You might be able to raise a feature request, to have this changed based on a boolean property, e.g.:
writer.WriteElementString("StartBoundary",
    System.Xml.XmlConvert.ToString(t.StartBoundary,
        System.Xml.XmlDateTimeSerializationMode.RoundtripKind));
becoming:
writer.WriteElementString("StartBoundary",
    System.Xml.XmlConvert.ToString(t.StartBoundary,
        SynchronizeAcrossTimezones
            ? System.Xml.XmlDateTimeSerializationMode.RoundtripKind
            : System.Xml.XmlDateTimeSerializationMode.Unspecified));
...but that's not up to me!
Fixed it using the Task Scheduler Managed Wrapper library by specifying DateTimeKind.Unspecified for the StartBoundary (a DateTime whose Kind is Unspecified is serialized without a timezone suffix).

Spring Batch pause and then continue

I'm writing a job that will read x rows from an Excel file, and then I'd like it to pause for an hour before it continues with the next x rows.
How do I do this?
I have a job.xml file which contains the following. The subscriptionDiscoverer fetches the file and passes it over to the processor. The subscriptionWriter should write another file when the processor is done.
<job id="subscriptionJob" xmlns="http://www.springframework.org/schema/batch" incrementer="jobParamsIncrementer">
<validator ref="jobParamsValidator"/>
<step id="readFile">
<tasklet>
<chunk reader="subscriptionDiscoverer" processor="subscriptionProcessor" writer="subscriptionWriter" commit-interval="1" />
</tasklet>
</step>
</job>
Is there some kind of timer I could use, or some kind of flow structure? It's a large file of about 160,000 rows that should be processed.
I hope someone has a solution they would like to share.
Thank you!
I'm thinking of two possible approaches for you to start with:
Stop the job, and restart it again (after an hour) at the last position. You can start by taking a look at how to change the BatchStatus to notify your intent to stop the job. See http://docs.spring.io/spring-batch/2.0.x/cases/pause.html or look at how Spring Batch Admin implements its way of communicating the PAUSE flag (http://docs.spring.io/spring-batch-admin/reference/reference.xhtml). You may need to implement some persistence to store the position (row number) so the job knows where to start processing again. You can use a scheduler as well to restart the job.
-or-
Add a ChunkListener and implement the following in afterChunk(ChunkContext context): check whether x rows have been read so far, and if so, run your pause mechanism (e.g., a simple Thread.sleep, or look for a more robust way of pausing the step). To check the number of rows read, you can use StepExecution.getReadCount() via ChunkContext.getStepContext().getStepExecution(); a minimal sketch is shown after the note below.
Do note that afterChunk is called outside the transaction as indicated in the javadoc:
Callback after the chunk is executed, outside the transaction.
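Here is a minimal sketch of the second approach, assuming a Spring Batch version whose ChunkListener callbacks take a ChunkContext (3.0+). The class name, the row threshold, and the plain Thread.sleep are placeholders for whatever pause mechanism you settle on; with commit-interval="1", the read count grows by one row per chunk.

import org.springframework.batch.core.ChunkListener;
import org.springframework.batch.core.scope.context.ChunkContext;

// Hypothetical sketch: pause the step for an hour every ROWS_PER_BATCH rows read.
public class PauseEveryXRowsChunkListener implements ChunkListener {

    private static final int ROWS_PER_BATCH = 10_000;         // "x" rows, a placeholder value
    private static final long PAUSE_MILLIS = 60L * 60 * 1000; // one hour

    @Override
    public void beforeChunk(ChunkContext context) {
        // nothing to do before the chunk
    }

    @Override
    public void afterChunk(ChunkContext context) {
        int readCount = context.getStepContext().getStepExecution().getReadCount();
        // readCount only grows, so pause once each time another batch of rows has been read
        if (readCount > 0 && readCount % ROWS_PER_BATCH == 0) {
            try {
                Thread.sleep(PAUSE_MILLIS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    @Override
    public void afterChunkError(ChunkContext context) {
        // nothing to do on error
    }
}

The listener would then be registered on the step, e.g. with a <listeners><listener ref="pauseListener"/></listeners> element inside <tasklet>, where pauseListener is a bean of the class above. Bear in mind that sleeping in afterChunk simply blocks the step's thread, so the job stays in a running state for the whole hour.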

Editing Timeline from CCB file in cocos

I did some research into this and couldn't really find anything, so if this is a duplicate question I apologize. Anyway, I have made a CCB file in CocosBuilder and I would like to start the timeline at, for example, one second instead of playing it from the beginning. Is there a way to do this? Thanks for the help guys.
Edit: I would like this to be done in code.
I am using Cocos2d-x version 2.2.1. I don't think there is an option to play the timeline from a given point, but you can tweak things yourself to get it done (it's not simple).
You have to go to CCBAnimationManager, where you will find "mNodeSequences".
It is a dictionary in which you get the different properties, such as rotation and position values (these values are specified in your CCB).
Internally, the animation manager reads these values and puts them into the runAction queue.
So you have to break that up as you want. For example, if you have a 5-minute timeline but want to start from the 1-minute mark, you have to run the first minute of actions without delay, and for the remainder you have to calculate the tween intervals properly.
It's a long procedure and needs calculation. If you don't know any simpler way, try this; if you do know one, please let us know (post it).

How does Solaris SMF determine if something is to be in maintenance or to be restarted?

I have a daemon process that I wrote being executed by SMF. The problem is that when an error occurs, I hit my failure code and the process then needs to restart from scratch. Right now it is calling sys.exit(0) (Python), but SMF keeps throwing it into maintenance mode.
I've worked with SMF enough to know that it sometimes auto-restarts certain services (and lets others fail and have you deal with them like this). How do I classify this process as one that needs to auto-restart? Is it an SMF setting, a method of failing, what?
Manpage
Solaris uses a combination of startd/critical_failure_count and startd/critical_failure_period as described in the svc.startd manpage:
startd/critical_failure_count
startd/critical_failure_period
The critical_failure_count and critical_failure_period properties together specify the maximum number of service failures allowed in a given time interval before svc.startd transitions the service to maintenance. If the number of failures exceeds critical_failure_count in any period of critical_failure_period seconds, svc.startd will transition the service to maintenance.
Defaults in the source code
The defaults can be found in the source; the value depends on whether the service is "wait style":
if (instance_is_wait_style(inst))
    critical_failure_period = RINST_WT_SVC_FAILURE_RATE_NS;
else
    critical_failure_period = RINST_FAILURE_RATE_NS;
The defaults are either 5 failures/10 minutes or 5 failures/second:
#define RINST_START_TIMES 5 /* failures to consider */
#define RINST_FAILURE_RATE_NS 600000000000LL /* 1 failure/10 minutes */
#define RINST_WT_SVC_FAILURE_RATE_NS NANOSEC /* 1 failure/second */
These values can be set as properties in the SMF manifest:
<service_bundle type="manifest" name="npm2es">
  <service name="site/npm2es" type="service" version="1">
    ...
    <property_group name="startd" type="framework">
      <propval name='critical_failure_count' type='integer' value='10'/>
      <propval name='critical_failure_period' type='integer' value='30'/>
      <propval name="ignore_error" type="astring" value="core,signal" />
    </property_group>
    ...
  </service>
</service_bundle>
TL;DR
After checking against the startd values: if the service is "wait style", it will be throttled to a maximum restart rate of 1/sec until it no longer exits with a non-cfg error. If the service is not "wait style", it will be put into maintenance mode.
Presuming a normal service manifest, I would suspect that you're dropping into maintenance because SMF is restarting you "too quickly" (which is a bit arbitrarily defined). svcs -xv should tell you if that is the case. If it is, SMF is restarting you, you're exiting again rapidly, and it has decided to give up until the problem is fixed (and you've manually svcadm clear'd it).
I'd wondered if exiting 0 (and indicating success) may cause further confusion, but it doesn't appear that it will.
I don't think Oracle Solaris allows you to tune what SMF considers "too quickly".
You have to create a service manifest. This is more complicated than it sounds. The document below has example manifests and describes the manifest structure:
http://www.oracle.com/technetwork/server-storage/solaris/solaris-smf-manifest-wp-167902.pdf
As it turns out, I had two pkills in a row to make sure everything was terminated correctly. The second one, naturally, was exiting with something other than 0. Changing the script to end with an exit 0 solved the problem.