Uber Cadence workflow versioning

Cadence workflow versioning:
The workflow implementation has the code below:
int version1 = Workflow.getVersion("change123", Workflow.DEFAULT_VERSION, 1);
logger.info("version1: {}", version1);
Start of workflow 1, output: version1: 1
Change in the workflow implementation:
int version1 = Workflow.getVersion("change1234", Workflow.DEFAULT_VERSION, 2);
logger.info("version1: {}", version1);
Start of workflow 2, output: version1: 2
Change in the workflow implementation:
int version1 = Workflow.getVersion("change12345", Workflow.DEFAULT_VERSION, 3);
logger.info("version1: {}", version1);
Start of workflow 3, output: version1: 3
Replay of workflow 2, output: version1: -1
Replay of workflow 1, output: version1: -1
I was expecting the output below:
Replay of workflow 2, output: version1: 2
Replay of workflow 1, output: version1: 1
So this seems to mean we can only maintain two versions of the workflow code at a time: the current version and Workflow.DEFAULT_VERSION, which getVersion returns for all older versions.
Please correct me if I am wrong!

The problem is that you are passing in different changeIDs.
To maintain different versions of the same part of the workflow, you have to keep using the same changeID, unless that changeID has been deprecated and you later want to change the same part again.
The changeID is the identifier that distinguishes the different parts of the workflow that change. For example, say your workflow is:
String resA = activities.executeA();
String resB = activities.executeB();
If you want to change executeA to executeAA, it becomes:
int versionA = Workflow.getVersion("changeToA", Workflow.DEFAULT_VERSION, 1);
if (versionA == 1) {
    String resA = activities.executeAA();
} else {
    String resA = activities.executeA();
}
String resB = activities.executeB();
Then, if you later want to change executeAA to executeAAA, it becomes:
int versionA = Workflow.getVersion("changeToA", Workflow.DEFAULT_VERSION, 2);
if (versionA == 2) {
    String resA = activities.executeAAA();
} else if (versionA == 1) {
    String resA = activities.executeAA();
} else {
    String resA = activities.executeA();
}
String resB = activities.executeB();
That way you can maintain multiple versions.
If you then want to change executeB to executeBB, you have to use another changeID:
int versionA = Workflow.getVersion("changeToA", Workflow.DEFAULT_VERSION, 2);
if (versionA == 2) {
    String resA = activities.executeAAA();
} else if (versionA == 1) {
    String resA = activities.executeAA();
} else {
    String resA = activities.executeA();
}
int versionB = Workflow.getVersion("changeToB", Workflow.DEFAULT_VERSION, 1);
if (versionB == 1) {
    String resB = activities.executeBB();
} else {
    String resB = activities.executeB();
}
The key here is to understand the changeID. It is indeed a little tricky, but once you get it, it's powerful.
In your code, because you use a different changeID every time, the already-started workflows (workflow1 and workflow2) will always get DEFAULT_VERSION (-1).
Internally, in executing mode (non-replay), getVersion writes a Marker event containing the maxVersion into the history.
In replay mode, if a Marker exists (the changeID must match, otherwise the Marker is ignored), getVersion returns the version stored in the Marker. If no Marker exists (for example because workflow1 never executed a getVersion call for that changeID), it returns the minVersion (in your case DEFAULT_VERSION). If the version from the Marker is lower than minVersion, it returns an error.
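For illustration, here is a minimal sketch (not from the answer above) of how the snippet from the question could look if every deployment kept the single changeID "change123" and only bumped maxVersion; the branch bodies are placeholders:
int version1 = Workflow.getVersion("change123", Workflow.DEFAULT_VERSION, 3);
logger.info("version1: {}", version1);
if (version1 == 3) {
    // logic introduced by the third change
} else if (version1 == 2) {
    // logic introduced by the second change
} else if (version1 == 1) {
    // logic introduced by the first change
} else { // Workflow.DEFAULT_VERSION (-1)
    // original logic, for workflows started before any getVersion call
}
Had all three deployments shared this changeID, a replay of workflow 1 would return 1 and a replay of workflow 2 would return 2, because each run recorded its Marker for "change123" when it first executed.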
For more details, see the documentation:
for golang
for java

Mirth - iterate over insurance HL7 segments

I'm working with an ADT message template in Mirth and having issues with the IN1 and IN2 segments, the IN2 specifically.
Here's a sample message I'm working with; I've removed almost all other segments.
MSH|^~&|EPIC|AMB||99|20220403165344|RELEASEAUTO|ADT^A04|367476|T|2.5|||AL|NE
IN1|1|10500201|105^Test|BCBS NC BLUE CARE^Test1|PO BOX 35^^DURHAM^NC^27702^||
IN2|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||first iteration IN1-62||
IN1|1|10500201|106^Test|BCBS NC BLUE CARE^Test1|PO BOX 35^^DURHAM^NC^27702^||
IN2|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||second iteration IN1-62||
So far I've built the following to iterate over the IN1 segment:
//IN1 Segments
var IN1count = 0;
for each (seg in msg.IN1) {
    createSegment('IN1', output, IN1count);
    output.IN1[IN1count]['IN1.2'] = seg['IN1.2'];
    output.IN1[IN1count]['IN1.3'] = seg['IN1.3'];
    output.IN1[IN1count]['IN1.4'] = seg['IN1.4'];
    output.IN1[IN1count]['IN1.8'] = seg['IN1.8'];
    output.IN1[IN1count]['IN1.9'] = seg['IN1.9'];
    output.IN1[IN1count]['IN1.10'] = seg['IN1.10'];
    output.IN1[IN1count]['IN1.12'] = seg['IN1.12'];
    output.IN1[IN1count]['IN1.13'] = seg['IN1.13'];
    output.IN1[IN1count]['IN1.22'] = seg['IN1.22'];
    output.IN1[IN1count]['IN1.36'] = seg['IN1.36'];
    IN1count++;
}
I'm struggling to map the IN2 segment correctly on the outbound. I've read about getSegmentsAfter, but I can't find that function documented online. How do I write the JavaScript to check whether a given IN1 has an IN2 after it, specifically one with IN2-61 populated, and only then create the IN2 segment?
Hope that makes sense :)
You'll find code for createSegmentAfter here; it is JavaScript inserted before your transformer: https://github.com/nextgenhealthcare/connect/blob/2f3740db44c8d42aa6fafffd991b9c1fde940ea0/server/src/com/mirth/connect/server/builders/JavaScriptBuilder.java
One approach to keeping track of whether you just passed an IN1 would be to loop over all segments with something like this:
var was_in1 = false;
var last_in1;
for each (seg in msg.children()) {
    if (was_in1) {
        if (seg.name() == "IN2") {
            // existing IN2 follows IN1
        } else {
            // insert new IN2
        }
    }
    if (seg.name() == "IN1") {
        was_in1 = true;
        last_in1 = seg;
    } else {
        was_in1 = false;
    }
}
It'd be nice if there was a "nextSibling()" method for messages, but there isn't. Luckily, we can simulate it by:
Getting the current object's childIndex
Getting all children from the current object's parent
Accessing the next child with the index childIndex + 1
It'd look like this:
for each (seg in msg.IN1) {
    // Do your IN1 stuff ...
    var nextSeg = seg.parent().children()[seg.childIndex() + 1];
    if (nextSeg != null && nextSeg.name() == 'IN2' && nextSeg['IN2.61'].toString() != '') {
        // Do your IN2 stuff ...
    }
}

Efficient way to optimise Scala code that reads a large file which doesn't fit in memory

Problem statement:
We have a large log file that stores user interactions with an application. The entries in the log file follow this schema: {userId, timestamp, actionType}, where actionType is one of two possible values: [open, close]
Constraints:
The log file is too big to fit in memory on one machine. Also assume that the aggregated data doesn’t fit into memory.
Code has to be able to run on a single machine.
Should not use an out-of-the-box implementation of MapReduce or a 3rd-party database; don't assume we have a Hadoop or Spark or other distributed computing framework.
There can be multiple entries of each actionType for each user, and there might be missing entries in the log file. So a user might be missing a close record between two open records or vice versa.
Timestamps will come in strictly ascending order.
For this problem, we need to implement a class/classes that computes the average time spent by each user between open and close. Keep in mind that there are missing entries for some users, so we will have to make a choice about how to handle these entries when making our calculations. Code should follow a consistent policy with regards to how we make that choice.
The desired output for the solution should be [{userId, timeSpent},….] for all the users in the log file.
Sample log file (comma-separated, text file)
1,1435456566,open
2,1435457643,open
3,1435458912,open
1,1435459567,close
4,1435460345,open
1,1435461234,open
2,1435462567,close
1,1435463456,open
3,1435464398,close
4,1435465122,close
1,1435466775,close
Approach
Below is the code I've written in Scala and Python. It doesn't seem efficient enough for the given scenario, and I'd like feedback from the community on how to optimise it.
Scala implementation
import java.io.FileInputStream
import java.util.{Scanner, Map, LinkedList}
import java.lang.Long
import scala.collection.mutable

object UserMetrics extends App {
  if (args.length == 0) {
    println("Please provide input data file name for processing")
  }
  val userMetrics = new UserMetrics()
  userMetrics.readInputFile(args(0), if (args.length == 1) 600000 else args(1).toInt)
}

case class UserInfo(userId: Integer, prevTimeStamp: Long, prevStatus: String, timeSpent: Long, occurence: Integer)

class UserMetrics {
  val usermap = mutable.Map[Integer, LinkedList[UserInfo]]()

  def readInputFile(stArr: String, timeOut: Int) {
    var inputStream: FileInputStream = null
    var sc: Scanner = null
    try {
      inputStream = new FileInputStream(stArr);
      sc = new Scanner(inputStream, "UTF-8");
      while (sc.hasNextLine()) {
        val line: String = sc.nextLine();
        processInput(line, timeOut)
      }
      for ((key: Integer, userLs: LinkedList[UserInfo]) <- usermap) {
        val userInfo: UserInfo = userLs.get(0)
        val timespent = if (userInfo.occurence > 0) userInfo.timeSpent / userInfo.occurence else 0
        println("{" + key + "," + timespent + "}")
      }
      if (sc.ioException() != null) {
        throw sc.ioException();
      }
    } finally {
      if (inputStream != null) {
        inputStream.close();
      }
      if (sc != null) {
        sc.close();
      }
    }
  }

  def processInput(line: String, timeOut: Int) {
    val strSp = line.split(",")
    val userId: Integer = Integer.parseInt(strSp(0))
    val curTimeStamp = Long.parseLong(strSp(1))
    val status = strSp(2)
    val uInfo: UserInfo = UserInfo(userId, curTimeStamp, status, 0, 0)
    val emptyUserInfo: LinkedList[UserInfo] = new LinkedList[UserInfo]()
    val lsUserInfo: LinkedList[UserInfo] = usermap.getOrElse(userId, emptyUserInfo)
    if (lsUserInfo != null && lsUserInfo.size() > 0) {
      val lastUserInfo: UserInfo = lsUserInfo.get(lsUserInfo.size() - 1)
      val prevTimeStamp: Long = lastUserInfo.prevTimeStamp
      val prevStatus: String = lastUserInfo.prevStatus
      if (prevStatus.equals("open")) {
        if (status.equals(lastUserInfo.prevStatus)) {
          val timeSelector = if ((curTimeStamp - prevTimeStamp) > timeOut) timeOut else curTimeStamp - prevTimeStamp
          val timeDiff = lastUserInfo.timeSpent + timeSelector
          lsUserInfo.remove()
          lsUserInfo.add(UserInfo(userId, curTimeStamp, status, timeDiff, lastUserInfo.occurence + 1))
        } else if (!status.equals(lastUserInfo.prevStatus)) {
          val timeDiff = lastUserInfo.timeSpent + curTimeStamp - prevTimeStamp
          lsUserInfo.remove()
          lsUserInfo.add(UserInfo(userId, curTimeStamp, status, timeDiff, lastUserInfo.occurence + 1))
        }
      } else if (prevStatus.equals("close")) {
        if (status.equals(lastUserInfo.prevStatus)) {
          lsUserInfo.remove()
          val timeSelector = if ((curTimeStamp - prevTimeStamp) > timeOut) timeOut else curTimeStamp - prevTimeStamp
          lsUserInfo.add(UserInfo(userId, curTimeStamp, status, lastUserInfo.timeSpent + timeSelector, lastUserInfo.occurence + 1))
        } else if (!status.equals(lastUserInfo.prevStatus)) {
          lsUserInfo.remove()
          lsUserInfo.add(UserInfo(userId, curTimeStamp, status, lastUserInfo.timeSpent, lastUserInfo.occurence))
        }
      }
    } else if (lsUserInfo.size() == 0) {
      lsUserInfo.add(uInfo)
    }
    usermap.put(userId, lsUserInfo)
  }
}
Python Implementation
import sys

def fileBlockStream(fp, number_of_blocks, block):
    # A generator that splits a file into blocks and iterates over the lines of one of the blocks.
    assert 0 <= block and block < number_of_blocks  # validate the block index
    assert 0 < number_of_blocks
    fp.seek(0, 2)  # seek to end of file to compute block size
    file_size = fp.tell()
    ini = file_size * block // number_of_blocks  # compute start & end point of file block
    end = file_size * (1 + block) // number_of_blocks
    if ini <= 0:
        fp.seek(0)
    else:
        fp.seek(ini - 1)
        fp.readline()
    while fp.tell() < end:
        yield fp.readline()  # iterate over lines of the particular chunk or block

def computeResultDS(chunk, avgTimeSpentDict, defaultTimeOut):
    countPos, totTmPos, openTmPos, closeTmPos, nextEventPos = 0, 1, 2, 3, 4
    for rows in chunk.splitlines():
        if len(rows.split(",")) != 3:
            continue
        userKeyID = rows.split(",")[0]
        try:
            curTimeStamp = int(rows.split(",")[1])
        except ValueError:
            print("Invalid Timestamp for ID:" + str(userKeyID))
            continue
        curEvent = rows.split(",")[2]
        if userKeyID in avgTimeSpentDict.keys() and avgTimeSpentDict[userKeyID][nextEventPos] == 1 and curEvent == "close":
            # Check if already existing userID with expected Close event 0 - Open; 1 - Close
            # Array value within dictionary stores [No. of pair events, total time spent (Close tm - Open tm), Last Open Tm, Last Close Tm, Next expected Event]
            curTotalTime = curTimeStamp - avgTimeSpentDict[userKeyID][openTmPos]
            totalTime = curTotalTime + avgTimeSpentDict[userKeyID][totTmPos]
            eventCount = avgTimeSpentDict[userKeyID][countPos] + 1
            avgTimeSpentDict[userKeyID][countPos] = eventCount
            avgTimeSpentDict[userKeyID][totTmPos] = totalTime
            avgTimeSpentDict[userKeyID][closeTmPos] = curTimeStamp
            avgTimeSpentDict[userKeyID][nextEventPos] = 0  # Change next expected event to Open
        elif userKeyID in avgTimeSpentDict.keys() and avgTimeSpentDict[userKeyID][nextEventPos] == 0 and curEvent == "open":
            avgTimeSpentDict[userKeyID][openTmPos] = curTimeStamp
            avgTimeSpentDict[userKeyID][nextEventPos] = 1  # Change next expected event to Close
        elif userKeyID in avgTimeSpentDict.keys() and avgTimeSpentDict[userKeyID][nextEventPos] == 1 and curEvent == "open":
            curTotalTime, closeTime = missingHandler(defaultTimeOut, avgTimeSpentDict[userKeyID][openTmPos], curTimeStamp)
            totalTime = curTotalTime + avgTimeSpentDict[userKeyID][totTmPos]
            avgTimeSpentDict[userKeyID][totTmPos] = totalTime
            avgTimeSpentDict[userKeyID][closeTmPos] = closeTime
            avgTimeSpentDict[userKeyID][openTmPos] = curTimeStamp
            eventCount = avgTimeSpentDict[userKeyID][countPos] + 1
            avgTimeSpentDict[userKeyID][countPos] = eventCount
        elif userKeyID in avgTimeSpentDict.keys() and avgTimeSpentDict[userKeyID][nextEventPos] == 0 and curEvent == "close":
            curTotalTime, openTime = missingHandler(defaultTimeOut, avgTimeSpentDict[userKeyID][closeTmPos], curTimeStamp)
            totalTime = curTotalTime + avgTimeSpentDict[userKeyID][totTmPos]
            avgTimeSpentDict[userKeyID][totTmPos] = totalTime
            avgTimeSpentDict[userKeyID][openTmPos] = openTime
            eventCount = avgTimeSpentDict[userKeyID][countPos] + 1
            avgTimeSpentDict[userKeyID][countPos] = eventCount
        elif curEvent == "open":
            # Initialize userid with Open event
            avgTimeSpentDict[userKeyID] = [0, 0, curTimeStamp, 0, 1]
        elif curEvent == "close":
            # Initialize userid with the missing handler function since there is no Open event for this user
            totaltime, OpenTime = missingHandler(defaultTimeOut, 0, curTimeStamp)
            avgTimeSpentDict[userKeyID] = [1, totaltime, OpenTime, curTimeStamp, 0]

def missingHandler(defaultTimeOut, curTimeVal, lastTimeVal):
    if lastTimeVal - curTimeVal > defaultTimeOut:
        return defaultTimeOut, curTimeVal
    else:
        return lastTimeVal - curTimeVal, curTimeVal

def computeAvg(avgTimeSpentDict, defaultTimeOut):
    resDict = {}
    for k, v in avgTimeSpentDict.items():
        if v[0] == 0:
            resDict[k] = 0
        else:
            resDict[k] = v[1] / v[0]
    return resDict

if __name__ == "__main__":
    avgTimeSpentDict = {}
    if len(sys.argv) < 2:
        print("Please provide input data file name for processing")
        sys.exit(1)
    fileObj = open(sys.argv[1])
    number_of_chunks = 4 if len(sys.argv) < 3 else int(sys.argv[2])
    defaultTimeOut = 60000 if len(sys.argv) < 4 else int(sys.argv[3])
    for chunk_number in range(number_of_chunks):
        for chunk in fileBlockStream(fileObj, number_of_chunks, chunk_number):
            computeResultDS(chunk, avgTimeSpentDict, defaultTimeOut)
    print(computeAvg(avgTimeSpentDict, defaultTimeOut))
    avgTimeSpentDict.clear()  # clear the dictionary
    fileObj.close()  # close the file object
Both programs above give the desired output, but efficiency is what matters for this particular scenario. Let me know if you have anything better or any suggestions on the existing implementation.
Thanks in advance!
What you are after is iterator usage. I'm not going to rewrite your code, but the trick here is to use an iterator, and fortunately Scala provides decent out-of-the-box tooling for the job.
import scala.io.Source

object ReadBigFiles {
  def read(fileName: String): Unit = {
    val lines: Iterator[String] = Source.fromFile(fileName).getLines
    // now you get iterator semantics for the file line traversal
    // that means you can only go through the lines once, but you don't incur a penalty on heap usage
  }
}
For your use case, you seem to require a lastUser, so you're dealing with groups of 2 entries. I think you have two choices: either go for iterator.sliding(2), which will produce an iterator over every pair, or simply add recursion to the mix using Options.
def navigate(source: Iterator[String], last: Option[User]): ResultType = {
  if (source.hasNext) {
    val current = source.next()
    last match {
      case Some(existing) => // compare with previous user etc
      case None => navigate(source, Some(current))
    }
  } else {
    // exit recursion, return result
  }
}
You can avoid all the code you've written to read the file and so on. If you need to count occurrences, simply build a Map inside your recursion, and increment the occurrences at every step based on your business logic.
from queue import LifoQueue, Queue

def averageTime() -> None:
    logs = {}
    records = Queue()
    with open("log.txt") as fp:
        for line in fp:
            # each line is "userId,timestamp,actionType"
            user_id, timestamp, action = line.strip().split(",")
            if user_id not in logs:
                logs[user_id] = LifoQueue()
            logs[user_id].put((int(timestamp), action))
    for k in logs:
        somme = 0
        count = 0
        while not logs[k].empty():
            ts, action = logs[k].get()
            # close timestamps add to the total, open timestamps subtract,
            # so each open/close pair contributes (close - open)
            somme = (somme + ts) if action == "close" else (somme - ts)
            count = count + 1
        records.put([k, somme, count // 2])
    while not records.empty():
        record = records.get()
        print(f"UserId={record[0]} Avg={record[1]/record[2]}")

Katalon Studio deleting groovy code from script when switching to manual or recording

Did anybody see this as well, or am I doing something wrong?
I am working on a test case in Katalon Studio, in script mode. I have some Groovy code in it (in particular a class definition; see the start of the code below). When I switch to manual or record mode, and then back to script mode, the Groovy code (the class declaration in the example below) has disappeared...
Not very practical! Is there anything I should do to avoid this?
Many thanks!
E.
Code example:
//Katalon Imports here

class Product {
    String nozo
    String price_string
    Number qty = 1
    Number price_ht = 0
    Number price_ttc = 0

    Product(String nozo, String price_string, Number qty = 1) {
        this.nozo = nozo
        this.price_string = price_string
        def get_price = (this.price_string =~ /(\d+)\s(\d+\.\d{2})/)
        if (get_price) {
            this.price_ttc = get_price[1] + get_price[2]
        }
        else this.price_ttc = 0
        this.price_ht = this.price_ttc / 1.2
    }

    def get_price_order_line_ht() {
        return this.price_ht * this.qty
    }
}

// Initialisation of test data
Number qty_pdt1 = 2

'Open home page'
WebUI.openBrowser('http://localhost:8080/')

'Navigate to subrange'
WebUI.doubleClick(findTestObject('Object Repository/vb_desktop/home_page_desktop_fr/a_Lampadaire'))

// Etc...

Manipulate Date/Time in RDF4J for Debugging

I'm using RDF4J 2.2.1 on Windows 10 Professional 64-bit. I will have some SPIN constructor rules which are sensitive to date/time. For example, I may want to compare a triple containing an xsd:dateTime datatype property to the output of SPARQL's built-in now() function. To debug this functionality, it would be convenient to manipulate RDF4J's perception of date/time somehow rather than manipulating the system clock. I'm aware that there is general commercial software (e.g. Solution Soft's "Time Machine") that can generally manipulate the perception of time for any Windows process. However, this software appears to be far too expensive for our little proof-of-concept project.
What I'd like to be able to do:
Set RDF4J's date/time to some arbitrary date/time value.
Have RDF4J's date/time proceed at real time speed or at some programmable faster speed during debugging.
Does anyone have suggestions for how to manipulate date/time for RDF4J in this manner? It would make my debugging of time-sensitive SPIN rules much more efficient. I'd prefer not to fight my PC's system clock, since many other things depend on it. I suppose running an entire virtual PC and debugging on it is another option, but it seems there should be a simpler way.
Thanks.
You could accomplish this by implementing a custom SPARQL function and using that instead of the actual now() function. Call it mock_now() for example. Since you implement it, you have full control over its behavior.
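For example, here is a rough sketch of what such a function could look like (the class name, namespace, and fixed 30-day offset are illustrative assumptions, and registering the function with RDF4J via SPI is not shown):
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import org.eclipse.rdf4j.model.Value;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.query.algebra.evaluation.ValueExprEvaluationException;
import org.eclipse.rdf4j.query.algebra.evaluation.function.Function;

// Hypothetical mock_now(): returns the real current time shifted by a fixed offset.
public class MockNow implements Function {
    private static final String NAMESPACE = "http://example.org/mock#"; // illustrative namespace

    @Override
    public String getURI() {
        return NAMESPACE + "mock_now";
    }

    @Override
    public Value evaluate(ValueFactory vf, Value... args) throws ValueExprEvaluationException {
        if (args.length != 0) {
            throw new ValueExprEvaluationException("mock_now() takes no arguments");
        }
        // Pretend "now" is 30 days in the past (an arbitrary offset chosen for debugging).
        ZonedDateTime shifted = ZonedDateTime.now().minusDays(30);
        return vf.createLiteral(
                shifted.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME),
                vf.createIRI("http://www.w3.org/2001/XMLSchema#dateTime"));
    }
}
A SPARQL query would then call it as BIND(<http://example.org/mock#mock_now>() AS ?testDateTime), or via a prefixed name bound to that namespace.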
I'm posting my solution to my own question in the hope that it might help others as a further example of a custom SPARQL function under RDF4J. I don't hold this out as an elegant solution (due to how I set test conditions), but it does work and meets my requirements. This solution extends the answer from @jeen_broekstra, based on http://docs.rdf4j.org/custom-sparql-functions/...
I now have a custom function, implemented in the namespace defined by PREFIX soo: <http://www.disa.mil/dso/a2i/ontologies/PBSM/Sharing/SpectrumOperationsOntology#>, called soo:spectrumOpsDateTime(), which takes either three arguments or none. The three-argument form sets up the scaled date/time as follows.
First argument: xsd:boolean... use system clock if true or use scaled clock if false
Second argument: xsd:dateTime (ignored if first argument is true)... the starting date/time for scaled clock operation
Third argument: xsd:double (ignored if first argument is true)... the scaled clock rate (e.g. 2.0 means the scaled clock runs faster, at twice real time)
If there are no arguments, soo:spectrumOpsDateTime() returns the scaled date/time or the system date/time depending on what the initial values in the Java code specify or what the last three-argument call specified. The SPARQL and SPIN code under test will use only the no-argument version. Test setup queries will set up the time conditions for particular tests.
Here's an example SPARQL setup query to set up a 2x speed starting this morning:
PREFIX soo: <http://www.disa.mil/dso/a2i/ontologies/PBSM/Sharing/SpectrumOperationsOntology#>
SELECT DISTINCT *
WHERE {
BIND(soo:spectrumOpsDateTime("false"^^xsd:boolean, "2017-08-22T10:49:21.019-05:00"^^xsd:dateTime, "2.0"^^xsd:double) AS ?testDateTime) .
}
Here's an example SPARQL query to get the scaled date/time:
PREFIX soo: <http://www.disa.mil/dso/a2i/ontologies/PBSM/Sharing/SpectrumOperationsOntology#>
SELECT DISTINCT *
WHERE {
BIND(soo:spectrumOpsDateTime() AS ?testDateTime) .
}
The single class used to implement this custom function is:
/**
*
*/
package mil.disa.dso.spo.a2i.nsc.sharing2025.scaledDateTime;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;
import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.Literal;
import org.eclipse.rdf4j.model.Value;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.model.impl.SimpleValueFactory;
import org.eclipse.rdf4j.query.algebra.evaluation.ValueExprEvaluationException;
import org.eclipse.rdf4j.query.algebra.evaluation.function.Function;
/**
 * Class for generating a configurable date/time clock that can either be a pass-through of the
 * system clock or a scaled clock starting at a specified date/time running at a specified
 * rate from that specified time (first call).
 * @author Greg Cox of Roberson and Associates &copy; Copyright 2017 Roberson and Associates, All Rights Reserved
 *
 */
public class DateTimeGenerator implements Function {
private static final String thisClassName = "RDF4JCustomSPARQLFunction." + DateTimeGenerator.class.getSimpleName();
private static final String thisClassFullName = DateTimeGenerator.class.getName();
private static final boolean errorMessages = true;
private static final boolean verboseMessages = true;
private double clockPace = 2.0; // the speed of the clock, 1.0 is real time, 2.0 is 2x real time (double speed)
private boolean useSystemClock = false; // flag to indicate whether to use scaled clock or pass through the system clock
private ZonedDateTime startingRealDateTime = null; // the real time stamp at the first call to the evaluate function
private ZonedDateTime startingScaledDateTime = // the scaled time stamp (starting scaled time) at the first call to the evaluate function
ZonedDateTime.parse("2016-08-21T17:29:37.568-05:00");
// define a constant for the namespace of custom function
private static String NAMESPACE = "http://www.disa.mil/dso/a2i/ontologies/PBSM/Sharing/SpectrumOperationsOntology#"; // defined as soo: elsewhere
// this is the evaluate function needed to implement the RDF4J Function interface
// it can take 0 or 3 arguments
// 0 - get the current scaled time (starting by first call)
// 3 - useSystemClock flag (true/false), starting date/time (xsd:dateTime), clock pace (non-negative real w/ 1.0 meaning 1sec = 1sec)
@SuppressWarnings("unused")
@Override
public Value evaluate(ValueFactory valueFactory, Value... args) throws ValueExprEvaluationException {
String thisMethodMessagePrefix = "";
if (errorMessages || verboseMessages ) {
String thisMethodName = ".evaluate: ";
thisMethodMessagePrefix = thisClassName + thisMethodName;
}
if (args.length == 3) {
// Three arguments --> attempting to set mode/parameters, so attempt to parse/check them
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "attempting to set scaled clock mode/parameters");
boolean argErrFlag = false;
boolean newUseSystemClock = false;
String argErrMessage = "";
// first argument should be true/false on whether to use system clock (true) or scaled clock (false)
if (!(args[0] instanceof Literal)) {
argErrFlag = true;
argErrMessage += "first argument must be a literal true/false value... ";
} else {
String useSystemClockString = args[0].stringValue();
if (useSystemClockString.equalsIgnoreCase("true")) {
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "use system clock specified");
newUseSystemClock = true;
} else if (useSystemClockString.equalsIgnoreCase("false")) {
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "use scaled clock specified");
newUseSystemClock = false;
}
else {
argErrFlag = true;
argErrMessage += "first argument must be a literal true/false value... ";
}
}
// second argument should be starting date/time for scaled clock (ignore if using system clock)
ZonedDateTime startTime = null;
if (!newUseSystemClock) {
if (!(args[1] instanceof Literal)) {
argErrFlag = true;
argErrMessage += "second argument must be literal xsd:dateTime value for start of scaled date/time... ";
} else {
String startDateTimeString = args[1].stringValue();
try {
startTime = ZonedDateTime.parse(startDateTimeString);
} catch (Exception e) {
argErrFlag = true;
argErrMessage += "could not parse starting date/time... " + e.getMessage() + "... ";
}
}
}
// third argument should be clock pace for scaled clock (ignore if using system clock)
Double newClockPace = null;
if (!newUseSystemClock) {
if (!(args[2] instanceof Literal)) {
argErrFlag = true;
argErrMessage += "third argument must be literal xsd:double value for clock pace... ";
} else {
String clockPaceString = args[2].stringValue();
try {
newClockPace = Double.parseDouble(clockPaceString);
} catch (Exception e) {
argErrFlag = true;
argErrMessage += "could not parse clock pace which should be a positive xsd:double... ";
}
if ((newClockPace != null) && (newClockPace <= 0.0)) {
argErrFlag = true;
argErrMessage += "clock pace must be positive, got " + newClockPace + "... ";
}
}
}
// check for errors and set up the generator if no errors...
if (argErrFlag) {
if (errorMessages) System.err.println(thisMethodMessagePrefix + "ERROR - " + argErrMessage);
if (errorMessages) System.err.println(thisMethodMessagePrefix + "throwing exception...");
throw new ValueExprEvaluationException(
"spectrum operations time function soo:spectrumOpsDateTime() encountered errors in function arguments... " +
argErrMessage);
} else if (newUseSystemClock) {
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "using unscaled system clock");
useSystemClock = newUseSystemClock;
} else if (!newUseSystemClock) {
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "using scaled time");
useSystemClock = newUseSystemClock;
startingRealDateTime = ZonedDateTime.now();
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "setting starting real time to " + startingRealDateTime.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "setting start time to " + startTime.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
startingScaledDateTime = startTime;
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "setting clock pace to " + String.format("%5.2f", newClockPace * 100.0) + "%");
clockPace = newClockPace;
}
} else if (args.length != 0) { // can only have no arguments or three arguments...
throw new ValueExprEvaluationException(
"spectrum operations time function soo:spectrumOpsDateTime() requires "
+ "zero arguments or three arguments, got "
+ args.length + " arguments");
}
// now run the generator and return the result...
IRI xsdDateTimeIRI = valueFactory.createIRI("http://www.w3.org/2001/XMLSchema#dateTime"); // long-form equivalent to xsd:dateTime
if (useSystemClock) {
String unscaledTimeString = millisTrailingZeroes(ZonedDateTime.now().format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
return valueFactory.createLiteral(unscaledTimeString, xsdDateTimeIRI);
} else {
errString = null;
String scaledTimeString = millisTrailingZeroes(getScaledDateTime().format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
if (scaledTimeString == null) {
if (errorMessages) System.err.println(thisMethodMessagePrefix + "ERROR - scaled time returned null");
if (errorMessages) System.err.println(thisMethodMessagePrefix + "thowing exception...");
throw new ValueExprEvaluationException("could not generate valid scaled time string" + ((errString == null) ? "" : "... " + errString));
}
return valueFactory.createLiteral(scaledTimeString, xsdDateTimeIRI);
}
}
private static String errString = null;
/**
 * Utility method to make all the millisecond fields of an <tt>ISO_OFFSET_DATE_TIME</tt> three digits by
 * adding trailing zeroes as needed. Why? Because of trouble with various implementations interpreting
 * 1 and 2 digit milliseconds differently. Should be standard decimal, but sometimes interpreted
 * as a number of milliseconds (e.g. .39 interpreted as 39 milliseconds instead of 390 milliseconds).
 * @param isoDateTimeString <tt>ISO_OFFSET_DATE_TIME</tt> string to check for millisecond field length
 * @return <tt>ISO_OFFSET_DATE_TIME</tt> string with trailing zeroes added to the milliseconds field
 * as required to make the field three digits, or <tt>null</tt> on error
*/
private static String millisTrailingZeroes(String isoDateTimeString) {
if (isoDateTimeString == null) {
errString = "DateTimeGenerator.millisTrailingZeroes: got null isoDateTimeString argument, returning null...";
return null;
}
String[] ss_l1 = isoDateTimeString.split("\\."); // Example: 2017-08-18T13:01:05.39-05:00 --> 2017-08-18T13:01:05 AND 39-05:00
if (ss_l1.length != 2) {
errString = "DateTImeGenerator.millisTrailingZeros: first parsing split of isoDateTimeString=" + isoDateTimeString + " by '.' got unexpected number of parts=" + ss_l1.length;
return null;
}
String[] ss_l2 = ss_l1[1].split("-"); // 39-05:00 --> 39 AND 05:00
if (ss_l2.length != 2) {
errString = "DateTImeGenerator.millisTrailingZeros: second parsing split of " + ss_l1[1] + " by '-' got unexpected number of parts=" + ss_l2.length;
return null;
}
if (ss_l2[0].length() == 1) {
ss_l2[0] = ss_l2[0] + "00";
} else if (ss_l2[0].length() == 2)
ss_l2[0] = ss_l2[0] + "0"; // 39 --> 390
return ss_l1[0] + "." + ss_l2[0] + "-" + ss_l2[1]; // 2017-08-18T13:01:05.390-05:00
}
/**
 * Method to get the current scaled date time according to the state of this DateTimeGenerator.
 * If <tt>useSystemClock</tt> is <tt>true</tt>, then time is not
 * scaled and system time is returned instead of scaled time.
 * @return the scaled date/time if <tt>useSystemClock</tt> is <tt>false</tt>, or
 * the system date/time if <tt>useSystemClock</tt> is <tt>true</tt>
*/
private ZonedDateTime getScaledDateTime() {
ZonedDateTime scaledDateTime = null;
if (useSystemClock) {
scaledDateTime = ZonedDateTime.now();
} else {
if (startingRealDateTime == null)
startingRealDateTime = ZonedDateTime.now();
long realMillisFromFirstCall = ChronoUnit.MILLIS.between(startingRealDateTime, ZonedDateTime.now());
long scaledMillisFromFirstCall = (long) ((double) realMillisFromFirstCall * clockPace);
scaledDateTime = ChronoUnit.MILLIS.addTo(startingScaledDateTime, scaledMillisFromFirstCall);
}
return scaledDateTime;
}
@Override
public String getURI() {
return NAMESPACE + "spectrumOpsDateTime";
}
/**
 * Test main method
 * @param args command line arguments (ignored)
 */
@SuppressWarnings("unused")
public static void main(String[] args) {
String thisMethodMessagePrefix = "";
if (errorMessages || verboseMessages ) {
String thisMethodName = ".main: ";
thisMethodMessagePrefix = thisClassName + thisMethodName;
}
DateTimeGenerator testGen = new DateTimeGenerator();
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "custom SPARQL method URI: " + testGen.getURI());
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "fully-qualified class name: " + thisClassFullName);
ValueFactory testVF = SimpleValueFactory.getInstance();
Value testValues[] = new Value[0];
while (true) {
if (verboseMessages) System.out.println(thisMethodMessagePrefix + "scaled: " + testGen.evaluate(testVF, testValues).stringValue() +
" current real: " + millisTrailingZeroes(ZonedDateTime.now().format(DateTimeFormatter.ISO_OFFSET_DATE_TIME)));
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}
In my case, the jar file exported from Eclipse runs under my installation of Apache Tomcat and resides at C:\Apache\apache-tomcat-8.5.15\webapps\rdf4j-server\WEB-INF\lib\ScaledDateTime.jar. I restart the Tomcat server after replacing this jar file when I make modifications.

Finding the "active" branch in an EGit repository

I have to modify a faulty program/feature, which is an extension to org.eclipse.jgit.
The program overrides the close method of the PushResultDialog and passes the Repository to a method.
Here I want to find the branch that was actually pushed/checked out. I am only interested in whether it is master; if it isn't, I don't want to do anything.
Otherwise I need the list of the pushed files.
At first the program looked like this:
head = repo.resolve(Constants.HEAD);
RevCommit commit = rw.parseCommit(head);
PersonIdent committerIdent = commit.getCommitterIdent();
sCommitter = committerIdent.getName();
String sBranch = "?";
for (Map.Entry<String, Ref> e : repo.getAllRefs().entrySet()) {
    if (e.getKey().startsWith(Constants.R_HEADS)) {
        Ref ref = e.getValue();
        if (rw.isMergedInto(commit, rw.parseCommit(ref.getObjectId()))) {
            sTemp = ref.getName();
            int i = sTemp.lastIndexOf('/');
            if (i == -1)
                continue;
            sBranch = sTemp.substring(i + 1);
            System.out.println("Ref " + sBranch
                    + " < contains > " + commit);
            if (sBranch.equalsIgnoreCase("master")) {
                break;
            } else {
                return;
            }
        }
    }
}
RevCommit parent = rw.parseCommit(commit.getParent(0).getId());
DiffFormatter df = new DiffFormatter(
        DisabledOutputStream.INSTANCE);
df.setRepository(repo);
df.setDiffComparator(RawTextComparator.DEFAULT);
df.setDetectRenames(true);
List<DiffEntry> diffs = df.scan(parent.getTree(),
        commit.getTree());
for (DiffEntry diff : diffs) {
    sTemp = diff.getNewPath();
    pushedObjects.add(sTemp);
}
Now, this works as long as the workflow in Eclipse is just "checkout master", "pull", "merge branch", "push".
Any other order seems to mess with the order of the branches in the set, and it stumbles over this "else":
if (sBranch.equalsIgnoreCase("master")) {
    break;
} else {
    return;
}
The question is: is there an easy method to pick the right branch?
OK, I think I can simply look at the HEAD entry in the set:
e.getKey().startsWith(Constants.HEAD)
and then parse the branch name from it. That should always be the branch that is checked out, and if (in my case) it is master, I am happy.
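As a side note, a minimal sketch of that check using the Repository API directly (repo being the org.eclipse.jgit.lib.Repository already passed in; Repository.getBranch() resolves HEAD to the short name of the checked-out branch and may throw IOException, like repo.resolve() above):
// getBranch() returns e.g. "master" for the branch HEAD currently points to
String currentBranch = repo.getBranch();
if (!"master".equalsIgnoreCase(currentBranch)) {
    return; // not on master, nothing to do
}
// otherwise continue with the DiffFormatter scan of HEAD against its parent, as before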