Parse a JSON file and get the NextStep from the current step in Workflow Core (Daniel Gerlag)

Is there any built-in method or shortcut in Daniel Gerlag's Workflow Core for parsing a JSON file (which has a list of steps, where each step has a name and a corresponding next step)? And from the JSON file, how do I get the NextStep in my existing backend API (a .cs file function)?
This is my JSON:
"Id": "NextTaskWorkFlow",
"Steps": [
{
"StepType": "select-customer",
"NextStepId": "create-project"
},
{
"StepType": "create-project",
"NextStepId": "add-estimated"
},
]
}
In my .cs file function, I want to get the next step like this:
NextStep = Steps.Single(x => x.StepType == outcome.StepType).NextStepId;
Please help.
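Workflow Core's DSL extension (the WorkflowCore.DSL package) can load whole workflow definitions from JSON, but if all you need is the NextStepId lookup, deserializing the file yourself is enough. Below is a minimal sketch using System.Text.Json rather than any Workflow Core API; the WorkflowFile and StepEntry class names and the GetNextStep helper are made up for illustration:

using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

public class StepEntry
{
    // property names match the JSON keys exactly
    public string StepType { get; set; }
    public string NextStepId { get; set; }
}

public class WorkflowFile
{
    public string Id { get; set; }
    public List<StepEntry> Steps { get; set; }
}

public static class WorkflowJson
{
    public static string GetNextStep(string json, string currentStepType)
    {
        var workflow = JsonSerializer.Deserialize<WorkflowFile>(json);
        // Single() throws if the step is missing or duplicated;
        // use SingleOrDefault() if you would rather get null back
        return workflow.Steps
            .Single(x => x.StepType == currentStepType)
            .NextStepId;
    }
}

Load the file with File.ReadAllText(path) and pass the contents in; Newtonsoft.Json's JsonConvert.DeserializeObject<WorkflowFile>(json) works the same way if System.Text.Json is not available to you.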


VS Code snippet: how to use variable transforms twice in a row

See the following snippet:
"srcPath":{
"prefix": "getSrcPath",
"body": [
"$TM_FILEPATH",
"${1:${TM_FILEPATH/(.*)src.(.*)/${2}/i}}",
"${TM_FILEPATH/[\\\\]/./g}"
]
},
The output of the three body lines is:
D:\root\src\view\test.lua
view\test.lua
D:.root.src.view.test.lua
How can I get output like 'view/test.lua'?
Try this snippet:
"srcPath":{
"prefix": "getSrcPath",
"body": [
"$TM_FILEPATH",
"${TM_FILEPATH/.*src.|(\\\\)/${1:+/}/g}",
"${TM_FILEPATH/[\\\\]/\\//g}"
]
}
.*src.|(\\\\) will match everything up to and including the ...src\ part of the path. We don't save that part in a capture group because we aren't using it in the replacement part of the transform.
The (\\\\) matches any \ in the rest of the path; you need the g flag to get them all.
The replacement ${1:+/} means: if there is a capture group 1 from .*src.|(\\\\), replace it with a /. Note that we don't match the rest of the path after src\, only the \'s that might follow it, so the unmatched path parts simply remain in the result.
You were close on this one:
"${TM_FILEPATH/[\\\\]/\\//g}" just replaces every \ with /.
With the extension File Templates you can insert a "snippet" that contains a variable and multiple find-replace operations.
With a key binding:
{
  "key": "ctrl+alt+f",  // or any other combo
  "command": "templates.pasteTemplate",
  "args": {
    "text": [
      "${relativeFile#find=.*?src/(.*)#replace=$1#find=[\\\\/]#flags=g#replace=.#}"
    ]
  }
}
At the moment this is only possible with a key binding or via multi-command (or similar); I will open an issue to also make it possible by prefix.
Also, some of the standard variables are missing.

Create a Gatling custom feeder for large JSON data files

I am new to Gatling and Scala, and I am trying to create a test with a custom 'feeder' that would allow each load-test thread to use (and reuse) one of about 250 JSON data files as a POST payload.
Each payload file has 1000 records of this form:
[{
  "zip": "66221-2115",
  "recordId": "18378e10-e046-4ad3-9293-0847f8a05b2f",
  "firstName": "ANGELA",
  "lastName": "MADEUP",
  "city": "Springfield",
  "street": "123 Fake St",
  "state": "KS",
  "email": "AMADEUP@GMAIL.COM"
},
...
]
(files are about 250kB each)
Ideally, I would like to read them in at the start of the test kind of like this:
int fileCount = 3;
ClassLoader classLoader = getClass().getClassLoader();
List<File> files = new ArrayList<>();
for (int i = 0; i < fileCount; i++) {
    String fileName = String.format("identityMatching/address_data_%d.json", i);
    File file = new File(classLoader.getResource(fileName).getFile());
    files.add(file);
}
and then get the file contents with something like:
FileUtils.readFileToString(files.get(1), StandardCharsets.UTF_8)
I am now fiddling with getting this code working in Scala, but I am wondering about a couple of things:
1) Can I make this code into a feeder so that I can use it like a CSV feeder?
2) When should I load the JSON from the files into memory: at the start of the test, or when each thread needs the data?
I haven't received any answers so I will post what I have learned.
1) I was able to use a feeder with the filenames in it (not the file content).
2) I think the best approach for reading the data in is:
.body(RawFileBody(jsonMessage))
RawFileBody(path: Expression[String]), where path is the location of a file that will be uploaded as-is
(from https://gatling.io/docs/current/http/http_request)
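Putting those two pieces together, here is a minimal sketch of a filename feeder in the Gatling Scala DSL (Gatling 3 syntax; the base URL, endpoint, and payloadPath key are assumptions):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class AddressPostSimulation extends Simulation {

  // feeder records hold classpath-relative file paths, not file contents;
  // .circular lets virtual users reuse the ~250 files indefinitely
  val fileFeeder = (0 until 250)
    .map(i => Map("payloadPath" -> s"identityMatching/address_data_$i.json"))
    .circular

  val httpProtocol = http.baseUrl("https://test.example.com") // assumed

  val scn = scenario("Post address payloads")
    .feed(fileFeeder)
    .exec(
      http("post payload")
        .post("/identity/match") // assumed endpoint
        .header("Content-Type", "application/json")
        // the file is read and sent as-is when the request fires,
        // so payloads are not all held in memory up front
        .body(RawFileBody("${payloadPath}"))
    )

  setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol)
}

This also answers question 2: with RawFileBody, Gatling reads each file only when a virtual user actually sends it, so there is no need to preload all of the JSON at the start of the test.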

Converting a string that represents a list into an actual list in Jython?

I have a string in Jython that represents a JSON array of objects:
[{"datetime": 1570216445000, "type": "test"},{"datetime": 1570216455000, "type": "test2"}]
If I try to iterate over this, though, it just iterates over each character. How can I make it iterate over the actual list so I can get each JSON object out?
Background info: this script is being run in Apache NiFi, and below is the code that the string originates from:
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
...
def process(self, inputStream):
    text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
You can parse the JSON just as you would in Python.
Sample Code:
import json

# Sample JSON text
text = '[{"datetime": 1570216445000, "type": "test"},{"datetime": 1570216455000, "type": "test2"}]'

# Parse the JSON text
obj = json.loads(text)

# 'obj' is a list of dictionaries
print obj[0]['type']
print obj[1]['type']
Output:
> jython json_string_to_object.py
test
test2
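Since json.loads returns a regular Python list here, you can also iterate over it directly:
for entry in obj:
    print entry['datetime'], entry['type']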

Hello, I'm trying to read a JSON file and sort it against a template so that things end up in a specific order; can I get some pointers on how to do it?

I found out how to read from it, but I can't seem to find the information I need on how to order it using my own template and then write it to a different JSON file. I'm using Scala.
Usually, to transform data from one JSON file to another, you need to parse it into some data structures in memory (case classes, Scala collections, etc.), transform those, and serialize them back to a file.
Circe is a rather inefficient JSON parser for this, especially when it needs to parse files. Its core parser works only with strings, which requires reading the whole file into RAM and converting it from encoded bytes (usually UTF-8) into a string; even its alternative Jawn parser reads the whole file into a byte array, then converts it to a string, and only then starts parsing. Its formatter also has a lot of overhead: the whole output is serialized to a string or byte buffer before you can start writing it to the file.
Much better would be to use the circe-jackson integration, or better still jackson-module-scala: both support reading from a FileInputStream and writing to a FileOutputStream.
One of the most efficient Scala parsers and serializers for buffered reading/writing from/to files is jsoniter-scala; an example of parse-transform-serialize code with it is below.
Say we have the following content in the JSON file:
{
  "name": "John",
  "devices": [
    {
      "id": 1,
      "model": "HTC One X"
    }
  ]
}
And we are going to transform it to:
{
  "name": "John",
  "devices": [
    {
      "id": 1,
      "model": "HTC One X"
    },
    {
      "id": 2,
      "model": "iPhone X"
    }
  ]
}
Here is how we can do it with jsoniter-scala:
libraryDependencies ++= Seq(
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-core" % "0.29.2" % Compile,
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-macros" % "0.29.2" % Provided // required only at compile time
)
// import required packages
import java.io._
import com.github.plokhotnyuk.jsoniter_scala.macros._
import com.github.plokhotnyuk.jsoniter_scala.core._

// define a model that mimics the JSON format
case class Device(id: Int, model: String)
case class User(name: String, devices: Seq[Device])

// create a codec for the type that corresponds to the root of the JSON
implicit val codec = JsonCodecMaker.make[User](CodecMakerConfig())

// read & parse JSON from the file into your data structures
val user = {
  val fis = new FileInputStream("/tmp/input.json")
  try readFromStream(fis)
  finally fis.close()
}

// transform your data
val newUser = user
  .copy(devices = user.devices :+ Device(id = 2, model = "iPhone X"))

// write your transformed data to the JSON file
val fos = new FileOutputStream("/tmp/output.json")
try writeToStream(newUser, fos)
finally fos.close()
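For comparison, here is a minimal sketch of the jackson-module-scala route mentioned above, reusing the same User and Device case classes (the dependency is "com.fasterxml.jackson.module" %% "jackson-module-scala"):

import java.io.{FileInputStream, FileOutputStream}
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)

// Jackson parses incrementally from the stream, so the whole file
// is never materialized as a single String
val fis = new FileInputStream("/tmp/input.json")
val user = try mapper.readValue(fis, classOf[User]) finally fis.close()

val newUser = user.copy(devices = user.devices :+ Device(id = 2, model = "iPhone X"))

val fos = new FileOutputStream("/tmp/output.json")
try mapper.writeValue(fos, newUser)
finally fos.close()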
Your question is very abstract, but here's a good library for JSON parsing and manipulation in Scala:
https://github.com/circe/circe

Protovis - dealing with a text source

Let's say I have a text file with lines like these:
[4/20/11 17:07:12:875 CEST] 00000059 FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on D:/Prgs/testing/WebSphere/AppServer/profiles/ProcCtr01/logs/ffdc/server1_3d203d20_11.04.20_17.07.12.8755227341908890183253.txt com.test.testserver.management.cmdframework.CmdNotificationListener 134
[4/20/11 17:07:27:609 CEST] 0000005d wle E CWLLG2229E: An exception occurred in an EJB call. Error: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
com.lombardisoftware.core.TeamWorksException: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
at com.lombardisoftware.server.ejb.persistence.CommonDAO.assertNotNull(CommonDAO.java:70)
Is there any way to easily import a data source such as this into Protovis? If not, what would be the easiest way to parse this into a JSON format? For example, the first entry might be parsed like so:
[
  {
    "Date": "4/20/11 17:07:12:875 CEST",
    "Status": "00000059",
    "Msg": "FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I"
  }
]
Thanks, David
Protovis itself doesn't offer any utilities for parsing text files, so your options are:
Use JavaScript to parse the text into an object, most likely using regexes.
Pre-process the text using the text-parsing language or utility of your choice, exporting a JSON file.
Which you choose depends on several factors:
Is the data somewhat static, or are you going to be running this on a new or dynamic file each time you look at it? With static data, it might be easiest to pre-process; with dynamic data, this may add an annoying extra step.
How much data do you have? Parsing a 20 KB text file in JavaScript is totally fine; parsing a 2 MB file will be really slow, and will cause the browser to hang while it's working (unless you use Web Workers).
If there's a lot of processing involved, would you rather put that load on the server (by using a server-side script for pre-processing) or on the client (by doing it in the browser)?
If you wanted to do this in Javascript, based on the sample you provided, you might do something like this:
// Assumes var text = 'your text';
// use the utility of your choice to load your text file into the
// variable (e.g. jQuery.get()), or just paste it in.
var lines = text.split(/[\r\n\f]+/),
    // regex to match the beginning of a log entry; \w{8} rather than \d{8}
    // because status codes like "0000005d" contain hex letters
    patt = /^\[(\d\d?\/\d\d?\/\d\d? \d\d:\d\d:\d\d:\d{3} [A-Z]+)\] (\w{8})/,
    items = [],
    currentItem;

// loop through the lines in the file
lines.forEach(function(line) {
    // look for the beginning of a log entry
    var initialData = line.match(patt);
    if (initialData) {
        // start a new item, using the captured matches
        currentItem = {
            Date: initialData[1],
            Status: initialData[2],
            Msg: line.substr(initialData[0].length + 1)
        };
        items.push(currentItem);
    } else {
        // this is a continuation of the last item
        currentItem.Msg += "\n" + line;
    }
});
// items now contains an array of objects with your data
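From there you can hand items to Protovis like any other array. As a purely illustrative sketch (the mark choice and sizing are arbitrary), here is one bar per log entry, scaled by message length:

var vis = new pv.Panel()
    .width(items.length * 20)
    .height(200);

vis.add(pv.Bar)
    .data(items)
    .left(function() { return this.index * 20; })
    .width(15)
    .bottom(0)
    .height(function(d) { return Math.min(d.Msg.length, 200); });

vis.render();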