How to merge random ID generation code into data files? - katalon-studio

How to merge random ID generation code into data files?
I have written code that generates a random ItemID (with a timestamp appended).
I also have a test case that reads an Excel workbook stored as a data file; the first column of the data file is ItemID.
How do I merge the random ItemID code into the existing data file code,
so that I do not need to keep updating the Excel file and a new ItemID is used on each test execution?
Code for the random ID:
@Keyword
String getUniqueName() {
    String prodName = ('ItemID' + Integer.toString(getRandomNumber(1, 99))) + timeStamp()
    return prodName
}
Code for the file upload:
def requestObject = builder.withRestRequestMethod('POST')
    .withRestUrl('http://' + GlobalVariable.URL + ":" + GlobalVariable.Port + '/api/items/upload')
    .withHttpHeaders([
        new TestObjectProperty('Content-Type', ConditionType.EQUALS, 'multipart/form-data')])
    .withMultipartFormDataBodyContent([
        new FormDataBodyParameter('uploadedFile', "Data Files/ImportData.xlsx", 'File')])
    .build()
def response = WS.sendRequest(requestObject)
WS.verifyResponseStatusCode(response, 201)
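One way to avoid editing the Excel file by hand is to write the freshly generated ItemID into the first column of the data file right before building the upload request. Below is a minimal sketch using Apache POI (bundled with Katalon Studio); the workbook path, sheet index, cell positions, and the keyword path kw.dynamicId.getUniqueName are assumptions to adjust to your project:
import org.apache.poi.ss.usermodel.Workbook
import org.apache.poi.xssf.usermodel.XSSFWorkbook

// Path, sheet index and cell positions are assumptions; adjust them to the layout of ImportData.xlsx
String excelPath = 'Data Files/ImportData.xlsx'

// Generate the random ItemID via the custom keyword (assumed to live in the kw.dynamicId class)
String newItemId = CustomKeywords.'kw.dynamicId.getUniqueName'()

// Open the workbook, write the ID into the first data row of the ItemID column, then save it back
Workbook workbook = new FileInputStream(excelPath).withCloseable { fis -> new XSSFWorkbook(fis) }
def sheet = workbook.getSheetAt(0)
def row = sheet.getRow(1) ?: sheet.createRow(1)   // row 0 holds the header
def cell = row.getCell(0) ?: row.createCell(0)    // column 0 is ItemID
cell.setCellValue(newItemId)
new FileOutputStream(excelPath).withCloseable { fos -> workbook.write(fos) }
workbook.close()
With something like this at the top of the test case, each execution uploads a workbook whose ItemID is unique, so the spreadsheet itself never needs to be updated manually.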

Related

In Flutter, how can I combine data into a string very quickly?

I am gathering accelerometer data from my phone using the sensors package, adding that data to a List<AccelerometerEvent>, and then combining that data into a (csv) String so I can use file.writeAsString() to save this data as a csv file. The problem I am having is that it takes too long to combine the data into a string.
For example:
List length : 28645
Milliseconds to combine into csv string: 113580
Code:
for (AccelerometerEvent event in history) {
  dataString = dataString + '${event.timestamp},${event.x},${event.y},${event.z}\n';
}
What would be a more efficient way to do this?
Should I even combine the data into a string, or is there a better way to save this data to a file?
Thanks
Create a File object, write the first line with the column names, and then write each event as its own row (ending in \n).
See FileMode.append: it appends new strings to the file instead of replacing the existing contents.
File file = File('events.csv');
file.writeAsStringSync('TIMESTAMP, X, Y, Z\n', mode: FileMode.append);
for (AccelerometerEvent event in history) {
  final x = event.x;
  final y = event.y;
  final z = event.z;
  final timestamp = event.timestamp;
  String data = '$timestamp, $x, $y, $z';
  file.writeAsStringSync('$data\n', mode: FileMode.append);
}

Store generated dynamic unique ID and pass it to the next test case

I have a Groovy keyword that generates a dynamic unique ID for test data purposes.
package kw

import java.text.SimpleDateFormat

import com.kms.katalon.core.annotation.Keyword

class dynamicId {
    // Time stamp
    String timeStamp() {
        return new SimpleDateFormat('ddMMyyyyhhmmss').format(new Date())
    }

    // Generate a random number between min and max (inclusive)
    Integer getRandomNumber(int min, int max) {
        return ((Math.floor(Math.random() * ((max - min) + 1))) as int) + min
    }

    /**
     * Generate a unique key and return the value to the caller
     */
    @Keyword
    String getUniqueId() {
        String prodName = Integer.toString(getRandomNumber(1, 99)) + timeStamp()
        return prodName
    }
}
Then I have a couple of API test cases as below:
Test case 1:
POSTs test data by calling the keyword. This test case works well;
the dynamic unique ID is posted and stored in the database.
Partial test case:
//test data using dynamic Id
NewId = CustomKeywords.'kw.dynamicId.getUniqueId'()
println('....DO' + NewId)
GlobalVariable.DynamicId = NewId
//test data to simulate Ingest Service sending a Dispense Order to the Dispense Order Service
def incomingDOInfo = '{"Operation":"Add","Msg":{"id":"' + GlobalVariable.DynamicId + '"}}'
Now, test case 2 serves as a verification test case,
where I need to verify that the dynamic unique ID can be retrieved via a GET API (GET the data back by ID; this ID should match the one that was POSTed).
How do I store the dynamic unique ID once it is generated in test case 1?
I have the "println('....DO' + NewId)" in Test Case 1, but I have no idea how to use it and pass it to test case 2.
Which method should I use to get back the generated dynamic unique ID?
Update: I modified Test Case 2 per the suggestion and it works well.
def dispenseOrderId = GlobalVariable.DynamicId

'Check data'
getDispenseOrder(dispenseOrderId)

def getDispenseOrder(def dispenseOrderId) {
    def response = WS.sendRequestAndVerify(findTestObject('Object Repository/Web Service Request/ApiDispenseorderByDispenseOrderIdGet',
        [('dispenseOrderId') : dispenseOrderId, ('SiteHostName') : GlobalVariable.SiteHostName, ('SitePort') : GlobalVariable.SitePort]))
    println(response.statusCode)
    WS.verifyResponseStatusCode(response, 200)
    println(response.responseText)

    //convert to json format and verify the result
    def dojson = new JsonSlurper().parseText(new String(response.responseText))
    println('response text: \n' + JsonOutput.prettyPrint(JsonOutput.toJson(dojson)))
    assertThat(dojson.dispenseOrderId).isEqualTo(dispenseOrderId)
    assertThat(dojson.state).isEqualTo("NEW")
}
====================
Update: I tried suggestion #2 and it works.
TC2:
//retrieve the dynamic ID generated in the previous test case
def file = new File("C:/DynamicId.txt")
//Modify this to match the test data from test case "IncomingDOFromIngest"
def dispenseOrderId = file.text

'Check posted DO data from DO service'
getDispenseOrder(dispenseOrderId)

def getDispenseOrder(def dispenseOrderId) {
    def response = WS.sendRequestAndVerify(findTestObject('Object Repository/Web Service Request/ApiDispenseorderByDispenseOrderIdGet',
        [('dispenseOrderId') : dispenseOrderId, ('SiteHostName') : GlobalVariable.SiteHostName, ('SitePort') : GlobalVariable.SitePort]))
    println(response.statusCode)
    WS.verifyResponseStatusCode(response, 200)
    println(response.responseText)
}
There are multiple ways I can think of to do that.
1. Store the value of the dynamic ID in a GlobalVariable
If you are running Test Case 1 (TC1) and TC2 in a test suite, you can use a global variable to pass the value between them.
You are already doing this in TC1:
GlobalVariable.DynamicId = NewId
Now, this will only work if TC1 and TC2 are running as a part of the same test suite. That is because GlobalVariables are reset to default on the teardown of the test suite or the teardown of a test case when a single test case is run.
Let us say you retrieved the GET response and put it in a response variable.
assert response.equals(GlobalVariable.DynamicId)
2. Store the value of the dynamic ID on the filesystem
This method will work even if you run the test cases separately (i.e. not in a test suite).
You can use the filesystem to store the ID value in a file permanently. There are various Groovy methods to help you with that.
Here's an example of how to store the ID in a text file c:/path-to/variable.txt:
def file = new File("c:/path-to/variable.txt")
file.newWriter().withWriter { it << NewId }
println file.text
TC2 then needs this assertion (adjust it according to your needs):
def file = new File("c:/path-to/variable.txt")
assert response.equals(file.text)
Make sure you defined file in TC2, as well.
3. Return the ID value at the end of TC1 and use it as an input to TC2
This also presupposes that TC1 and TC2 are in the same test suite. You return the value of the ID with
return NewId
and then use it as an input parameter for TC2, as sketched below.
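One way to consume that return value is with the callTestCase built-in keyword from a calling test case. A rough sketch (not from the original answer); the test case paths and the dispenseOrderId variable name are assumptions:
import static com.kms.katalon.core.testcase.TestCaseFactory.findTestCase

import com.kms.katalon.core.model.FailureHandling
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

// TC1 ends with "return NewId", so callTestCase hands the generated ID back to the caller
def newId = WebUI.callTestCase(findTestCase('Test Cases/TC1'), [:], FailureHandling.STOP_ON_FAILURE)

// Pass the ID into TC2 through an assumed test case variable named 'dispenseOrderId'
WebUI.callTestCase(findTestCase('Test Cases/TC2'), ['dispenseOrderId' : newId], FailureHandling.STOP_ON_FAILURE)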
4. Use test listeners
This is essentially the same as the first solution; you just use test listeners to create a temporary holding variable that stays active for the duration of the test suite run.
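For instance, a test listener along these lines (class and method names are placeholders) could populate the holding GlobalVariable once per suite run, assuming DynamicId is defined in your execution profile:
import com.kms.katalon.core.annotation.BeforeTestSuite
import com.kms.katalon.core.context.TestSuiteContext

import internal.GlobalVariable

class DynamicIdSuiteListener {
    /**
     * Runs once before the test suite starts; every test case in the suite
     * then reads the same GlobalVariable.DynamicId value.
     */
    @BeforeTestSuite
    def generateDynamicId(TestSuiteContext testSuiteContext) {
        GlobalVariable.DynamicId = new kw.dynamicId().getUniqueId()
    }
}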

How to export a CSV file to a BigQuery table using Java Dataflow?

I want to read a CSV file from a Cloud Storage bucket and write it to a BigQuery table with columns, using Dataflow in Java. How can I apply the CSV file's headers as the column names while writing to BigQuery?
There are two issues to solve here:
1. Skipping the header when reading the data, and
2. Using the header to correctly populate the BigQuery table columns.
For (1), this is, as of June 2019, not implemented natively, though you could try the options listed at "Skipping header rows - is it possible with Cloud DataFlow?". For (2), the easiest approach would be to read the first line of your CSV in your main program, and pass the list of column names in the constructor to a DoFn that converts CSV lines into TableRow objects ready to write to BigQuery.
Your final program would look something like this:
public void CsvToBigquery(String csvInputPattern, String bigqueryTable) {
    final String[] columns = readAndSplitFirstLineOfFirstFile(csvInputPattern);
    Pipeline p = Pipeline.create(...);
    p
        .apply(TextIO.read().from(csvInputPattern))
        .apply(Filter.by(new MatchIfNonHeader()))
        .apply(ParDo.of(new DoFn<String, TableRow>() {
            ... // use columns here to turn each CSV line into a TableRow
        }))
        .apply(BigQueryIO.writeTableRows().to(bigqueryTable)...);
}
I've done a similar task, using the Apache Commons CSV library in a ParDo function to extract the data from the CSV files and then converting it into TableRow objects for BQ.
// inside the DoFn's processElement: c.element() is the contents of one CSV file
String fileData = c.element();
BufferedReader fileReader = new BufferedReader(new InputStreamReader(
        new ByteArrayInputStream(fileData.getBytes("UTF-8")), "UTF-8"));
CSVParser csvParser = new CSVParser(fileReader,
        CSVFormat.DEFAULT.withFirstRecordAsHeader().withIgnoreHeaderCase().withTrim());
Iterable<CSVRecord> csvRecords = csvParser.getRecords();
for (CSVRecord csvRecord : csvRecords) {
    // helper (not shown) that maps the record's columns onto a TableRow with BQ-compatible types
    TableRow row = checkAndConvertIntoBqDataType(csvRecord.toMap());
    c.output(row);
}

Flink: join file with kafka stream

I have a problem I can't really figure out.
So I have a kafka stream that contains some data like this:
{"adId":"9001", "eventAction":"start", "eventType":"track", "eventValue":"", "timestamp":"1498118549550"}
And I want to replace 'adId' with another value 'bookingId'.
This value is located in a csv file, but I can't really figure out how to get it working.
Here is my mapping csv file:
9001;8
9002;10
So my output would ideally be something like
{"bookingId":"8", "eventAction":"start", "eventType":"track", "eventValue":"", "timestamp":"1498118549550"}
This file can get refreshed every hour at least once, so it should pick up changes to it.
I currently have this code which doesn't work for me:
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(30000); // create a checkpoint every 30 seconds
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);

DataStream<String> adToBookingMapping = env.readTextFile(parameters.get("adToBookingMapping"));
DataStream<Tuple2<Integer, Integer>> input = adToBookingMapping.flatMap(new Tokenizer());

//Kafka Consumer
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", parameters.get("bootstrap.servers"));
properties.setProperty("group.id", parameters.get("group.id"));
FlinkKafkaConsumer010<ObjectNode> consumer = new FlinkKafkaConsumer010<>(parameters.get("inbound_topic"), new JSONDeserializationSchema(), properties);
consumer.setStartFromGroupOffsets();
consumer.setCommitOffsetsOnCheckpoints(true);

DataStream<ObjectNode> logs = env.addSource(consumer);
DataStream<Tuple4<Integer, String, Integer, Float>> parsed = logs.flatMap(new Parser());

// output -> bookingId, action, impressions, sum
DataStream<Tuple4<Integer, String, Integer, Float>> joined = runWindowJoin(parsed, input, 3);

public static DataStream<Tuple4<Integer, String, Integer, Float>> runWindowJoin(
        DataStream<Tuple4<Integer, String, Integer, Float>> parsed,
        DataStream<Tuple2<Integer, Integer>> input, long windowSize) {
    return parsed.join(input)
            .where(new ParsedKey())
            .equalTo(new InputKey())
            .window(TumblingProcessingTimeWindows.of(Time.of(windowSize, TimeUnit.SECONDS)))
            //.window(TumblingEventTimeWindows.of(Time.milliseconds(30000)))
            .apply(new JoinFunction<Tuple4<Integer, String, Integer, Float>, Tuple2<Integer, Integer>, Tuple4<Integer, String, Integer, Float>>() {
                private static final long serialVersionUID = 4874139139788915879L;

                @Override
                public Tuple4<Integer, String, Integer, Float> join(
                        Tuple4<Integer, String, Integer, Float> first,
                        Tuple2<Integer, Integer> second) {
                    return new Tuple4<Integer, String, Integer, Float>(second.f1, first.f1, first.f2, first.f3);
                }
            });
}
The code only runs once and then stops, so it doesn't convert new entries in kafka using the csv file. Any ideas on how I could process the stream from Kafka with the latest values from my csv file?
Kind regards,
darkownage
Your goal appears to be to join streaming data with a slow-changing catalog (i.e. a side input). I don't think the join operation is useful here because it doesn't store the catalog entries across windows. Also, the text file is a bounded input whose lines are read only once.
Consider using connect to create a connected stream, and store the catalog data as managed state to perform lookups against. The operator's parallelism would need to be 1.
You may find a better solution by researching 'side inputs', looking at the solutions that people use today. See FLIP-17 and Dean Wampler's talk at Flink Forward.

Change date column to integer

I have a large csv file as below:
DATE status code value value2
2014-12-13 Shipped 105732491-20091002165230 0.000803398 0.702892835
2014-12-14 Shipped 105732491-20091002165231 0.012925206 1.93748834
2014-12-15 Shipped 105732491-20091002165232 0.000191278 0.004772389
2014-12-16 Shipped 105732491-20091002165233 0.007493046 0.44883348
2014-12-17 Shipped 105732491-20091002165234 0.022015049 3.081006137
2014-12-18 Shipped 105732491-20091002165235 0.001894693 0.227268466
2014-12-19 Shipped 105732491-20091002165236 0.000312871 0.003113062
2014-12-20 Shipped 105732491-20091002165237 0.001754068 0.105016053
2014-12-21 Shipped 105732491-20091002165238 0.009773315 0.585910214
:
:
What I need to do is remove the header and change the date format to an integer yyyyMMdd (e.g. 20141217).
I am using opencsv to read and write the file.
Is there a way I can change all the dates at once without parsing them one by one?
Below is my code to remove the header and create a new file:
void formatCsvFile(String fileToChange) throws Exception {
    CSVReader reader = new CSVReader(new FileReader(new File(fileToChange)), CSVParser.DEFAULT_SEPARATOR, CSVParser.NULL_CHARACTER, CSVParser.NULL_CHARACTER, 1)
    info "Read all rows at once"
    List<String[]> allRows = reader.readAll();

    CSVWriter writer = new CSVWriter(new FileWriter(fileToChange), CSVWriter.DEFAULT_SEPARATOR, CSVWriter.NO_QUOTE_CHARACTER)
    info "Write all rows at once"
    writer.writeAll(allRows)
    writer.close()
}
Please can someone help?
Thanks
You don't need to parse the dates, but you do need to process each line in the file and convert the data you want to change on that line. Java/Groovy doesn't have anything like awk, where you can work with file data as columns (for example, the first 10 "columns", usually characters, of every line in a file). Java/Groovy only deals with "rows" of data in a file, not "columns".
You could try something like this (in Groovy):
reader.eachLine { String theLine ->
    int idx = theLine.indexOf(' ')
    String oldDate = theLine.substring(0, idx)
    String newDate = oldDate.replaceAll('-', '')
    String newLine = newDate + theLine.substring(idx)
    writer.writeLine(newLine)
}
Edit:
If your CSVReader class is not derived from File, then you can't use Groovy's eachLine method on it. And if the CSVReader class's readAll() method really returns a List of String arrays, then the above code could change to this:
allRows.each { String[] theLine ->
    String newDate = theLine[0].replaceAll('-', '')
    writer.writeNext(([newDate] + theLine[1..-1].toList()) as String[])
}
Ignore the first line (the header):
List<String[]> allRows = reader.readAll()[1..-1];
and replace the '-' in the dates by editing the first column of each row:
allRows = allRows.collect { row ->
    ([row[0].replaceAll('-', '')] + row[1..-1].toList()) as String[] // the converted date plus the rest of the row
}
I don't know what you mean by "all dates at once"; as far as I can tell, they can only be converted by iterating over the rows.
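Putting those pieces together, here is a minimal sketch of the whole formatCsvFile step with opencsv, assuming the date is always the first column and reusing the separator/quote settings from the original code (the import package name depends on your opencsv version):
import au.com.bytecode.opencsv.CSVReader // newer opencsv versions use the com.opencsv package
import au.com.bytecode.opencsv.CSVWriter

void formatCsvFile(String fileToChange) throws Exception {
    // Read every row; drop(1) removes the header (equivalent to the skipLines=1 argument used above)
    CSVReader reader = new CSVReader(new FileReader(new File(fileToChange)))
    List<String[]> dataRows = reader.readAll().drop(1)
    reader.close()

    // Rewrite yyyy-MM-dd as yyyyMMdd in the first column of every row
    dataRows.each { String[] row ->
        row[0] = row[0].replaceAll('-', '')
    }

    // Write the converted rows back without a header and without quoting
    CSVWriter writer = new CSVWriter(new FileWriter(fileToChange), CSVWriter.DEFAULT_SEPARATOR, CSVWriter.NO_QUOTE_CHARACTER)
    writer.writeAll(dataRows)
    writer.close()
}
The dates are still handled row by row, but only as a simple string replacement, with no date parsing involved.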