Our test automation needs to interact with Kafka, and we are looking at how we can achieve this with Karate.
We have a Java class that reads from Kafka and puts records into an internal list. We then ask for these records from Karate, filter out all messages from background traffic, and return the first message that matches our filter.
So our consumer looks like this (simplified):
// consume.js
function(bootstrapServers, topic, filter, timeout, interval) {
  var KafkaLib = Java.type('kafka.KafkaLib')
  var records = KafkaLib.getRecords(bootstrapServers, topic)
  for (var record_id in records) {
    // TODO here we want to convert the record to JSON (and later XML for XML records) so that
    // we can access it as a 'native' Karate data type and use notation like: cat.cat.scores.score[1]
    var record = records[record_id]
    if (filter(record)) {
      karate.log("Record matched: " + record)
      return record
    }
  }
  throw "No records found matching the filter: " + filter
}
Records can be JSON, XML, or plain text, but we are looking at the JSON case for now.
In this case, given that there is a message like this in Kafka:
{"correlationId":"b3e6bbc7-e5a6-4b2a-a8f9-a0ddf435de67","text":"Hello world"}
This is loaded as a string into the record variable above.
We want to convert it to JSON so that a filter like this would work:
* def uuid = java.util.UUID.randomUUID() + ''
# This is what we are publishing to kafka
* def payload = ({ correlationId: uuid, text: "Hello world" })
* def filter = function(m) { return m.correlationId == uuid }
Is there a way to convert a string to a native Karate variable in JavaScript? I might have missed it while looking at https://intuit.github.io/karate/#the-karate-object. By the way, var jsonRecord = karate.toJson(record) did not work and jsonRecord.uuid was undefined.
Edit: I have made an example of what I am trying to achieve here:
https://github.com/KostasKgr/karate-issues/blob/java_json_interop/src/test/java/examples/consumption/consumption.feature
Many thanks
Some time ago I put together something that could be used to test Kafka from within Karate. Please see if https://github.com/Sdaas/karate-kafka helps. Happy to enhance / improve it if it helps you.
Can you try,
* json payload = { correlationId: uuid, text: "Hello world" }
Ref: Type Conversion
For type conversion within JavaScript, ideally karate.toMap(object) or karate.toJson(object) should work.
Rather than wrapping everything up into one JS function, I would suggest keeping the record-fetching part outside the JS and letting Karate cast it:
* json records = Java.type('kafka.KafkaLib').getRecords(bootstrapServers, topic)
* consume(records, filter, timeout, interval)
As mentioned in the comments on another answer, there is now an enhancement ticket on Karate to achieve what was discussed in this thread, see https://github.com/intuit/karate/issues/1202
Until that is in place, I managed to get most of what I wanted concerning JSON by parsing the string to JSON in Java and returning that to Karate.
Map<String,Object> result = new ObjectMapper().readValue(record, HashMap.class);
I am not sure whether the same workaround is possible for XML.
You can see the workaround in action here:
https://github.com/KostasKgr/karate-issues/blob/java_json_interop_v2/src/test/java/examples/consumption/consumption.feature
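For reference, a minimal sketch of that kind of glue method (the method name here is just illustrative; it uses Jackson's ObjectMapper, and Karate treats the returned Map as JSON):

```java
package kafka;

import java.util.HashMap;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;

public class KafkaLib {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Parse a raw record string into a Map so that Karate sees it as JSON
    // and a filter like m.correlationId == uuid works on it directly.
    public static Map<String, Object> toMap(String record) throws Exception {
        return MAPPER.readValue(record, HashMap.class);
    }
}
```

From a feature file this could then be invoked as Java.type('kafka.KafkaLib').toMap(record) before the filter is applied.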
Because of Karate's support for Java inter-op you can easily write some "glue" code to connect your existing Kafka systems to Karate test-suites, see the first link below.
Here are a few references:
how to use Java inter-op to listen and wait for events: https://twitter.com/KarateDSL/status/1417023536082812935
the Karate ActiveMQ example: https://github.com/intuit/karate/tree/master/karate-netty#consumer-provider-example
Walmart Labs blog post (Kafka specific): https://medium.com/walmartglobaltech/kafka-automation-using-karate-6a129cfdc210
Karate Kafka (3rd party project / example): https://github.com/Sdaas/karate-kafka
Please look at the code below from the controller (comments added), which uses RestTemplate:
#GetMapping("/{courseid}")
public Course getCourseDetails(#PathVariable Long courseid) {
// Get Course info (ID, Name, Description) from pre-populated Array List
CourseInfo courseInfo = getCourseInfo(courseid);
// Get Price info of a course from another microservice using RESTTemplate
Price price = restTemplate.getForObject("http://localhost:8002/price/"+courseid, Price.class);
// Get enrollment info of a course from another microservice using RESTTemplate
Enrollment enrollment = restTemplate.getForObject("http://localhost:8003/enrollment/"+courseid, Enrollment.class);
//Consolidate everything in to Course object and send it as response
return new Course(courseInfo.getCourseID(), courseInfo.getCourseName(), courseInfo.getCourseDesc(), price.getDiscountedPrice(),
enrollment.getEnrollmentOpen());
}
Now I am trying to achieve the same using reactive programming. I am now using WebClient and Mono from WebFlux, but I am confused about how to combine the results. Take a look at the code below (just using Mono everywhere; the rest of the code stays the same):
#GetMapping("/{courseid}")
public Mono<Course> getCourseDetails(#PathVariable Long courseid) {
// Get Course info (ID, Name, Description) from pre-populated Array List
CourseInfo courseInfo = getCourseInfo(courseid);
// Get Price info of a course from another microservice using RESTTemplate
Mono<Price> price = webClient.get().uri("http://localhost:8002/price/{courseid}/",courseid).retrieve().bodyToMono(Price.class);
// Get enrollment info of a course from another microservice using RESTTemplate
Mono<Enrollment> inventory = webClient.get().uri("http://localhost:8003/enrollment/{courseid}/",courseid).retrieve().bodyToMono(Enrollment.class);
//Question : How do we Consolidate everything and form a Mono<Course> object and send it as response?
}
Question 1: How do we consolidate everything, form a Mono<Course> object, and send it as the response?
Question 2: Does the statement CourseInfo courseInfo = getCourseInfo(courseid); cause a blocking operation?
Thanks!
Answering Question 1: How do we consolidate everything, form a Mono<Course> object, and send it as the response?
Mono.zip(..) is what you need to combine the two results (see the marble diagram in the Reactor docs).
Note that zip will result in an empty Mono if either of the zipped Monos is empty! Use switchIfEmpty/defaultIfEmpty to protect against that case.
Thus the code looks like:
#GetMapping("/{courseid}")
public Mono<Course> getCourseDetails(#PathVariable Long courseid) {
CourseInfo courseInfo = getCourseInfo(courseid);
Mono<Price> priceMono = webClient.get().uri("http://localhost:8002/price/{courseid}/",courseid).retrieve().bodyToMono(Price.class);
Mono<Enrollment> enrollmentMono = webClient.get().uri("http://localhost:8003/enrollment/{courseid}/",courseid).retrieve().bodyToMono(Enrollment.class);
return Mono.zip(priceMono, enrollmentMono).map(t -> new Course(courseInfo.getCourseID(), courseInfo.getCourseName(), courseInfo.getCourseDesc(), t.getT1().getDiscountedPrice(),
t.getT2().getEnrollmentOpen()));
}
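If you want to guard against the empty case mentioned above, a minimal sketch with defaultIfEmpty (assuming Price and Enrollment have no-arg constructors that can act as placeholders) could look like:

```java
// Fall back to placeholder objects so zip still emits a value
// even if one of the remote services returns an empty body.
Mono<Price> priceMono = webClient.get()
        .uri("http://localhost:8002/price/{courseid}/", courseid)
        .retrieve()
        .bodyToMono(Price.class)
        .defaultIfEmpty(new Price());

Mono<Enrollment> enrollmentMono = webClient.get()
        .uri("http://localhost:8003/enrollment/{courseid}/", courseid)
        .retrieve()
        .bodyToMono(Enrollment.class)
        .defaultIfEmpty(new Enrollment());
```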
Now answering Question 2: Does the statement CourseInfo courseInfo = getCourseInfo(courseid); cause a blocking operation?
Since you mentioned that the course info (ID, Name, Description) comes from a pre-populated ArrayList, if it is just an in-memory list containing the course information, then it is not blocking.
But (as @mslowiak also mentioned), if getCourseInfo contains logic that queries a database, ensure that you are not using a blocking JDBC driver. If you are, then there is no point in using WebFlux and Reactor; use Spring R2DBC in that case.
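If getCourseInfo ever does become blocking (for example, a JDBC call you cannot replace), a common Reactor pattern, assuming Reactor 3.3+ where Schedulers.boundedElastic() is available, is to push it onto a scheduler meant for blocking work:

```java
// Schedulers comes from reactor.core.scheduler.Schedulers.
// Wrap the potentially blocking call so it runs off the event loop.
Mono<CourseInfo> courseInfoMono = Mono
        .fromCallable(() -> getCourseInfo(courseid))
        .subscribeOn(Schedulers.boundedElastic());

// It can then be combined with the other sources:
// Mono.zip(courseInfoMono, priceMono, enrollmentMono).map(...)
```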
restTemplate.getForObject returns a plain object, in your case Price or Enrollment. To convert them to a Mono you can simply use Mono.just(object); however, the better solution is to switch to WebClient, which is the default HTTP client for Spring reactive applications.
For getCourseInfo, it depends on the logic behind that method. If there is a JDBC connection behind it, it is certainly blocking.
To build the final Mono<Course> response you should look at the zip operator, which will help you with that.
For ex:
Mono<Course> courseMono = Mono.zip(price, enrollment)
        .map(tuple -> new Course(courseInfo, tuple.getT1(), tuple.getT2()));
I'm new to Vert.x and I am trying to implement a small REST API that stores its data in JSON files on the local file system.
So far I have managed to implement the REST API, since Vert.x is very well documented on that part.
What I'm currently looking for are examples of how to build data access objects in Vert.x. How can I implement a verticle that performs CRUD operations on a text file containing JSON?
Can you provide me any examples? Any hints?
UPDATE 1:
By CRUD operations on a file I mean the following. Imagine there is a REST resource called Records exposed on the path /api/v1/user/:userid/records/.
In my verticle that starts my HTTP server I have the following routes.
router.get('/api/user/:userid/records').handler(this.&handleGetRecords)
router.post('/api/user/:userid/records').handler(this.&handleNewRecord)
The handler methods handleGetRecords and handleNewRecord are sending a message using the Vertx event bus.
request.bodyHandler({ b ->
    def userid = request.getParam('userid')
    logger.info "Reading record for user {}", userid
    vertx.eventBus().send(GET_TIME_ENTRIES.name(), "read time records", [headers: [userId: userid]], { reply ->
        // This handler will be called for every request
        def response = routingContext.response()
        if (reply.succeeded()) {
            response.putHeader("content-type", "text/json")
            // Write to the response and end it
            response.end(reply.result().body())
        } else {
            logger.warn("Reply failed {}", reply.failed())
            response.statusCode = 500
            response.putHeader("content-type", "text/plain")
            response.end('That did not work out well')
        }
    })
})
Then there is another verticle that consumes these messages (GET_TIME_ENTRIES or CREATE_TIME_ENTRY). I think of this consumer verticle as a Data Access Object for Records. This verticle can read the file for the given :userid that contains all of that user's records. The verticle is able to:
add a record
read all records
read a specific record
update a record
delete one or all records
Here is an example of reading all records.
vertx.eventBus().consumer(GET_TIME_ENTRIES.name(), { message ->
    String userId = message.headers().get('userId')
    String absPath = "${this.source}/${userId}.json" as String
    vertx.fileSystem().readFile(absPath, { result ->
        if (result.succeeded()) {
            logger.info("About to read from user file {}", absPath)
            def jsonObject = new JsonObject(result.result().toString())
            message.reply(jsonObject.getJsonArray('records').toString())
        } else {
            logger.warn("User file {} does not exist", absPath)
            message.fail(404, "user ${userId} does not exist")
        }
    })
})
What I am trying to achieve is to read the file as above and deserialise the JSON into POJOs (e.g. a List<Records>). That seems much more convenient than working with Vert.x's JsonObject; I don't want to manipulate the JsonObject instance directly.
First of all, your approach using the EventBus is fine, in my opinion. It may be a bit slower, because the EventBus will serialize/deserialize your objects, but it gives you very good decoupling.
Example of another approach you can see here:
https://github.com/aesteve/vertx-feeds/blob/master/src/main/java/io/vertx/examples/feeds/dao/RedisDAO.java
Note how every method receives a handler as its last argument:
public void getMaxDate(String feedHash, Handler<Date> handler) {
More coupled, but also more efficient.
And for a more classic and straightforward approach, you can see the official examples:
https://github.com/aokolnychyi/vertx-example/blob/master/src/main/java/com/aokolnychyi/vertx/example/dao/MongoDbTodoDaoImpl.java
You can see that here the DAO is pretty much synchronous, but since the handlers are still async, it works fine anyway.
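On the POJO point from the question: Vert.x's JsonObject is backed by Jackson, so each element of a JsonArray can be bound to a class with mapTo. A rough Java sketch, where TimeRecord and the helper class are hypothetical (the POJO's fields are assumed to match the JSON):

```java
import java.util.ArrayList;
import java.util.List;

import io.vertx.core.json.JsonArray;
import io.vertx.core.json.JsonObject;

public class RecordMapper {

    // Turn the raw file contents into POJOs instead of working with JsonObject directly.
    public static List<TimeRecord> toRecords(String fileContents) {
        JsonObject root = new JsonObject(fileContents);
        JsonArray recordsArray = root.getJsonArray("records");
        List<TimeRecord> records = new ArrayList<>();
        for (int i = 0; i < recordsArray.size(); i++) {
            // mapTo uses Jackson under the hood to bind each JSON object to the POJO
            records.add(recordsArray.getJsonObject(i).mapTo(TimeRecord.class));
        }
        return records;
    }
}
```

In the consumer verticle this could be called with result.result().toString() before replying, so the rest of the code only ever sees POJOs.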
I guess the following link will help you out; it is a good example of Vert.x CRUD operations.
Vertx student crud operations using hikari
I need some help please...
I am working with a GWT enabled web application. I am using the gwt-2.3.0 SDK.
I have a class that extends the DataSource class and overrides the transformResponse method:
public class DeathRecordXmlDS extends DataSource {
    protected void transformResponse(DSResponse response, DSRequest request, Object data) {
        super.transformResponse(response, request, data);
    }
}
As I understand it, the transformResponse() method should get control, and at that point I will have access to the data being provided to the client side of my application. I am trying to work with the Object data parameter (the third parameter) that is passed in.
I am expecting an XML formatted string to be passed in. The XML will contain data (a count field) that I need to access and use.
I don't seem to be getting an XML string. Here's what I know...
I do see the XML data being passed to my webapp (the client). I can see this because I inspect the webpage that I am working with and I see the Response data. Here's an example of something that I expect to receive:
XML data from Query:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Collection numRecords="0">
<DeathRecords/>
</Collection>
The above XML is valid (I checked it in a validator). This is a case where no data (no death records) was returned to my application, so the numRecords attribute is set to "0". Of course, if records are returned, numRecords will contain the number of records and I'll get that same number of DeathRecord nodes.
I am not getting the above data (or, I don't know how to work with it) in the transformResponse() method.
Here's what I've done to try to figure this out...
The Object data parameter... it is a JavaScriptObject. I know this because I did a .getClass().getName() on it:
DeathRecordXmlDS::transformResponse() data.getClass().getName(): com.google.gwt.core.client.JavaScriptObject$
Then, to try to work with it, I converted it to a String:
com.google.gwt.core.client.JavaScriptObject dataJS = (com.google.gwt.core.client.JavaScriptObject)data;
System.out.println("DeathRecordXmlDS::transformResponse() data as a JavaScriptObject: "+dataJS.toString());
The contents of 'data' formatted as a String look like:
DeathRecordXmlDS::transformResponse() data as a JavaScriptObject: [XMLDoc <Collection>]
So, it looks like I have something that has to do with my 'Collection' node, but not a String of XML data that I can parse and get to my numRecords attribute.
What do I need to do to gain access to the XML in the transformResponse() method?
Thanks!
I think your data object has already been translated into a JavaScript collection.
Maybe you could use the utility class XMLTools to retrieve your numRecords information:
Integer numRecords = Integer.parseInt(XMLTools.selectString(data, "Collection/@numRecords"));
After working on this for an additional period of time I was able to read the XML data that I am working with. I used the following piece of code:
try {
    JsArray<JavaScriptObject> nodes = ((JavaScriptObject) XMLTools.selectNodes(data, "/Collection/@numRecords")).cast();
    for (int i = 0; i < nodes.length(); i++) {
        com.google.gwt.dom.client.Element element = (com.google.gwt.dom.client.Element) nodes.get(i);
        numRecords = element.getNodeValue();
    }
} catch (Exception e) {
    // If parsing fails, capture the exception
    System.out.println("DeathRecordXmlDS::transformResponse() Not able to parse the XML");
}
I think the first step to solving this was understanding that the parameter 'data' of type Object was really a JavaScriptObject. I learned this by calling .getClass().getName() on it, which helped me understand what I was working with:
System.out.println("DeathRecordXmlDS::transformResponse() data.getClass().getName(): "+data.getClass().getName());
Once I knew it was a JavaScriptObject, I was able to do a more focused Google search for what I was trying to accomplish. I was a little surprised that the XMLTools.selectNodes() function worked the way it did, but the end result is that I was able to read the numRecords attribute.
Thanks for the suggestion!
I am building a REST API and facing this issue: how can a REST API pass very large JSON?
Basically, I want to connect to the database and return the training data. The problem is that the database holds 400,000 records. If I wrap them all into one JSON payload and return it from a GET method, the server throws a heap overflow exception.
What methods can we use to solve this problem?
DBTraining trainingdata = new DBTraining();

@GET
@Produces("application/json")
@Path("/{cat_id}")
public Response getAllDataById(@PathParam("cat_id") String cat_id) {
    List<TrainingData> list = new ArrayList<TrainingData>();
    try {
        list = trainingdata.getAllDataById(cat_id);
        Gson gson = new Gson();
        Type dataListType = new TypeToken<List<TrainingData>>() {
        }.getType();
        String jsonString = gson.toJson(list, dataListType);
        return Response.ok().entity(jsonString)
                .header("Access-Control-Allow-Origin", "*")
                .header("Access-Control-Allow-Methods", "GET")
                .build();
    } catch (SQLException e) {
        logger.warn(e.getMessage());
    }
    return null;
}
The RESTful way of doing this is to create a paginated API. First, add query parameters to set page size, page number, and maximum number of items per page. Use sensible defaults if any of these are not provided or unrealistic values are provided. Second, modify the database query to retrieve only a subset of the data. Convert that to JSON and use that as the payload of your response. Finally, in following HATEOAS principles, provide links to the next page (provided you're not on the last page) and previous page (provided you're not on the first page). For bonus points, provide links to the first page and last page as well.
By designing your endpoint this way, you get very consistent performance characteristics and can handle data sets that continue to grow.
The GitHub API provides a good example of this.
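A minimal sketch of such a paginated endpoint in JAX-RS, building on the code from the question (the page/size query parameters and the paged DAO method getDataPageById are illustrative assumptions, not existing API):

```java
@GET
@Produces("application/json")
@Path("/{cat_id}")
public Response getDataPageById(@PathParam("cat_id") String cat_id,
                                @QueryParam("page") @DefaultValue("1") int page,
                                @QueryParam("size") @DefaultValue("100") int size) {
    // Cap the page size so a single request can never exhaust the heap.
    int pageSize = Math.min(Math.max(size, 1), 1000);
    int offset = (page - 1) * pageSize;
    try {
        // Hypothetical DAO method that pushes LIMIT/OFFSET down to the database query.
        List<TrainingData> pageData = trainingdata.getDataPageById(cat_id, offset, pageSize);
        String json = new Gson().toJson(pageData);
        return Response.ok()
                .entity(json)
                // HATEOAS-style link to the next page (the base path here is illustrative).
                .link("/training/" + cat_id + "?page=" + (page + 1) + "&size=" + pageSize, "next")
                .build();
    } catch (SQLException e) {
        logger.warn(e.getMessage());
        return Response.serverError().build();
    }
}
```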
My suggestion is not to pass the data as JSON but as a file, using multipart/form-data. In your file, each line could be a JSON document representing a data record. Then it is easy to use a FileOutputStream to receive the file, and you can process the file line by line to avoid memory problems.
A Grails example:
if (params.myFile) {
    if (params.myFile instanceof org.springframework.web.multipart.commons.CommonsMultipartFile) {
        def fileName = "/tmp/myReceivedFile.txt"
        new FileOutputStream(fileName).leftShift(params.myFile.getInputStream())
    } else {
        // print or signal error
    }
}
You can use curl to pass your file:
curl -F "myFile=#/mySendigFile.txt" http://acme.com/my-service
More details on a similar solution on https://stackoverflow.com/a/13076550/2476435
HTTP has the notion of chunked encoding, which allows you to send an HTTP response body in smaller pieces so the server does not have to hold the entire response in memory. You need to find out how your server framework supports chunked encoding.
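In JAX-RS, for example, one way to stream the payload instead of materialising one huge string is a StreamingOutput entity. A rough sketch, reusing the hypothetical paged DAO method from the pagination example above so only one batch is in memory at a time:

```java
@GET
@Produces("application/json")
@Path("/stream/{cat_id}")
public Response streamAllDataById(@PathParam("cat_id") String cat_id) {
    Gson gson = new Gson();
    StreamingOutput stream = output -> {
        try (Writer writer = new BufferedWriter(new OutputStreamWriter(output, StandardCharsets.UTF_8))) {
            writer.write("[");
            int page = 0;
            int pageSize = 1000;
            boolean first = true;
            List<TrainingData> batch;
            // Fetch and write one page at a time; the full result set is never held in memory.
            while (!(batch = trainingdata.getDataPageById(cat_id, page++ * pageSize, pageSize)).isEmpty()) {
                for (TrainingData d : batch) {
                    if (!first) {
                        writer.write(",");
                    }
                    writer.write(gson.toJson(d));
                    first = false;
                }
                writer.flush();
            }
            writer.write("]");
        } catch (SQLException e) {
            throw new WebApplicationException(e);
        }
    };
    return Response.ok(stream).build();
}
```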
I am confused about how to combine the JSON libraries in Dispatch and Lift to parse my JSON response.
I am apparently a Scala newbie.
I have written this code:
val status = {
  val httpPackage = http(Status(screenName).timeline)
  val json1 = httpPackage
  json1
}
Now I am stuck on how to parse the Twitter JSON response.
I've tried to use the JsonParser:
val status1 = JsonParser.parse(status)
but got this error:
<console>:38: error: overloaded method value parse with alternatives:
(s: java.io.Reader)net.liftweb.json.JsonAST.JValue<and>
(s: String)net.liftweb.json.JsonAST.JValue
cannot be applied to (http.HttpPackage[List[dispatch.json.JsObject]])
val status1 = JsonParser.parse(status1)
I am unsure and can't figure out what to do next in order to iterate through the data, extract it, and render it on my web page.
Here's another way to use Dispatch HTTP with Lift-JSON. This example fetches a JSON document from Google, parses all "titles" from it, and prints them.
import dispatch._
import net.liftweb.json.JsonParser
import net.liftweb.json.JsonAST._
object App extends Application {
  val http = new Http
  val req = :/("www.google.com") / "base" / "feeds" / "snippets" <<? Map("bq" -> "scala", "alt" -> "json")
  val json = http(req >- JsonParser.parse)
  val titles = for {
    JField("title", title) <- json
    JField("$t", JString(name)) <- title
  } yield name
  titles.foreach(println)
}
The error you are getting back is letting you know that the type of status is neither a String nor a java.io.Reader. Instead, what you have is a list of already-parsed JSON responses, since Dispatch has already done the hard work of parsing the response into JSON. Dispatch has a very compact syntax, which is nice once you are used to it, but it can be quite obtuse initially, especially when you are first approaching Scala. Often you'll find that you have to dive into the source code of the library to see what is going on. For instance, if you look into the dispatch-twitter source code, you can see that the timeline method actually performs a JSON extraction on the response:
def timeline = this ># (list ! obj)
What this method defines is a Dispatch handler that converts the Response object into a JsonResponse object and then parses the response into a list of JSON objects. That's quite a bit going on in one line. You can see the definition of the ># operand in the JsHttp.scala file in the http+json Dispatch module. Dispatch defines lots of handlers that convert the response behind the scenes into different types of data, which you can then pass to a block to work with. Check out the StdOut Walkthrough and Common Tasks pages for some of the handlers, but you'll need to dive into the various modules' source code or Scaladoc to see what else is there.
All of this is a long way to get to what you want, which I believe is essentially this:
val statuses = http(Status(screenName).timeline)
statuses.map(Status.text).foreach(println _)
Only instead of doing a println, you can push it out to your web page in whatever way you want. Check out the Status object for some of the various pre-built extractors to pull information out of the status response.