I was reading the docs about schemas in Apache Beam, but I cannot understand what their purpose is, how and why to use them, or in which cases I would need them. What is the difference between using schemas and using a class that implements the Serializable interface?
The docs have an example:
@DefaultSchema(JavaFieldSchema.class)
public class TransactionPojo {
public String bank;
public double purchaseAmount;
}
PCollection<TransactionPojo> transactionPojos = readTransactionsAsPojo();
But it doesn't explain how the readTransactionsAsPojo function is built. I think there is a lot of missing explanation about this.
There are several reasons to use Beam Schemas; some of them are below:
You won't need to specify a Coder for objects that have a schema;
If you have objects with the same schema but represented in different ways (like the JavaBean and POJO in your example), then Beam Schemas allow you to use the same schema PTransforms over the PCollections of these objects (see the sketch after this list);
With schema-aware PCollections it's much easier to write joins, since they require much less boilerplate code;
To use Beam SQL over a PCollection you are required to have a Beam Schema. For example, you can read Avro files whose schema will be automatically converted into a Beam Schema, and then apply a Beam SQL transform over these Avro records.
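For instance, once transactionPojos has a schema (as in the question above), the built-in schema transforms can address fields by name; a minimal sketch, assuming the field names from the TransactionPojo example:
import org.apache.beam.sdk.schemas.transforms.Filter;
import org.apache.beam.sdk.schemas.transforms.Select;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;

// Keep only large purchases, then project out the bank field as Rows.
// The same transforms work for any element type carrying this schema.
PCollection<Row> banks = transactionPojos
    .apply(Filter.<TransactionPojo>create()
        .whereFieldName("purchaseAmount", (Double amount) -> amount > 100.0))
    .apply(Select.fieldNames("bank"));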
Also, I'd recommend watching this talk from Beam Summit 2019 about schema-aware PCollections and Beam SQL.
Still, there is no answer as to how readTransactionsAsPojo() has been implemented:
PCollection<TransactionPojo> transactionPojos = readTransactionsAsPojo();
Keeping the documentation abstract and not having complete code in the repo makes it hard to understand.
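The docs never show it, but one plausible shape for readTransactionsAsPojo() is a helper that reads text lines and maps them to TransactionPojo; the signature, file path and CSV layout below are assumptions for illustration, not from the Beam docs:
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.PCollection;

// A minimal sketch, assuming each line looks like "bank,purchaseAmount".
static PCollection<TransactionPojo> readTransactionsAsPojo(Pipeline pipeline) {
    return pipeline
        .apply("ReadLines", TextIO.read().from("/path/to/transactions.csv"))
        .apply("ParseToPojo", MapElements.via(new SimpleFunction<String, TransactionPojo>() {
            @Override
            public TransactionPojo apply(String line) {
                String[] parts = line.split(",");
                TransactionPojo pojo = new TransactionPojo();
                pojo.bank = parts[0];
                pojo.purchaseAmount = Double.parseDouble(parts[1]);
                return pojo;
            }
        }));
}
Because TransactionPojo is annotated with @DefaultSchema(JavaFieldSchema.class), Beam infers a SchemaCoder for the output, so no explicit Coder is needed.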
A sample that worked for me:
package com.beam.test;
import com.beam.test.schema.Address;
import com.beam.test.schema.Purchase;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import java.util.ArrayList;
import java.util.List;
public class SchemaExample {
public static void main(String[] args) {
PipelineOptions options= PipelineOptionsFactory.create();
Pipeline pipeline=Pipeline.create(options);
pipeline.apply("Create input:", TextIO.read().from("path\to\input\file.txt"))
.apply(ParDo.of(new ConvertToPurchase())).
apply(ParDo.of(new DoFn<Purchase, Void>() {
@ProcessElement
public void processElement(@Element Purchase purchase){
System.out.println(purchase.getUserId()+":"+purchase.getAddress().getHouseName());
}
}));
pipeline.run().waitUntilFinish();
}
static class ConvertToPurchase extends DoFn<String,Purchase>{
@ProcessElement
public void processElement(@Element String input,OutputReceiver<Purchase> outputReceiver){
String[] inputArr=input.split(",");
Purchase purchase=new Purchase(inputArr[0],new Address(inputArr[1],inputArr[2]));
outputReceiver.output(purchase);
}
}
}
package com.beam.test.schema;
import org.apache.beam.sdk.schemas.JavaBeanSchema;
import org.apache.beam.sdk.schemas.annotations.DefaultSchema;
import org.apache.beam.sdk.schemas.annotations.SchemaCreate;
@DefaultSchema(JavaBeanSchema.class)
public class Purchase {
private String userId;
private Address address;
public String getUserId(){
return userId;
}
public Address getAddress(){
return address;
}
@SchemaCreate
public Purchase(String userId, Address address){
this.userId=userId;
this.address=address;
}
}
package com.beam.test.schema;
import org.apache.beam.sdk.schemas.JavaBeanSchema;
import org.apache.beam.sdk.schemas.annotations.DefaultSchema;
import org.apache.beam.sdk.schemas.annotations.SchemaCreate;
@DefaultSchema(JavaBeanSchema.class)
public class Address {
private String houseName;
private String postalCode;
public String getHouseName(){
return houseName;
}
public String getPostalCode(){
return postalCode;
}
@SchemaCreate
public Address(String houseName,String postalCode){
this.houseName=houseName;
this.postalCode=postalCode;
}
}
My test file contains data in the following format:
user1,abc,1234
user2,def,3456
I tried to implement a search query in my spring-boot service which utilizes the similarity(text, text) function of postgres.
I got the similarity working in the Postgres console, and managed to get it over to my @Repository interface as a native query.
It seems to construct the query correctly, but every time I try to execute the query I get
ERROR: function similarity(text, character varying) does not exist
When I try to create the extension again, I get an exception saying that the extension is already installed.
What am I missing? Do I need some Spring/JPA magic Object to enable this?
Example entity:
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import lombok.Data;
@Entity
@Table(name = "example")
@Data
public class ExampleEntity {
@Id
private String id;
private String textField;
}
Example repository:
import java.util.List;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;
@Repository
public interface ExampleRepository extends CrudRepository<ExampleEntity, String> {
@Query(nativeQuery = true,
value = "SELECT * FROM example ORDER BY similarity(text_field, :searchString)")
List<ExampleEntity> findBySimilarity(@Param("searchString") String searchString);
@Query(nativeQuery = true, value = "CREATE EXTENSION pg_trgm")
void createSimilarityExtension();
}
Test code (excluding setup, as it is rather complex):
public void test() {
ExampleEntity r1 = dbUtils.persistNewRandomEntity();
ExampleEntity r2 = dbUtils.persistNewRandomEntity();
ExampleEntity r3 = dbUtils.persistNewRandomEntity();
try {
exampleRepository.createSimilarityExtension();
} catch (InvalidDataAccessResourceUsageException e) {
// always says that the extension is already setup
}
List<ExampleEntity> bySimilarity = exampleRepository.findBySimilarity(r2.getTextField());
for (ExampleEntity entity : bySimilarity) {
System.out.println(entity);
}
}
It turns out I had created the extension in the wrong schema while trying out whether the extension would work at all.
I then added the extension to my DB-migration script, but it would be skipped if the extension already existed. Therefore my extension was registered for the public schema and did not work in the actual schema my service is using.
So if you have the same problem I had, make sure your extension is created for the correct schema by using:
SET SCHEMA '<your_schema>'; CREATE EXTENSION pg_trgm;
Assume there are two types T1 & T2 and a topic T. Both T1 & T2 must go in topic T (for some reason). What are ways to achieve this? And which one is better?
One way (of many) is to make use of inheritance: we can define a base class, and then subclasses can extend it. In our case we can define a base class TB, and then T1 & T2 can extend TB.
Base class (TB)
package poc.kafka.domain;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import lombok.AllArgsConstructor;
import lombok.NoArgsConstructor;
import lombok.ToString;
import lombok.extern.java.Log;
@ToString
@AllArgsConstructor
@NoArgsConstructor
@Log
public class Animal implements Externalizable {
public String name;
public void whoAmI() {
log.info("I am an Animal");
}
@Override
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
name = (String) in.readObject();
}
@Override
public void writeExternal(ObjectOutput out) throws IOException {
out.writeObject(name);
}
}
Derived class (T1)
package poc.kafka.domain;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import lombok.ToString;
import lombok.extern.java.Log;
@Log
@Setter
@Getter
@AllArgsConstructor
@NoArgsConstructor
@ToString
public class Cat extends Animal implements Externalizable {
private int legs;
public void whoAmI() {
log.info("I am a Cat");
}
@Override
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
super.readExternal(in);
legs = in.readInt();
}
@Override
public void writeExternal(ObjectOutput out) throws IOException {
super.writeExternal(out);
out.writeInt(legs);
}
}
Derived class (T2)
package poc.kafka.domain;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import lombok.ToString;
import lombok.extern.java.Log;
@Log
@Setter
@Getter
@AllArgsConstructor
@NoArgsConstructor
@ToString
public class Dog extends Animal implements Externalizable {
private int legs;
public void whoAmI() {
log.info("I am a Dog");
}
@Override
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
super.readExternal(in);
legs = in.readInt();
}
@Override
public void writeExternal(ObjectOutput out) throws IOException {
super.writeExternal(out);
out.writeInt(legs);
}
}
Deserializer
package poc.kafka.domain.serialization;
import org.apache.commons.lang3.SerializationUtils;
import org.apache.kafka.common.serialization.Deserializer;
import poc.kafka.domain.Animal;
public class AnimalDeserializer implements Deserializer<Animal> {
@Override
public Animal deserialize(String topic, byte[] data) {
return SerializationUtils.deserialize(data);
}
}
Serializer
package poc.kafka.domain.serialization;
import org.apache.commons.lang3.SerializationUtils;
import org.apache.kafka.common.serialization.Serializer;
import poc.kafka.domain.Animal;
public class AnimalSerializer implements Serializer<Animal> {
@Override
public byte[] serialize(String topic, Animal data) {
return SerializationUtils.serialize(data);
}
}
Then we can send T1 & T2 like below:
IntStream.iterate(0, i -> i + 1).limit(10).forEach(i -> {
if (i % 2 == 0)
producer.send(new ProducerRecord<Integer, Animal>("T", i, new Dog(i)));
else
producer.send(new ProducerRecord<Integer, Animal>("T", i, new Cat(i)));
});
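For the snippet above to work, the producer has to be configured with the custom serializer; a minimal sketch of that configuration (the broker address is a placeholder):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.IntegerSerializer;
import poc.kafka.domain.Animal;
import poc.kafka.domain.serialization.AnimalSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, AnimalSerializer.class);

// Both Cat and Dog instances go through the same producer, because they
// are serialized via their common Animal base class.
Producer<Integer, Animal> producer = new KafkaProducer<>(props);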
The simplest way is to use a custom org.apache.kafka.common.serialization.Serializer that is able to handle both types of events. Both event types should inherit from the same base class.
Sample code might look as follows:
public class CustomSerializer<T> implements Serializer<T> {
public void configure(Map<String, ?> configs, boolean isKey) {
// nothing to do
}
public byte[] serialize(String topic, T data) {
// serialization
return null;
}
public void close() {
// nothing to do
}
}
This might not be a direct answer to the question, but rather a proposition to reconsider some aspects, which might solve the original problem.
First of all, despite Kafka's ability to support any data format, for a serializable binary format I would advise using Apache Avro rather than serialized Java objects.
With Avro, you get all the benefits of a compact, binary, language-agnostic data format and a wide set of tools to work with. For example, there are CLI tools to read Kafka topics whose contents are in Avro, but I am not aware of a single one able to deserialize Java objects there.
You can read about Avro itself here
Also some good insights onto why use Avro can be found in this SO question here
Second: your question title mentions event types, but judging by the description it probably means "how to handle different data types via a single Kafka topic". If the difference between events is just the event type - for example, Click, Submit, LogIn, LogOut and so on - then you can keep an enum field with this type inside an otherwise generic container object.
If there is a difference in the structure of the data payloads these events carry, then, again, using Avro you could solve it with union types.
And finally, if the data differences are so large that these events are basically different data structures with nothing significant in common, go with different Kafka topics.
Despite the ability to use different partitions within the same topic to send different data types, it is really only going to cause maintenance headaches in the future and limit scaling, as was rightfully pointed out in other responses here. So for this case, if there is an option to go with different topics, better do it that way.
If there is no concept of inheritance, for example the data is not like
Animal -> Cat
Animal -> Dog
Then the other way is to use a wrapper.
public class DataWrapper
{
private Object data;
private EventType type;
// getter and setters omitted for the sake of brevity
}
Put all your events in the wrapper object and distinguish each event by its EventType, which can be an enum, for example.
Then you can serialize it the normal way (as you posted in the question), and while deserializing you can check the EventType and delegate to the corresponding event processor.
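A minimal sketch of that consumer-side dispatch (the EventType values and handler methods here are assumptions for illustration):
// Hypothetical dispatch after a DataWrapper has been deserialized from the topic.
void dispatch(DataWrapper wrapper) {
    switch (wrapper.getType()) {
        case CLICK:
            handleClick(wrapper.getData());   // assumed processor for Click events
            break;
        case SUBMIT:
            handleSubmit(wrapper.getData());  // assumed processor for Submit events
            break;
        default:
            throw new IllegalArgumentException("Unknown event type: " + wrapper.getType());
    }
}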
Moreover, to ensure that your DataWrapper doesn't wrap all kinds of data, i.e. that it is used only for a specific family of types, you can use a marker interface and have all of the classes whose objects you will push to the topic implement this interface.
For example,
interface MyCategory {
}
and then your custom classes can, for example, look like this:
class MyEvent implements MyCategory {
}
and in the DataWrapper you can have:
public class DataWrapper<T extends MyCategory> {
private T data;
private EventType type;
// getters and setters omitted for the sake of brevity
}
The best approach is to create a custom partitioner.
Produce each message to a different partition based on its partition key.
This is the default implementation; you need to implement your own partitioning logic.
public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
if (keyBytes == null) {
return stickyPartitionCache.partition(topic, cluster);
}
List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
int numPartitions = partitions.size();
// hash the keyBytes to choose a partition
return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
}
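A custom partitioner that routes records by their type could look like the sketch below; the class name, partition numbers and the type check are illustrative assumptions, not part of the original answer:
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import poc.kafka.domain.Cat;

public class AnimalPartitioner implements Partitioner {

    @Override
    public void configure(Map<String, ?> configs) {
        // nothing to configure
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // Route Cats to partition 0 and everything else to partition 1.
        // Assumes the topic has at least two partitions.
        return (value instanceof Cat) ? 0 : 1;
    }

    @Override
    public void close() {
        // nothing to clean up
    }
}
It would be registered on the producer with props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, AnimalPartitioner.class).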
Check this tutorial for further examples.
This is a paragraph from Kafka: The Definitive Guide on when to choose a custom partitioner.
Implementing a custom partitioning strategy
So far, we have discussed the traits of the default partitioner, which is the one most commonly used. However, Kafka does not limit you to just hash partitions, and sometimes there are good reasons to partition data differently. For example, suppose that you are a B2B vendor and your biggest customer is a company that manufactures handheld devices called Bananas. Suppose that you do so much business with customer "Banana" that over 10% of your daily transactions are with this customer. If you use default hash partitioning, the Banana records will get allocated to the same partition as other accounts, resulting in one partition being about twice as large as the rest. This can cause servers to run out of space, processing to slow down, etc. What we really want is to give Banana its own partition and then use hash partitioning to map the rest of the accounts to partitions.
I've inherited a web project that a contractor started. My coworkers and I are unfamiliar with the technology used and have a number of questions. From what we can tell, this appears to be some sort of RESTful Java server code, but my understanding is that there are lots of different types of Java RESTful services. Which one is this? Specific questions:
1) Where can we read more (particularly introductory information) about this specific service?
2) The code creates and returns a JSON through some kind of "magic"... I merely return a model class (code below) that has getter and setter methods for its fields, and it's automagically converted into a JSON. I'd like to learn more about how this is done automagically.
3) We already have some code that creates a JSON. We need to return this using this framework. If I already have a JSON, how do I return that? I tried something like this:
String testJSON = "{\"menu\": {\"id\": \"file\", \"value\": \"Hello there\"}}";
return testJSON;
instead of returning a model object with getters/setters, but this returns a literal text string, not a JSON. Is there a way to return an actual JSON that's already a JSON string, and have it be sent as a JSON?
You don't have to be able to answer all of the questions above. Any/all pointers in a helpful direction appreciated!
CODE
First, the view controller that returns the JSON:
package com.aimcloud.server;
import com.aimcloud.util.MySqlConnection;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;
import javax.ws.rs.FormParam;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import com.aimcloud.models.SubscriptionTierModel;
#Path("subscription_tier")
public class SubscriptionTierController
{
// this method will return a list of subscription_tier table entries that are currently active
@GET
@Produces({ MediaType.APPLICATION_JSON })
public ArrayList<SubscriptionTierModel> getSubscriptionTiers(@QueryParam("includeActiveOnly") Boolean includeActiveOnly)
{
MySqlConnection mysql = MySqlConnection.getConnection();
ArrayList<SubscriptionTierModel> subscriptionTierArray = new ArrayList<SubscriptionTierModel>();
String queryString;
if (includeActiveOnly)
queryString = "SELECT * FROM subscription_tier WHERE active=1";
else
queryString = "SELECT * FROM subscription_tier";
List<Map<String, Object>> resultList = mysql.query(queryString, null);
for (Map<String, Object> subscriptionRow : resultList)
subscriptionTierArray.add( new SubscriptionTierModel(subscriptionRow) );
// String testJSON = "{\"menu\": {\"id\": \"file\", \"value\": \"Hello there\"}}";
// return testJSON;
return subscriptionTierArray;
}
}
Next, the model the code above returns:
package com.aimcloud.models;
// NOTE this does NOT import Globals
import java.sql.Types;
import java.util.Arrays;
import java.util.Calendar;
import java.util.Date;
import java.util.List;
import java.util.Map;
import org.json.JSONObject;
import com.aimcloud.util.LoggingUtils;
public class SubscriptionTierModel extends ModelPrototype
{
private String name;
private Integer num_studies;
private Integer cost_viewing;
private Integer cost_processing;
private Integer active;
protected void setupFields()
{
this.fields.add("name");
this.fields.add("num_studies");
this.fields.add("cost_viewing");
this.fields.add("cost_processing");
this.fields.add("active");
}
public SubscriptionTierModel()
{
super("subscription");
this.setupFields();
}
public SubscriptionTierModel(Map<String, Object> map)
{
super("subscription");
this.setupFields();
this.initFromMap(map);
}
public void setName(String name) {
this.name = name;
}
public String getName() {
return this.name;
}
public void setNum_Studies(Integer num_studies) {
this.num_studies = num_studies;
}
public Integer getNum_studies() {
return this.num_studies;
}
public void setCost_viewing(Integer cost_viewing) {
this.cost_viewing = cost_viewing;
}
public Integer getCost_viewing() {
return this.cost_viewing;
}
public void setCost_processing(Integer cost_processing) {
this.cost_processing = cost_processing;
}
public Integer getCost_processing() {
return this.cost_processing;
}
public void setActive(Integer active) {
this.active = active;
}
public Integer getActive() {
return this.active;
}
}
public abstract class ModelPrototype {
protected MySqlConnection mysql;
protected ArrayList<String> fields;
protected String table;
protected Integer id = null;
public Integer getId() {
return this.id;
}
public void setId(Integer id) {
this.id = id;
}
abstract protected void setupFields();
public ModelPrototype() {
mysql = MySqlConnection.getConnection();
this.fields = new ArrayList<String>();
this.fields.add("id");
}
public void initFromDbResult(List<Map<String, Object>> result) {
if (result.size() >= 1)
{
Map<String, Object> userRow = result.get(0);
this.initFromMap(userRow);
if (result.size() > 1)
{
Thread.dumpStack();
}
}
else
{
throw new WebApplicationException(ServerUtils.generateResponse(Response.Status.NOT_FOUND, "resource not found"));
}
}
protected void initFromMap(Map<String, Object> map) {
for (Map.Entry<String, Object> entry : map.entrySet()) {
Object value = entry.getValue();
// LoggingUtils.log(entry.getKey() + " " + entry.getValue().toString());
if (value != null && this.fields.contains(entry.getKey())) {
this.setField(entry.getKey(), value);
}
}
}
....
1) Where can we read more (particularly introductory information)
about this specific service?
This is a RESTful service that uses basic JAX-RS annotations to build the service. I suggest looking at a tutorial like "REST using Jersey" or "REST using CXF".
2) The code creates and returns a JSON through some kind of "magic"...
The RESTful framework used usually takes care of this. The @Produces({ MediaType.APPLICATION_JSON }) annotation tells the framework to do this conversion. This will be defined somewhere in the configuration. Check the Spring config files if you are using Spring to define the beans. Usually a mapper or a provider will be defined that converts the object to JSON.
3) We already have some code that creates a JSON. We need to return this using this framework. If I already have a JSON, how do I return that? I tried something like this:
If you already have JSON, just return that JSON string from the method. Remember to still have the @Produces({ MediaType.APPLICATION_JSON }) annotation on the method.
but this returns a literal text string, not a JSON
JSON is a string. That is what you will see in the response, unless you deserialize it back into an object.
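If you want to be explicit that a prebuilt JSON string should be sent as application/json, one common JAX-RS pattern is to return a Response; this is a sketch, not code from the project in question:
import javax.ws.rs.GET;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@GET
@Produces(MediaType.APPLICATION_JSON)
public Response getMenu() {
    String testJSON = "{\"menu\": {\"id\": \"file\", \"value\": \"Hello there\"}}";
    // The string becomes the response body, sent with Content-Type: application/json.
    return Response.ok(testJSON, MediaType.APPLICATION_JSON).build();
}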
I suggest you read up on JAX-RS, the Java specification for RESTful web services. All of the javax.ws.rs.* classes/annotations come from JAX-RS.
As JAX-RS is just a specification, there needs to be something that implements the spec. There is probably a third-party JAX-RS component that is used to run this service. Jersey is one popular implementation; Apache CXF is another.
Now back to JAX-RS. When you read up on this, you will see that the annotations on your class determine the REST characteristics of your service. For example,
#Path("subscription_tier")
defines your class as the resource with URI BASE_PATH/subscription_tier, where BASE_PATH is probably defined in a configuration file for your web service framework.
As for how the objects are "automagically" converted into a JSON response: that is the role of the web service framework as well. It probably uses some kind of standard object-to-JSON mapping to accomplish this. (I have worked with CXF and XML resources. In that case JAXB was the mapping mechanism). This is a good thing, as the web service developer does not have to worry about this mapping, and can focus on coding just the implementation of service itself.
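As an aside, that BASE_PATH is often declared either in web.xml or with a JAX-RS Application subclass; a minimal sketch (the class name and path here are assumptions, not from the project):
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// With this in place, SubscriptionTierController would be reachable at
// /api/subscription_tier relative to the web application's context root.
@ApplicationPath("/api")
public class RestApplication extends Application {
}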
I'm doing some performance testing, and I want to be able to call a resource method without going through the network. I already have a framework for generating URLs, and I'd like to be able to reuse it.
For example, given the URL: www.example.com:8080/resource/method, I want to get a reference to the resource method that it calls, so that I can run it without making a network-level HTTP request. I.e., in the example below, I want to use the URL "www.frimastudio.com:8080/time" to get a reference to the method getServerTime() that I can then call directly.
Does Jersey (or something else?) provide a way to do this, or do I have to import the specific Resource class I want to call, instantiate it, etc.? Thanks in advance!
Yes, Jersey is a RESTful API framework that allows route configuration (via annotations).
Example :
package com.frimastudio.webservice.controller.route;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.joda.time.DateTime;
import com.frimastudio.webservice.controller.representation.Time;
#Path("/time")
#Produces(MediaType.APPLICATION_JSON)
public class TimeResource
{
public TimeResource()
{
}
@GET
public Time getServerDate()
{
return new Time(new DateTime());
}
}
with Time being a Jackson representation :
package com.frimastudio.webservice.controller.representation;
import org.hibernate.validator.constraints.NotEmpty;
import org.joda.time.DateTime;
import com.fasterxml.jackson.annotation.JsonProperty;
public class Time
{
@NotEmpty
@JsonProperty
private String date;
public Time()
{
// Jackson deserialization
}
public Time(String date)
{
super();
this.date = date;
}
public Time(DateTime date)
{
super();
this.date = date.toString();
}
}
This doesn't seem to be possible, based on looking at the Jersey code. The lookup is performed by HttpMethodRule.Matcher, which is a private class used only to implement HttpMethodRule.accept.
It seems to me that everything in accept up to if (s == MatchStatus.MATCH) { could be pulled into its own method and exposed to the user.
I've been through a few pieces of documentation, but I am not able to communicate with the datastore yet... Can anyone give me a sample project/code of Objectify used in a GWT web app (I use Eclipse)? Just a simple 'put' and 'get' action using RPC should do, or at least tell me how it's done.
The easiest way to understand how to make Objectify work is to repeat all the steps described in this article from David Chandler's blog. The whole blog is pretty much a must-read if you are interested in GWT, GAE (Java), gwt-presenter, gin/guice, etc. There you will find a working example, but anyway, here I'll show a slightly more advanced example.
In the shared package, define your entity/model:
import javax.persistence.Embedded;
import javax.persistence.Id;
import com.google.gwt.user.client.rpc.IsSerializable;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Unindexed;
@Entity
public class MyEntry implements IsSerializable {
// Objectify auto-generates Long IDs just like JDO / JPA
@Id private Long id;
@Unindexed private String text = "";
@Embedded private Time start;
// empty constructor for serialization
public MyEntry () {
}
public MyEntry (Time start, String text) {
super();
this.text = text;
this.start = start;
}
/*constructors,getters,setters...*/
}
Time class (also shared package) contains just one field msecs:
@Entity
public class Time implements IsSerializable, Comparable<Time> {
protected int msecs = -1;
//rest of code like in MyEntry
}
Copy the ObjectifyDao class from the link above into your server.dao package, and then make a DAO class specifically for MyEntry -- MyEntryDao:
package com.myapp.server.dao;
import java.util.logging.Logger;
import com.googlecode.objectify.ObjectifyService;
import com.myapp.shared.MyEntry;
public class MyEntryDao extends ObjectifyDao<MyEntry>
{
private static final Logger LOG = Logger.getLogger(MyEntryDao.class.getName());
static
{
ObjectifyService.register(MyEntry.class);
}
public MyEntryDao()
{
super(MyEntry.class);
}
}
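The ObjectifyDao base class itself comes from the blog post linked above and is not reproduced here; a minimal sketch of the parts used below (the exact method names in the real class may differ, these are assumptions):
package com.myapp.server.dao;

import com.googlecode.objectify.Key;
import com.googlecode.objectify.Objectify;
import com.googlecode.objectify.ObjectifyService;

public class ObjectifyDao<T> {

    protected final Class<T> clazz;

    public ObjectifyDao(Class<T> clazz) {
        this.clazz = clazz;
    }

    // Obtains an Objectify session used for queries and writes.
    public Objectify ofy() {
        return ObjectifyService.begin();
    }

    // Stores an entity and returns its datastore Key.
    public Key<T> put(T entity) {
        return ofy().put(entity);
    }

    // Loads an entity by its Long id.
    public T get(Long id) {
        return ofy().get(clazz, id);
    }
}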
Finally, we can make requests to the database (server package):
public class FinallyDownloadingEntriesServlet extends HttpServlet {
protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws
ServletException, IOException {
resp.setCharacterEncoding("UTF-8");
resp.setContentType("text/plain");
//more code...
resp.setHeader("Content-Disposition", "attachment; filename=\""+"MyFileName"+".txt\";");
try {
MyEntryDao dao = new MyEntryDao();
/*query to get all MyEntries from datastore sorted by start Time*/
List<MyEntry> entries = dao.ofy().query(MyEntry.class).order("start.msecs").list();
PrintWriter out = resp.getWriter();
int i = 0;
for (MyEntry entry : entries) {
++i;
out.println(i);
out.println(entry.getStart() + entry.getText());
out.println();
}
} catch (Exception e) {
// handle/log exceptions
}
}
}
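And for the simple 'put' and 'get' the question asks about, server-side usage (for example inside a GWT RPC service implementation) could look like this sketch, assuming the DAO above and the generated getters on MyEntry:
MyEntryDao dao = new MyEntryDao();

// put: store a new entry; the Long id is auto-generated by the datastore
MyEntry entry = new MyEntry(new Time(), "hello datastore");
Key<MyEntry> key = dao.put(entry);

// get: load the same entry back by its id
MyEntry loaded = dao.get(entry.getId());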