I am trying to extract data from the Samples namespace that comes with the InterSystems Caché install. Specifically, I am trying to retrieve Sample.Company global data using XEP. In order to achieve this, I created a Sample.Company class like this:
package Sample;
public class Company {
public Long id;
public String mission;
public String name;
public Long revenue;
public String taxId;
public Company(Long id, String mission, String name, Long revenue,
String taxId) {
this.id = id;
this.mission = mission;
this.name = name;
this.revenue = revenue;
this.taxId = taxId;
}
public Company() {
}
}
The XEP-related code looks like this:
import java.util.ArrayList;
import java.util.List;
import org.springframework.stereotype.Service;
import Sample.Company;
import com.intersys.xep.Event;
import com.intersys.xep.EventPersister;
import com.intersys.xep.EventQuery;
import com.intersys.xep.EventQueryIterator;
import com.intersys.xep.PersisterFactory;
import com.intersys.xep.XEPException;
@Service
public class CompanyService {
public List<Company> fetch() {
EventPersister myPersister = PersisterFactory.createPersister();
myPersister.connect("SAMPLES", "user", "pwd");
try { // delete any existing Sample.Company events, then re-import the schema
Event.isEvent("Sample.Company");
myPersister.deleteExtent("Sample.Company");
String[] generatedClasses = myPersister.importSchema("Sample.Company");
for (int i = 0; i < generatedClasses.length; i++) {
System.out.println("Event class " + generatedClasses[i]
+ " successfully imported.");
}
} catch (XEPException e) {
System.out.println("import failed:\n" + e);
throw new RuntimeException(e);
}
EventQuery<Company> myQuery = null;
List<Company> list = new ArrayList<Company>();
try {
Event newEvent = myPersister.getEvent("Sample.Company");
String sql = "Select * from Sample.Company";
myQuery = newEvent.createQuery(sql);
newEvent.close();
myQuery.execute();
EventQueryIterator<Company> iterator = myQuery.getIterator();
while (iterator.hasNext()) {
Company c = iterator.next();
System.out.println(c);
list.add(c);
}
myQuery.close();
myPersister.close();
return list;
} catch (XEPException e) {
System.out.println("createQuery failed:\n" + e);
throw new RuntimeException(e);
}
}
}
When I try executing the fetch() method of the above class, I am seeing the following exception -
com.intersys.xep.XEPException: Cannot import - extent for Sample.Company not empty.
at com.intersys.xep.internal.Generator.generate(Generator.java:52)
at com.intersys.xep.EventPersister.importSchema(EventPersister.java:954)
at com.intersys.xep.EventPersister.importSchema(EventPersister.java:363)
I got the simple string example working. Does this mean we cannot read existing data using XEP? If we can, could someone please help me resolve the above issue? Thanks in advance.
You are trying to create a new class named Sample.Company in your instance:
String[] generatedClasses = myPersister.importSchema("Sample.Company");
But you still have data and an existing class there.
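If the goal is only to read the data that already exists in SAMPLES, you can skip deleteExtent and importSchema entirely and just query the extent. A minimal sketch of that read-only path, assuming the existing Sample.Company class can be bound by XEP without re-importing the schema (otherwise the schema has to be imported once into an empty extent):
EventPersister persister = PersisterFactory.createPersister();
persister.connect("SAMPLES", "user", "pwd");
try {
    // bind to the existing class and query it; no deleteExtent / importSchema
    Event event = persister.getEvent("Sample.Company");
    EventQuery<Company> query = event.createQuery("SELECT * FROM Sample.Company");
    query.execute();
    EventQueryIterator<Company> it = query.getIterator();
    List<Company> companies = new ArrayList<Company>();
    while (it.hasNext()) {
        companies.add(it.next());
    }
    query.close();
    event.close();
} finally {
    persister.close();
}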
So far I am able to parse a docx file using docx4j and find the bookmarks and all the tables in it using the code below:
WordprocessingMLPackage wordMLPackage = WordprocessingMLPackage.load(new java.io.File(docxFile));
List<Object> paragraphs = getAllElementFromObject(template.getMainDocumentPart(), P.class);
for (Object p : paragraphs) {
RangeFinder rt = new RangeFinder("CTBookmark", "CTMarkupRange");
new TraversalUtil(p, rt);
for (CTBookmark content : rt.getStarts()) {
if (content.getName().equals("if_supdef")) {
List<Object> tbl = getAllElementFromObject(content, Tbl.class);
System.out.println("tbl==" + tbl.size());
}
}
}
TableFinder finder = new TableFinder();
new TraversalUtil(documentPart.getContent(), finder);
System.out.println("Found " + finder.tblList.size() + " tables");
I've got these lines of code from some blogs and answers from other questions.
Now I would like to find the table only inside a bookmark (here my bookmark name is if_supdef) rather than searching the entire document. Once I find the table, I would add rows based on the number of records I receive from a SQL table and the MERGEFIELDs available.
The bookmark and its table look something like the picture below:
Once processed through docx4j, it should look like:
In document.xml I see that the parent tag of w:tbl is w:body, not the bookmark.
Is it possible to read the table inside a bookmark? If so, how?
If not, what is an alternative way to uniquely identify a table and add content to it?
Try something along the lines of the below.
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.docx4j.TraversalUtil;
import org.docx4j.TraversalUtil.CallbackImpl;
import org.docx4j.openpackaging.packages.WordprocessingMLPackage;
import org.docx4j.openpackaging.parts.WordprocessingML.MainDocumentPart;
import org.docx4j.wml.CTBookmark;
import org.docx4j.wml.CTMarkupRange;
import org.docx4j.wml.Tbl;
import jakarta.xml.bind.JAXBContext;
public class TableInBookmarkFinder {
public static JAXBContext context = org.docx4j.jaxb.Context.jc;
public static void main(String[] args) throws Exception {
String inputfilepath = System.getProperty("user.dir")
+ "/tbl_bookmarks.docx";
WordprocessingMLPackage wordMLPackage = WordprocessingMLPackage
.load(new java.io.File(inputfilepath));
MainDocumentPart documentPart = wordMLPackage.getMainDocumentPart();
// find
TableInBookmarkFinderCallback finder = new TableInBookmarkFinderCallback();
new TraversalUtil(documentPart.getContent(), finder);
List<TableInfo> tableInfos = finder.getTableInfos();
// result?
for (TableInfo ti : tableInfos) {
System.out.println("table contained in bookmarks:");
for (String s: ti.getBookmarkNames()) {
System.out.println("bookmark name: " + s);
}
}
}
public static class TableInfo {
TableInfo(Tbl tbl, List<String> bookmarkNames) {
this.tbl = tbl;
this.bookmarkNames = bookmarkNames;
}
private Tbl tbl;
public Tbl getTbl() {
return tbl;
}
private List<String> bookmarkNames;
public List<String> getBookmarkNames() {
return bookmarkNames;
}
}
public static class TableInBookmarkFinderCallback extends CallbackImpl {
public TableInBookmarkFinderCallback() {
}
/**
* Keep this set to true unless you don't
* want to traverse a table (eg a nested table).
* NB: If traversing from body level, you'll need to set it to true!
*/
private boolean traverseTables=true;
/**
* Track bookmarks encountered
*/
private Map<BigInteger, String> bookmarkInfos = new HashMap<BigInteger, String>();
/**
* What bookmarks are we currently in?
*/
private Set<BigInteger> currentBookmarks = new HashSet<BigInteger>();
/**
* What tables did we encounter?
*/
private List<TableInfo> tableInfos = new ArrayList<TableInfo>();
public List<TableInfo> getTableInfos() {
return tableInfos;
}
@Override
public List<Object> apply(Object o) {
System.out.println(o.getClass().getName());
if (o instanceof CTBookmark) {
CTBookmark bmStart = (CTBookmark)o;
bookmarkInfos.put(bmStart.getId(), bmStart.getName());
if (currentBookmarks.add(bmStart.getId()) ) {
// ok
System.out.println("added " + bmStart.getId());
} else {
System.out.println("ERROR: duplicate bookmarks with id " + bmStart.getId());
}
} else /* need this else because CTBookmark extends CTMarkupRange */
if (o instanceof CTMarkupRange) {
CTMarkupRange bmEnd = (CTMarkupRange)o;
if (currentBookmarks.remove(bmEnd.getId()) ) {
// ok
System.out.println("removed " + bmEnd.getId());
} else {
System.out.println("ERROR: no start element for bookmark with id " + bmEnd.getId());
}
}
if (o instanceof Tbl ) {
System.out.println("tbl");
List<String> bookmarkNames = new ArrayList<String>();
for (BigInteger bmId : currentBookmarks) {
bookmarkNames.add(bookmarkInfos.get(bmId));
}
tableInfos.add( new TableInfo( (Tbl)o, bookmarkNames));
}
return null;
}
@Override
public boolean shouldTraverse(Object o) {
if (traverseTables) {
return true;
} else {
// Yes, unless it's a nested Tbl
return !(o instanceof Tbl);
}
}
}
}
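The question also asks about adding rows to the table once it has been located. A rough sketch of one way to do that with docx4j, assuming the last row of the found Tbl is a template row to clone (the method and variable names here are illustrative, not part of the original answer):
import org.docx4j.XmlUtils;
import org.docx4j.wml.Tbl;
import org.docx4j.wml.Tr;

public static void appendRows(Tbl tbl, int numberOfNewRows) {
    // assume the last element in the table content is a Tr usable as a template
    Object last = tbl.getContent().get(tbl.getContent().size() - 1);
    Tr templateRow = (Tr) XmlUtils.unwrap(last);
    for (int i = 0; i < numberOfNewRows; i++) {
        Tr newRow = (Tr) XmlUtils.deepCopy(templateRow);
        // ... populate the cells of newRow for this record here ...
        tbl.getContent().add(newRow);
    }
}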
I am creating an in-memory PCollection and writing it into PostgreSQL. Now, when I insert data into the table, a few records may throw an exception and will not be inserted. How do I extract such failed insert records when I run the pipeline?
Below is the code I have written for the pipeline:
PipelineOptions options = PipelineOptionsFactory.create();
options.setRunner(FlinkRunner.class);
Pipeline p = Pipeline.create(options);
// Preparing dummy data
Collection<Stock> stockList = Arrays.asList(new Stock("AAP", 2000,"Apple Inc"),
new Stock("MSF", 3000, "Microsoft Corporation"),
new Stock("NVDA", 4000, "NVIDIA Corporation"),
new Stock("INT", 3200, "Intel Corporation"));
// Reading dummy data and save it into PCollection<Stock>
PCollection<Stock> data = p.apply(Create.of(stockList)
.withCoder(SerializableCoder.of(Stock.class)));
//insert
@SuppressWarnings("unused")
PDone insertData = data.apply(JdbcIO.<Stock>write()
.withDataSourceConfiguration(JdbcIO.DataSourceConfiguration
.create("org.postgresql.Driver","jdbc:postgresql://localhost:5432/postgres")
.withUsername("postgres").withPassword("sachin"))
.withStatement("insert into stocks values(?, ?, ?)")
.withPreparedStatementSetter(new JdbcIO.PreparedStatementSetter<Stock>() {
private static final long serialVersionUID = 1L;
public void setParameters(Stock element, PreparedStatement query) throws SQLException {
query.setString(1, element.getSymbol());
query.setLong(2, element.getPrice());
query.setString(3, element.getCompany());
}
}));
p.run().waitUntilFinish();
After going through the Apache Beam programming guide I did not find any clue, so I copied JdbcIO and modified the batch execution, separating successfully inserted records from failed inserts by using TupleTags. Now it is working.
Below is the code for the modified JdbcIO:
private static class WriteFn<T> extends DoFn<T, T> {
private static final int DEFAULT_BATCH_SIZE = 1;
private final Write<T> spec;
private DataSource dataSource;
private Connection connection;
private PreparedStatement preparedStatement;
private TupleTag<T> validTupleTag;
private TupleTag<T> inValidTupleTag;
private int batchCount;
public WriteFn(Write<T> spec) {
this.spec = spec;
}
@Setup
public void setup() throws Exception {
dataSource = spec.getDataSourceConfiguration().buildDatasource();
connection = dataSource.getConnection();
connection.setAutoCommit(false);
preparedStatement = connection.prepareStatement(spec.getStatement());
validTupleTag = spec.getValidTupleTag();
inValidTupleTag = spec.getInvalidTupleTag();
}
@StartBundle
public void startBundle() {
batchCount = 0;
}
@ProcessElement
public void processElement(@Element T record, MultiOutputReceiver out)
throws Exception {
preparedStatement.clearParameters();
spec.getPreparedStatementSetter().setParameters(record,
preparedStatement);
preparedStatement.addBatch();
batchCount++;
if (batchCount >= DEFAULT_BATCH_SIZE) {
if (batchCount > 0) {
try {
preparedStatement.executeBatch();
connection.commit();
out.get(validTupleTag).output(record);
} catch (SQLException e1) {
//TODO add logger
out.get(inValidTupleTag).output(record);
}
batchCount = 0;
}
}
}
}
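For these tagged outputs to reach the client below as a PCollectionTuple, the copied write transform also has to expand to a tuple instead of PDone. A minimal sketch of what that expand method could look like inside the modified CustomJdbcIOWrite (the accessor names mirror the fields used above and are assumptions about the copied class):
@Override
public PCollectionTuple expand(PCollection<T> input) {
    return input.apply(
        ParDo.of(new WriteFn<T>(this))
            .withOutputTags(getValidTupleTag(),                      // main output: inserted records
                            TupleTagList.of(getInvalidTupleTag()))); // additional output: failed records
}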
and client code:
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult.State;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
/**
* @author sachin
* @date 18-Nov-2021
*/
public class BeamTest {
static List<Stock> stocks = new ArrayList<>();
public static void main(String[] args) {
System.setProperty("java.specification.version", "1.8");
process();
// read();
}
public static void process() {
final TupleTag<Stock> VALID = new TupleTag<Stock>() {
};
final TupleTag<Stock> INVALID = new TupleTag<Stock>() {
};
PipelineOptions options = PipelineOptionsFactory.create();
options.setRunner(FlinkRunner.class);
Pipeline p = Pipeline.create(options);
// Preparing dummy data
Collection<Stock> stockList = Arrays.asList(
new Stock("AAP", 2000, "Apple Inc"),
new Stock("MSF", 3000, "Microsoft Corporation"),
new Stock("NVDA", 4000, "NVIDIA Corporation"),
new Stock("INT", 3200, "Intel Corporation"));
// Reading dummy data and save it into PCollection<Stock>
PCollection<Stock> data =
p.apply(Create.of(stockList).
withCoder(SerializableCoder.of(Stock.class)));
// insert
PCollectionTuple pCollectionTupleResult = data.apply("write",
CustomJdbcIOWrite.<Stock>write()
.withDataSourceConfiguration(CustomJdbcIOWrite.DataSourceConfiguration
.create("org.postgresql.Driver",
"jdbc:postgresql://localhost:5432/postgres")
.withUsername("postgres").withPassword("sachin"))
.withStatement("insert into stocks values(?, ?,
?)").withValidTag(VALID).withInValidTag(INVALID)
.withPreparedStatementSetter(new
CustomJdbcIOWrite.PreparedStatementSetter<Stock>() {
private static final long serialVersionUID = 1L;
public void setParameters(Stock element,
PreparedStatement query) throws SQLException {
query.setString(1, element.getSymbol());
query.setLong(2, element.getPrice());
query.setString(3, element.getCompany());
}
}));
// get failed PCollection using INVALID tupletag
PCollection<Stock> failedPcollection =
pCollectionTupleResult.get(INVALID)
.setCoder(SerializableCoder.of(Stock.class));
failedPcollection.apply(ParDo.of(new DoFn<Stock, Stock>() {
private static final long serialVersionUID = 1L;
@ProcessElement
public void process(ProcessContext pc) {
System.out.println("Failed pCollection element:" +
pc.element().getCompany());
}
}));
//get failed PCollection using INVALID tupletag
PCollection<Stock> insertedPcollection =
pCollectionTupleResult.get(VALID)
.setCoder(SerializableCoder.of(Stock.class));
insertedPcollection.apply(ParDo.of(new DoFn<Stock, Stock>() {
private static final long serialVersionUID = 1L;
@ProcessElement
public void process(ProcessContext pc) {
System.out.println("Inserted pCollection element:" +
pc.element().getCompany());
}
}));
// run pipeline
State state = p.run().waitUntilFinish();
System.out.println("Data inserted successfully with state : " +
state);
}
}
Below is the output. The record new Stock("NVDA", 4000, "NVIDIA Corporation") is intentionally not inserted, because my DB column accepts only 3 characters ("NVD"), not 4 ("NVDA"):
Inserted pCollection element:Microsoft Corporation
Failed pCollection element:NVIDIA Corporation
Inserted pCollection element:Intel Corporation
Inserted pCollection element:Apple Inc
Data inserted successfully with state : DONE
Full Details and github link
Scenario: a Spring WebFlux application triggering CommandLineRunner.run in order to load data into MongoDB for testing purposes.
Goal: when starting the microservice locally, it should read a JSON file and load its documents into MongoDB.
My understanding so far: "bufferedReader.lines().filter(l -> !l.trim().isEmpty())" reads each JSON node and returns it as a stream, so I can map it to "l" and access the getters. I guess I don't have to create a list and then stream it, since I have already loaded it as a stream via "new InputStreamReader(getClass().getClassLoader().getResourceAsStream())", and I assume I can use lines() since each node will result in a string line. Am I on the right track, or am I mixing up some ideas?
This is a sample JSON file:
{
"Extrato": {
"description": "credit",
"value": "R$1.000,00",
"status": 11
},
"Extrato": {
"description": "debit",
"value": "R$2.000,00",
"status": 99
}
}
Model
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
@Document
public class Extrato {
@Id
private String id;
private String description;
private String value;
private Integer status;
public Extrato(String id, String description, String value, Integer status) {
super();
this.id = id;
this.description = description;
this.value = value;
this.status = status;
}
... getters and setters accordingly
Repository
import org.springframework.data.mongodb.repository.Query;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import com.noblockingcase.demo.model.Extrato;
import reactor.core.publisher.Flux;
import org.springframework.data.domain.Pageable;
public interface ExtratoRepository extends ReactiveCrudRepository<Extrato, String> {
@Query("{ id: { $exists: true }}")
Flux<Extrato> retrieveAllExtratosPaged(final Pageable page);
}
Command for loading from the above JSON file:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.function.LongSupplier;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import com.noblockingcase.demo.model.Extrato;
import com.noblockingcase.demo.repository.ExtratoRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Flux;
@Component
public class TestDataLoader implements CommandLineRunner {
private static final Logger log = LoggerFactory.getLogger(TestDataLoader.class);
private ExtratoRepository extratoRepository;
TestDataLoader(final ExtratoRepository extratoRepository) {
this.extratoRepository = extratoRepository;
}
@Override
public void run(final String... args) throws Exception {
if (extratoRepository.count().block() == 0L) {
final LongSupplier longSupplier = new LongSupplier() {
Long l = 0L;
@Override
public long getAsLong() {
return l++;
}
};
BufferedReader bufferedReader = new BufferedReader(
new InputStreamReader(getClass().getClassLoader().getResourceAsStream("carga-teste.txt")));
//*** THE ISSUE IS NEXT LINE
Flux.fromStream(bufferedReader.lines().filter(l -> !l.trim().isEmpty())
.map(l -> extratoRepository.save(new Extrato(String.valueOf(longSupplier.getAsLong()),
l.getDescription(), l.getValue(), l.getStatus()))))
.subscribe(m -> log.info("Carga Teste: {}", m.block()));
}
}
}
Here is the MongoDB config, although I don't think it is relevant:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.mongodb.MongoClientOptions;
@Configuration
public class MongoDbSettings {
@Bean
public MongoClientOptions mongoOptions() {
return MongoClientOptions.builder().socketTimeout(2000).build();
}
}
If I take my original code and adjust it to read a plain text file, I can read the text file successfully instead of the JSON. Obviously that doesn't fit my need, since I want to read a JSON file, but it may clarify a bit more where I am blocked.
carga-teste.txt (available at https://github.com/jimisdrpc/webflux-worth-scenarious/blob/master/demo/src/main/resources/carga-teste.txt):
crédito de R$1.000,00
débito de R$100,00
Snippet working with a simple text file:
BufferedReader bufferedReader = new BufferedReader(
new InputStreamReader(getClass().getClassLoader().getResourceAsStream("carga-teste.txt")));
Flux.fromStream(bufferedReader.lines().filter(l -> !l.trim().isEmpty())
.map(l -> extratoRepository
.save(new Extrato(String.valueOf(longSupplier.getAsLong()), "Qualquer descrição", l))))
.subscribe(m -> log.info("Carga Teste: {}", m.block()));
Whole project successfully reading from the text file: https://github.com/jimisdrpc/webflux-worth-scenarious/tree/master/demo
Docker Compose for booting MongoDB: https://github.com/jimisdrpc/webflux-worth-scenarious/blob/master/docker-compose.yml
To summarize, my issue is: I haven't figured out how to read a JSON file and insert its data into MongoDB during CommandLineRunner.run().
I found an example with Flux::using and Flux::fromStream to be helpful for this purpose. This will read your file into a Flux, which you can then subscribe to and process with .flatMap or similar. From the Javadoc:
using(Callable<? extends D> resourceSupplier, Function<? super D, ? extends Publisher<? extends T>> sourceSupplier, Consumer<? super D> resourceCleanup)
Uses a resource, generated by a supplier for each individual Subscriber, while streaming the values from a Publisher derived from the same resource and makes sure the resource is released if the sequence terminates or the Subscriber cancels.
and the code that I put together:
private static Flux<Account> fluxAccounts() {
return Flux.using(() ->
new BufferedReader(new InputStreamReader(new ClassPathResource("data/ExportCSV.csv").getInputStream()))
.lines()
.map(s->{
String[] sa = s.split(" ");
return Account.builder()
.firstname(sa[0])
.lastname(sa[1])
.build();
}),
Flux::fromStream,
BaseStream::close
);
}
Please note that your JSON is invalid. Text data is not the same as JSON; JSON needs special handling, so it is always better to use a library.
carga-teste.json
[
{"description": "credit", "value": "R$1.000,00", "status": 11},
{"description": "debit","value": "R$2.000,00", "status": 99}
]
Credit goes to the article here: https://www.nurkiewicz.com/2017/09/streaming-large-json-file-with-jackson.html.
I've adapted it to use Flux.
@Override
public void run(final String... args) throws Exception {
BufferedReader bufferedReader = new BufferedReader(
new InputStreamReader(getClass().getClassLoader().getResourceAsStream("carga-teste.json")));
ObjectMapper mapper = new ObjectMapper();
Flux<Extrato> flux = Flux.generate(
() -> parser(bufferedReader, mapper),
this::pullOrComplete,
jsonParser -> {
try {
jsonParser.close();
} catch (IOException e) {}
});
flux.map(l -> extratoRepository.save(l)).subscribe(m -> log.info("Carga Teste: {}", m.block()));
}
}
private JsonParser parser(Reader reader, ObjectMapper mapper) {
JsonParser parser = null;
try {
parser = mapper.getFactory().createParser(reader);
parser.nextToken();
} catch (IOException e) {}
return parser;
}
private JsonParser pullOrComplete(JsonParser parser, SynchronousSink<Extrato> emitter) {
try {
if (parser.nextToken() != JsonToken.END_ARRAY) {
Extrato extrato = parser.readValueAs(Extrato.class);
emitter.next(extrato);
} else {
emitter.complete();
}
} catch (IOException e) {
emitter.error(e);
}
return parser;
}
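One optional refinement (a sketch, not part of the original answer): flatMap keeps the repository save non-blocking, so there is no need to call block() inside subscribe:
// subscribe to the saved entities directly instead of blocking on each Mono
flux.flatMap(extratoRepository::save)
    .subscribe(saved -> log.info("Carga Teste: {}", saved));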
I get an error on this line:
mDataBase = SQLiteDatabase.openDatabase(dbPath, "123", null, SQLiteDatabase.NO_LOCALIZED_COLLATORS);
This happens when opening the database. What is wrong? How do I open a database with a password? Can anyone help me?
I set the password in DB Browser for SQLite > File > Set Encryption
and open the database with this password in the Android part.
When I open it, I get this error:
error : net.sqlcipher.database.SQLiteException: file is not a database: , while compiling: select count(*) from sqlite_master
Can anyone help me solve it? Thanks in advance.
import android.content.Context;
import android.database.SQLException;
//import android.database.sqlite.SQLiteOpenHelper;
import android.util.Log;
import net.sqlcipher.database.SQLiteDatabase;
import net.sqlcipher.database.SQLiteOpenHelper;
import net.sqlcipher.database.SQLiteDatabase.CursorFactory;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import android.app.Activity;
import android.os.Bundle;
public class DatabaseHelper extends SQLiteOpenHelper {
private static String TAG = DatabaseHelper.class.getName();
private static String DB_PATH = "";
private static String DB_NAME = "ec.db";// Database name
private SQLiteDatabase mDataBase;
private final Context mContext;
File databaseFile=null;
public DatabaseHelper(Context context) {
super(context, DB_NAME, null, 1);
DB_PATH = context.getApplicationInfo().dataDir + "/databases/";
this.mContext = context;
SQLiteDatabase.loadLibs(context);
File databaseFile = context.getDatabasePath(DB_NAME);
databaseFile.mkdirs();
}
public void createDataBase() throws IOException {
// If database not exists copy it from the assets
boolean mDataBaseExist = checkDataBase();
if (!mDataBaseExist) {
this.getWritableDatabase("123");
this.close();
try {
// Copy the database from assests
copyDataBase();
Log.e(TAG, "createDatabase database created");
} catch (IOException mIOException) {
throw new Error(mIOException.toString() + " : " + DB_PATH
+ DB_NAME);// "ErrorCopyingDataBase"
}
}
}
private boolean checkDataBase() {
File dbFile = new File(DB_PATH + DB_NAME);
return dbFile.exists();
}
// Copy the database from assets
private void copyDataBase() throws IOException {
InputStream mInput = mContext.getAssets().open(DB_NAME);
String outFileName = DB_PATH + DB_NAME;
OutputStream mOutput = new FileOutputStream(outFileName);
byte[] mBuffer = new byte[4096];
int mLength;
while ((mLength = mInput.read(mBuffer)) > 0) {
mOutput.write(mBuffer, 0, mLength);
}
mOutput.flush();
mOutput.close();
mInput.close();
}
// Open the database, so we can query it
public boolean openDataBase() throws SQLException {
String mPath = DB_PATH + DB_NAME;
//File dbFile = new File(DB_PATH + DB_NAME);
//File databaseFile = mContext.getDatabasePath(DB_NAME);
//databaseFile.mkdirs();
//databaseFile.delete();
SQLiteDatabase.loadLibs(mContext);
String dbPath = mContext.getDatabasePath("ec.db").getPath();
//databaseFile.delete();
SQLiteDatabase.loadLibs(mContext);
//mDataBase = SQLiteDatabase.openOrCreateDatabase(databaseFile, "123", null);
//mDataBase = SQLiteDatabase.openDatabase(mPath, "123",null, SQLiteDatabase.NO_LOCALIZED_COLLATORS);
mDataBase = SQLiteDatabase.openDatabase(dbPath, "123", null, SQLiteDatabase.NO_LOCALIZED_COLLATORS);
return mDataBase != null;
}
@Override
public synchronized void close() {
if (mDataBase != null)
mDataBase.close();
super.close();
}
@Override
public void onCreate(SQLiteDatabase db) {
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
}
}
I created a database with SQLCipher v3.5.7 and then changed the SQLCipher version to v4.1.3, and ran into this problem.
In build.gradle I changed
implementation "net.zetetic:android-database-sqlcipher:4.1.3@aar"
to
implementation 'net.zetetic:android-database-sqlcipher:3.5.7@aar'
and the problem was solved.
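If you would rather stay on 4.x, SQLCipher 4 can upgrade a 3.x database in place with PRAGMA cipher_migrate. A rough sketch using a SQLiteDatabaseHook, assuming your android-database-sqlcipher version exposes an openDatabase overload that accepts a hook:
// sketch only: migrate the 3.x file format to 4.x right after keying
SQLiteDatabaseHook migrateHook = new SQLiteDatabaseHook() {
    public void preKey(SQLiteDatabase database) { }
    public void postKey(SQLiteDatabase database) {
        database.rawExecSQL("PRAGMA cipher_migrate;");
    }
};
mDataBase = SQLiteDatabase.openDatabase(dbPath, "123", null,
        SQLiteDatabase.NO_LOCALIZED_COLLATORS, migrateHook);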
You are referencing the password string value of 123456Sa; however, your call within createDataBase uses the value 123 as the password to getWritableDatabase.
I was getting this exception only in the obfuscation case.
I added default constructors for both the ByteArraySerializer and ByteArrayDeserializer classes.
Note: don't add the above classes as inner classes. Declare these classes independently and keep them in your proguard-rules.
Can anyone please tell me how I can get the current attempt count in Quartz?
Example: if the Quartz scheduler is started with a repeat count of 5, I want to get the current repeat count.
Here is the example I am trying with:
public class SimpleTriggerExample implements Job
{
int count = 0;
JobDetail job = null;
JobDataMap data = null;
public static void main( String[] args ) throws Exception
{
new SimpleTriggerExample().schedule();
}
public void schedule() throws ParseException, SchedulerException{
job = JobBuilder.newJob(SimpleTriggerExample.class)
.withIdentity("dummyJobName", "group1").build();
Trigger trigger = TriggerBuilder
.newTrigger()
.withIdentity("dummyTriggerName", "group1")
.withSchedule(SimpleScheduleBuilder.simpleSchedule()
.withIntervalInSeconds(10).withRepeatCount(3))
.build();
System.out.println("before in main jobdatamap");
Scheduler scheduler = new StdSchedulerFactory().getScheduler();
scheduler.start();
scheduler.scheduleJob(job, trigger);
}
public void execute(JobExecutionContext context)
throws JobExecutionException {
//count
data = context.getJobDetail().getJobDataMap();
System.out.println("after jobdatamap");
int count1 = data.getInt("EXECUTION_COUNT");
System.out.println("count1-->before"+count1);
count1++;
System.out.println("count1-->after"+count1);
job.getJobDataMap().put("EXECUTION_COUNT", count1);
count = count1;
System.out.println("count"+count);
}
}
Use JobDataMap along with the @PersistJobDataAfterExecution annotation.
Make sure that when you modify data in the JobDataMap, you use the same key.
If you do this, you can persist your attempts as required.
Example Code Snippet:
package com.mss.quartz.demo;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.quartz.InterruptableJob;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDataMap;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.JobKey;
import org.quartz.PersistJobDataAfterExecution;
import org.quartz.Scheduler;
import org.quartz.SchedulerContext;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.TriggerBuilder;
import org.quartz.UnableToInterruptJobException;
import org.quartz.impl.StdSchedulerFactory;
@PersistJobDataAfterExecution
public class HelloJob implements InterruptableJob
{
SchedulerContext schedulerContext = null;
testQuartz test = new testQuartz();
boolean result;
private boolean _interrupted = false;
private JobKey _jobKey = null;
Thread t = null;
//public static int count = 0;
public void interrupt() throws UnableToInterruptJobException {
System.out.println("---" + this._jobKey + " -- INTERRUPTING --");
this._interrupted = true;
}
public void execute(JobExecutionContext context)
throws JobExecutionException {
Scheduler scd = context.getScheduler();
JobDataMap dataMap = context.getJobDetail().getJobDataMap();
String jobSays = dataMap.getString("test1");
int myFloatValue = dataMap.getIntValue("id");
System.out.println("In Job Class"+jobSays+ " "+myFloatValue+" Current time in
Job class "+new Date().toString());
JobKey jobKey = context.getJobDetail().getKey();
int attemps = dataMap.getInt("attempts");
attemps++;
dataMap.put("attempts", attemps);
System.out.println("After putting count in job data map:"+dataMap.get("attempts"));
}
}
Try to add the @PersistJobDataAfterExecution annotation to the SimpleTriggerExample class:
@PersistJobDataAfterExecution
public class SimpleTriggerExample implements Job
{ ...}
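Also note that dataMap.getInt("attempts") in the HelloJob example is read on the very first execution as well, so the keys used inside execute() have to be seeded before the job runs. A small sketch of doing that when building the JobDetail (the values here are illustrative; the key names must match what execute() reads):
JobDetail job = JobBuilder.newJob(HelloJob.class)
        .withIdentity("dummyJobName", "group1")
        .usingJobData("test1", "hello")   // illustrative value
        .usingJobData("id", 1)            // illustrative value
        .usingJobData("attempts", 0)      // counter persisted across executions
        .build();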