Bulk insert or update records in SQLite database in Android - android-sqlite

In my application, I have to insert or update a bulk of records in an Android SQLite database; the record count may be 3,000 or larger.
String query="SELECT * FROM tableName where "+id+" = "+bean.id()+" and "+other_id+" = "+bean.otherID();
Cursor mCount= db.rawQuery(query , null);
mCount.moveToFirst();
row=mCount.getCount();
mCount.close();
/**
 * Insert here
 */
if (row == 0) {
    ContentValues values = new ContentValues();
    values.put(key, "value");
    values.put(key, "value");
    db.insert("tableName", null, values);
}
/**
 * Update here
 */
else {
    ContentValues values = new ContentValues();
    values.put(key, "value");
    values.put(key, "value");
    db.update("tableName", values, "id = ? and other_id = ?", new String[] { bean.id(), bean.otherID() });
}
This code works fine, but it takes a lot of time. How can I reduce this time, or bulk insert or update the records?
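One common way to speed this up (not from the original question) is to run the whole loop inside a single transaction, so SQLite commits once instead of once per row, and to try an UPDATE first and only INSERT when no row matched, which avoids the SELECT per record. A minimal sketch, assuming a List<Bean> beans and the same table and column names as above:

db.beginTransaction();
try {
    for (Bean bean : beans) {
        // Bean, beans and the column names are illustrative, mirroring the question's bean accessors.
        ContentValues values = new ContentValues();
        values.put("id", bean.id());
        values.put("other_id", bean.otherID());
        // ... put the remaining columns here ...
        // Try to update the existing row first.
        int updated = db.update("tableName", values,
                "id = ? and other_id = ?",
                new String[] { bean.id(), bean.otherID() });
        // No row matched, so insert a new one.
        if (updated == 0) {
            db.insert("tableName", null, values);
        }
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}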

Related

postgresql RETURNING in zf2 table gateway

How would one add a RETURNING clause to an insert via TableGateway?
INSERT INTO users (name, age) VALUES ('Liszt', 10) RETURNING id;
$dataArray = array('name'=> 'Liszt','age' => 10);
$this->tableGateway->insert($dataArray);
$userId = $this->tableGateway->lastInsertValue;
Another method is:
$userId = $this->tableGateway->getLastInsertValue();
If you want to get the last insert ID in PostgreSQL when inserting through a TableGateway, you have to use SequenceFeature.
$myTable = new TableGateway('table_name', $dbAdapter, new Feature\SequenceFeature('primary_key', 'sequence_name'));
$id = $myTable->insert(array(/*your data*/));

Combinations of Where Criteria - Still parameterized query - Dapper

I have a Dapper query as follows
public void GetAllCustomers(string customerId, string firstName, string lastName, string gender)
{
    List<TblCustomer> customers;
    using (var sqlConnection = new SqlConnection("DatabaseConnectionString"))
    {
        sqlConnection.Open();
        //customers = sqlConnection.Query<TblCustomer>("SELECT * FROM tblCustomer WHERE CustomerId = @CustomerID AND FirstName = @FirstName ...", new { CustomerID = customerId, ... }).ToList();
        customers = sqlConnection.Query<TblCustomer>("SELECT * FROM tblCustomer WHERE CustomerId = @CustomerID", new { CustomerID = customerId }).ToList();
        sqlConnection.Close();
    }
}
The question is how to build the query. In the above method, the user can provide a value for any parameter they wish to query on; if a parameter value is blank, it will not be used in the WHERE criteria. All supplied parameters will be combined in the WHERE clause with AND operations.
Without Dapper it is easy to build a dynamic query by concatenating the SQL statement depending on the supplied parameters. How can I build these queries in Dapper without compromising the parameterization?
Thank you,
Ganesh
string sql = "SELECT * FROM tblCustomer " +
"WHERE CustomerId = #CustomerID AND FirstName = #FirstName"; // ...
var parameters = new DynamicParameters();
parameters.Add("CustomerId", customerID);
parameters.Add("FirstName", firstName);
// ...
connection.Execute(sql, parameters);
You would do it similar to how you build a dynamic query: build your SQL string dynamically (based on user input), only including filters in the WHERE clause as needed. Example:
var query = new StringBuilder("select * from users where ");
if (!string.IsNullOrEmpty(firstname)) query.Append("FirstName = @FirstName ");
As far as passing in the parameters, you can either construct an object that includes all of your possible parameters with values to pass in:
new {FirstName = "John", LastName = "Doe"}
or, if you only want to pass in parameters that will actually be used, you can build a Dictionary<string,object> that contains only those parameters you need to pass in:
new Dictionary<string,object> { {"FirstName", "John" } }

Creating a SQLite table row updating row again and again

I have created a table for my application. The first time, when the table is empty, the user gives input for two EditTexts (name and mobile number); after that, only the first row of the table should be updated, and so on.
A scenario:
add a new record with name:"name1", telephone: "123456789" --> new record
add a new record with name:"name2", telephone:"987654321" --> update the previously entered record.
If that is what you want, then:
be sure to always insert the new record with the same id as the previously inserted one.
use db.insertWithOnConflict() to insert new records, passing the value SQLiteDatabase.CONFLICT_REPLACE for the last parameter, conflictAlgorithm.
Sample Code
void Add_Contact(Person_Contact contact)
{
    db = this.getWritableDatabase();
    ContentValues values = new ContentValues();
    // SINGLE_ROW_ID is a constant holding the single row id that will be used, e.g. SINGLE_ROW_ID = 1
    values.put( KEY_ID, SINGLE_ROW_ID );
    values.put( KEY_NAME, contact.get_name() );           // Contact Name
    values.put( KEY_PH_NO, contact.get_phone_number() );  // Contact Phone
    // Inserting the row; an existing row with the same KEY_ID is replaced
    db.insertWithOnConflict( TABLE_CONTACTS, null, values, SQLiteDatabase.CONFLICT_REPLACE );
    db.close(); // Closing database connection
}
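A brief usage sketch (the Person_Contact constructor and the dbHelper variable are illustrative, not from the original answer): because every call writes the same KEY_ID, the second call replaces the first row instead of adding a new one.

// Person_Contact(String name, String phone) and dbHelper are assumed for illustration.
dbHelper.Add_Contact(new Person_Contact("name1", "123456789")); // inserts the single row
dbHelper.Add_Contact(new Person_Contact("name2", "987654321")); // replaces it, since KEY_ID is identical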

jpa avoid query

I have the following class:
@Entity
@Table(name = "table_order")
@IdClass(OrderPK.class)
public class Order {

    /** Unique identifier of the currency code in which the transaction was negotiated. */
    @Column(name = "TRADECURRE", nullable = false, length = 5)
    private String tradeCurrencyCode;

    /** Currency entity for the trade. */
    @ManyToOne(optional = true, fetch = FetchType.LAZY)
    @JoinColumns({
        @JoinColumn(name = "TRADECURRE", referencedColumnName = "codigo", updatable = false, insertable = false) })
    private Currency currencyEntity;

    // getters and setters here
}
Then I execute the following query:
StringBuilder jpaQuery = new StringBuilder();
StringBuilder whereClause = new StringBuilder();
jpaQuery.append("SELECT o, o.currencyEntity FROM Order o");
// the Query is created from the JPQL string via the EntityManager (em), not shown in the original snippet
Query query = em.createQuery(jpaQuery.toString());
List orders = query.getResultList();
At this point the JPA log shows two queries executed: one against the order table and one against the Currency table.
Below is the code I run next (in the same class and method as the previous code):
for (Object orderElement : orders) {
    int indexArray = 0;
    Object[] orderArray = (Object[]) orderElement;
    Order orderEntity = (Order) orderArray[indexArray++];
    orderEntity.setCurrencyEntity((Currency) orderArray[indexArray++]);
}
When the line
orderEntity.setCurrencyEntity((Currency) orderArray [indexArray++]);
is executed, the query against the currency table is run once again on the database. I need to avoid this query to fix some performance problems; I already have all the data in orderArray.
I'm using EclipseLink 1.1.
Thanks in advance.
This is happening because you haven't told JPA to pre-fetch the currencyEntity in the initial select (although I think that's what you were trying to do with SELECT o, o.currencyEntity). As a result, JPA has to fetch the currencyEntity each time round the loop, and it's a real performance killer.
The way to do this with JPA is with a fetch join (documented here). You'd write your query like this:
SELECT o from Order o LEFT JOIN FETCH o.currencyEntity
This also makes it easier to navigate the result set than with SELECT o, o.currencyEntity, since you'll only have a single entity returned, with its currencyEntity property intact:
List<Order> orders = query.getResultList();
for (Order order : orders) {
    // fully populated, without requiring another database query
    Currency ccy = order.getCurrencyEntity();
}
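For completeness, a minimal sketch of how that fetch-join query might be created (em stands for the EntityManager, which the original snippets do not show):

// em: the EntityManager (assumed; not shown in the original snippets)
Query query = em.createQuery("SELECT o FROM Order o LEFT JOIN FETCH o.currencyEntity");
// then iterate exactly as shown above; order.getCurrencyEntity() no longer triggers an extra query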

Fetching Cassandra row keys

Assume a Cassandra datastore with 20 rows, with row keys named "r1" .. "r20".
Questions:
How do I fetch the row keys of the first ten rows (r1 to r10)?
How do I fetch the row keys of the next ten rows (r11 to r20)?
I'm looking for the Cassandra analogy to:
SELECT row_key FROM table LIMIT 0, 10;
SELECT row_key FROM table LIMIT 10, 10;
Take a look at:
list<KeySlice> get_range_slices(keyspace, column_parent, predicate, range, consistency_level)
Where your KeyRange tuple is (start_key, end_key) == (r1, r10)
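A minimal Thrift paging sketch, not from either original answer, assuming the same client and keyspace setup as the Thrift example further below and an illustrative column family name; note that under the default RandomPartitioner the "first ten" rows come back in token order, not key order:

// First page: keys of the first ten rows (empty start/end key means the whole ring).
// "MyColumnFamily" is an illustrative column family name.
SlicePredicate predicate = new SlicePredicate();
predicate.setSlice_range(new SliceRange(ByteBuffer.wrap(new byte[0]), ByteBuffer.wrap(new byte[0]), false, 1));
KeyRange keyRange = new KeyRange();
keyRange.setStart_key(new byte[0]);
keyRange.setEnd_key(new byte[0]);
keyRange.setCount(10);
List<KeySlice> firstPage = client.get_range_slices(new ColumnParent("MyColumnFamily"), predicate, keyRange, ConsistencyLevel.ONE);
// Next page: reuse the last key of the previous page as the new start_key.
// get_range_slices includes the start_key, so ask for 11 and skip the first result.
keyRange.setStart_key(firstPage.get(firstPage.size() - 1).getKey());
keyRange.setCount(11);
List<KeySlice> nextPage = client.get_range_slices(new ColumnParent("MyColumnFamily"), predicate, keyRange, ConsistencyLevel.ONE);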
Based on my tests there is no order for the rows (unlike columns). CQL 3.0.0 can retrieve row keys, but not distinct ones (there may be a way that I do not know of). In my case I do not know what my key range is, so I tried to retrieve all the keys with both Hector and Thrift and sort them afterwards. In a performance test with 100,000 columns and 200 rows, CQL 3.0.0 took about 500 milliseconds, Hector around 100 and Thrift about 50 milliseconds. My row key here is an integer. The Hector code follows:
public void queryRowkeys() {
    myCluster = HFactory.getOrCreateCluster(CLUSTER_NAME, "127.0.0.1:9160");
    ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
    ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
    myKeyspace = HFactory.createKeyspace(KEYSPACE_NAME, myCluster, ccl);
    RangeSlicesQuery<Integer, Composite, String> rangeSlicesQuery = HFactory.createRangeSlicesQuery(myKeyspace, IntegerSerializer.get(),
            CompositeSerializer.get(), StringSerializer.get());
    long start = System.currentTimeMillis();
    QueryResult<OrderedRows<Integer, Composite, String>> result =
            rangeSlicesQuery.setColumnFamily(CF).setKeys(0, -1).setReturnKeysOnly().execute();
    OrderedRows<Integer, Composite, String> orderedRows = result.get();
    ArrayList<Integer> list = new ArrayList<Integer>();
    for (Row<Integer, Composite, String> row : orderedRows) {
        list.add(row.getKey());
    }
    System.out.println((System.currentTimeMillis() - start));
    Collections.sort(list);
    for (Integer i : list) {
        System.out.println(i);
    }
}
This is the Thrift code:
public void retrieveRows() {
    try {
        transport = new TFramedTransport(new TSocket("localhost", 9160));
        TProtocol protocol = new TBinaryProtocol(transport);
        client = new Cassandra.Client(protocol);
        transport.open();
        client.set_keyspace("prefdb");
        ColumnParent columnParent = new ColumnParent("events");
        SlicePredicate predicate = new SlicePredicate();
        predicate.setSlice_range(new SliceRange(ByteBuffer.wrap(new byte[0]), ByteBuffer.wrap(new byte[0]), false, 1));
        KeyRange keyRange = new KeyRange(); // Get all keys
        keyRange.setStart_key(new byte[0]);
        keyRange.setEnd_key(new byte[0]);
        long start = System.currentTimeMillis();
        List<KeySlice> keySlices = client.get_range_slices(columnParent, predicate, keyRange, ConsistencyLevel.ONE);
        ArrayList<Integer> list = new ArrayList<Integer>();
        for (KeySlice ks : keySlices) {
            list.add(ByteBuffer.wrap(ks.getKey()).getInt());
        }
        Collections.sort(list);
        System.out.println((System.currentTimeMillis() - start));
        for (Integer i : list) {
            System.out.println(i);
        }
        transport.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
First, you should modify cassandra.yaml (this applies to Cassandra 1.1.0) and set the following:
partitioner: org.apache.cassandra.dht.ByteOrderedPartitioner
Second, you should define the schema as follows:
create keyspace DEMO with placement_strategy =
  'org.apache.cassandra.locator.SimpleStrategy' and
  strategy_options = [{replication_factor:1}];
use DEMO;
create column family Users with comparator = AsciiType and
  key_validation_class = LongType and
  column_metadata = [
    { column_name: aaa, validation_class: BytesType },
    { column_name: bbb, validation_class: BytesType },
    { column_name: ccc, validation_class: BytesType }
  ];
Finally, you can insert data into Cassandra and run range queries over the row keys.