Inserting columns in an SQLite database using Android Studio - android-sqlite

Inserting columns in an SQLite database using Android Studio, and how to set some columns as AUTOINCREMENT and primary keys

If you set the column type as INTEGER PRIMARY KEY and you do not specify a value when inserting a row, the column is assigned an unused 64-bit signed integer, usually one greater than the current largest value.
You very likely do not need the AUTOINCREMENT keyword (it only guarantees that rowids are never reused over the life of the table, tracked via the sqlite_sequence table, and thus has overhead in determining each new value).
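The rowid assignment described above is easy to verify outside Android, since Python's standard sqlite3 module drives the same engine; a minimal sketch:

```python
import sqlite3

# In-memory database; the same assignment rules apply to Android's SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (_id INTEGER PRIMARY KEY, name TEXT)")

# No _id supplied: SQLite picks an unused integer, normally max(_id) + 1.
conn.execute("INSERT INTO demo (name) VALUES ('first')")
conn.execute("INSERT INTO demo (name) VALUES ('second')")

# An explicit _id is honoured; the next automatic one continues from the max.
conn.execute("INSERT INTO demo (_id, name) VALUES (100, 'explicit')")
conn.execute("INSERT INTO demo (name) VALUES ('after explicit')")

print([row[0] for row in conn.execute("SELECT _id FROM demo ORDER BY _id")])
# [1, 2, 100, 101]
```

Note that without AUTOINCREMENT, a rowid freed by deleting the highest row can be handed out again; AUTOINCREMENT is only needed if that reuse matters.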
Frequently _id is used as the name of this column, so you will often see
db.execSQL("CREATE TABLE tablename (_id INTEGER PRIMARY KEY, column_name column_type, ...more column_name / column_types as required...);");
As an example, the following code uses a subclass of SQLiteOpenHelper (not required, but frequently used), which needs an onCreate method (called when the database is created, e.g. the very first time the helper is used) and an onUpgrade method (called when the version number is increased).
This code will create, if need be, a database called mydb (SQLiteOpenHelper uses the supplied name as-is for the database file name).
Note that if need be means just once (with some rare exceptions) unless the database is deleted; i.e. onCreate is not called every time an instance of the helper is constructed, only when the database file itself doesn't exist (again, rare exceptions apply).
It will then, when creating the database, create a table named testfloat. The table consists of two columns, _id and myfloat.
The _id column type is INTEGER PRIMARY KEY; if no value is specified for the column when inserting a row, it will be given a unique incrementing integer (1 at first, then 2, and so on).
The myfloat column is of type FLOAT (perhaps check out Datatypes In SQLite Version 3 for SQLite's flexibility regarding datatypes); if a value isn't given when inserting a row, a default of 0.0 is used.
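That DEFAULT 0.0 behaviour can be checked directly with the same engine via Python's standard sqlite3 module; a minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FLOAT is accepted by SQLite and given REAL affinity; DEFAULT 0.0 fills
# the column in whenever an INSERT omits it.
conn.execute(
    "CREATE TABLE testfloat (_id INTEGER PRIMARY KEY, myfloat FLOAT DEFAULT 0.0)")
conn.execute("INSERT INTO testfloat (myfloat) VALUES (1.3)")
conn.execute("INSERT INTO testfloat DEFAULT VALUES")  # myfloat falls back to 0.0

rows = list(conn.execute("SELECT _id, myfloat FROM testfloat ORDER BY _id"))
print(rows)  # [(1, 1.3), (2, 0.0)]
```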
public class MyDBHelper extends SQLiteOpenHelper {

    public static final String DBNAME = "mydb";
    public static final int DBVERSION = 1;
    public static final String TESTFLOATTABLE = "testfloat";
    public static final String STDIDCOL = "_id INTEGER PRIMARY KEY";
    public static final String MYFLOATCOL = "myfloat";
    public static final String MYFLOATTYPE = " FLOAT DEFAULT 0.0";

    public MyDBHelper(Context context) {
        super(context, DBNAME, null, DBVERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // Runs only when the database file is first created
        db.execSQL("CREATE TABLE " +
                TESTFLOATTABLE +
                "(" +
                STDIDCOL +
                "," +
                MYFLOATCOL +
                MYFLOATTYPE +
                ")");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    }

    // Inserts a row, letting SQLite assign the _id;
    // returns the new rowid, or -1 on failure
    public long insertRow(double myfloatvalue) {
        SQLiteDatabase db = getWritableDatabase();
        ContentValues cv = new ContentValues();
        cv.put(MYFLOATCOL, myfloatvalue);
        return db.insert(TESTFLOATTABLE, null, cv);
    }

    // Returns a Cursor containing all columns of all rows
    public Cursor getAllMyFloats() {
        SQLiteDatabase db = getReadableDatabase();
        return db.query(TESTFLOATTABLE, null, null, null, null, null, null);
    }
}
The code above has an empty method for onUpgrade. Additionally there are two methods: insertRow (to insert a row) and getAllMyFloats (to return a Cursor containing, in this case, all rows and all columns).
In your invoking activity you could do something along the lines of :-
MyDBHelper mydbhelper = new MyDBHelper(this);
mydbhelper.insertRow(1.3);
mydbhelper.insertRow(1);
mydbhelper.insertRow(5.674389123459834);
The first line gets a MyDBHelper instance; when first run this will create the database, calling the onCreate method and thus creating the table.
The next three lines invoke the insertRow method, causing 3 rows to be added: the first will have 1 in the _id column, the next 2, and so on. (Note that if the app is rerun, 3 additional rows will be added each time; the code is for demonstration only.)
The following code (which follows on from the code above) demonstrates obtaining and interrogating a Cursor:-
Cursor getfloats = mydbhelper.getAllMyFloats();
Log.d("TESTFLOAT", "Rows returned from getAllMyFloats = " + getfloats.getCount());
while (getfloats.moveToNext()) {
    Log.d("TESTFLOAT", "Via getString = " +
            getfloats.getString(getfloats.getColumnIndex(MyDBHelper.MYFLOATCOL)));
    Log.d("TESTFLOAT", "Via getFloat = " + Float.toHexString(
            getfloats.getFloat(getfloats.getColumnIndex(MyDBHelper.MYFLOATCOL))));
    Log.d("TESTFLOAT", "Via getDouble = " + Double.toString(
            getfloats.getDouble(getfloats.getColumnIndex(MyDBHelper.MYFLOATCOL))));
}
getfloats.close();
The first line invokes getAllMyFloats, which returns a Cursor.
The next line writes a log message detailing how many rows are in the resultant Cursor.
The while loop traverses the Cursor, if it contains any rows. For each row it gets the value of the myfloat column using several of the Cursor get methods (getString, getFloat, getDouble), to demonstrate how the value retrieved is affected by each. Note that instead of getfloats.getDouble(getfloats.getColumnIndex(column)), getfloats.getDouble(1) would equally work, as myfloat is at column offset 1 (offsets start at 0).
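The precision differences those getFloat/getDouble log lines show can be reproduced outside Android; a sketch with Python's sqlite3 and struct modules, where a 32-bit round-trip stands in for Cursor.getFloat:

```python
import sqlite3
import struct

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE testfloat (_id INTEGER PRIMARY KEY, myfloat FLOAT DEFAULT 0.0)")
conn.execute("INSERT INTO testfloat (myfloat) VALUES (5.674389123459834)")

# SQLite stores REAL values as 8-byte IEEE doubles, so the full value
# survives the round trip to the database (as Cursor.getDouble would show).
stored = conn.execute("SELECT myfloat FROM testfloat").fetchone()[0]
print(stored)  # 5.674389123459834

# Squeezing it through a 32-bit float (as Cursor.getFloat does) discards
# everything beyond roughly 7 significant digits.
as_float = struct.unpack("f", struct.pack("f", stored))[0]
print(as_float != stored)  # True
```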

Related

Get return value from ExecuteSqlRaw in EF Core

I have an extremely large table that I'm trying to get the number of rows for. Using COUNT(*) is too slow, so I want to run this query using EF Core:
int count = _dbContext.Database.ExecuteSqlRaw(
    "SELECT Total_Rows = SUM(st.row_count) " +
    "FROM sys.dm_db_partition_stats st " +
    "WHERE object_name(object_id) = 'MyLargeTable' AND (index_id < 2)");
The only problem is that the return value isn't the result of the query but the number of records returned, which is just 1.
Is there a way to get the correct value here, or will I need to use a different method?
Since you only need a scalar value, you can also use an output parameter to retrieve the data, e.g.:
var sql = @"
    SELECT @Total_Rows = SUM(st.row_count)
    FROM sys.dm_db_partition_stats st
    WHERE object_name(object_id) = 'MyLargeTable' AND (index_id < 2)
";
var pTotalRows = new SqlParameter("@Total_Rows", System.Data.SqlDbType.BigInt);
pTotalRows.Direction = System.Data.ParameterDirection.Output;
db.Database.ExecuteSqlRaw(sql, pTotalRows);
var totalRows = (long?)(pTotalRows.Value == DBNull.Value ? null : pTotalRows.Value);
Allow me to recreate a correct answer based on this blog: https://erikej.github.io/efcore/2020/05/26/ef-core-fromsql-scalar.html
We need to create a virtual entity model for our database that will hold the needed query result, together with a pseudo DbSet<virtual model>, so that we can use EF Core's FromSqlRaw method, which returns data, instead of ExecuteSqlRaw, which just returns the number of rows affected by the query.
The example is for returning an integer value, but you can easily adapt it:
Define a class to hold the return value:
public class IntReturn
{
    public int Value { get; set; }
}
Fake a virtual DbSet<IntReturn>; it will not really be present in the database:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    ...
    modelBuilder.Entity<IntReturn>().HasNoKey();
    base.OnModelCreating(modelBuilder);
}
Now we can call FromSqlRaw for this virtual set. In this example the calling method is inside MyContext : DbContext; you'd need to instantiate your own context to use it instead of this.
NOTE the usage of "as Value" - the same name as the IntReturn.Value property. In some weird cases you'd have to do the opposite: name your virtual model property after the name of the value the database function is returning.
public int ReserveNextCustomerId()
{
    var sql = "Select nextval(pg_get_serial_sequence('\"Customers\"', 'Id')) as Value;";
    var i = this.Set<IntReturn>()
        .FromSqlRaw(sql)
        .AsEnumerable()
        .First().Value;
    return i;
}

Using Integer Array in postgres with Spring-boot

I am attempting to accept a List from the browser and use it within a SQL query to a Postgres database. The following code snippet tries to show the function I have made to do this. Some of the variables have been changed, in case there appear to be discrepancies.
static public List<Map<String,Object>> fetch(NamedParameterJdbcTemplate jdbcTemplate, List<Integer> id){
    List<Map<String,Object>> result = new ArrayList<>();
    String sql = "select * from lookup where id && ARRAY[ :ids ]";
    MapSqlParameterSource parameters = new MapSqlParameterSource();
    parameters.addValue("ids", id, Types.INTEGER);
    result = jdbcTemplate.query(sql,
        parameters,
        new RowMapper<Map<String,Object>>() { ...
        }
    )
}
The lookup table's id field is a Postgres array, hence my needing to use && and the ARRAY constructor.
This function is called by many different endpoints, which pass in the NamedParameterJdbcTemplate as well as a list of Integers. The problem I am having is that if any integer in the list is < 100, I get the following message:
Bad value for type int : {20}
Is there another way of doing this, or a way around this error?
EDIT:
It appears it was partly the problem mentioned in the answer, but also that I was using
rs.getInt(col)
instead of
rs.getArray(col)
There's an error I can see in the SQL, and probably the wrong choice of API after that. First in the query:
select * from lookup where id && ARRAY[ :ids ]
To bind an array parameter, it must not be placed inside the ARRAY constructor; rather, you need to bind it directly as a JDBC parameter, like this:
select * from lookup where id && ?
As you've noticed, I'm not using a named parameter in these examples, because NamedParameterJdbcTemplate does not provide a route to obtaining the java.sql.Connection object (or a proxy to it). You can access it through a PreparedStatementSetter if you use the JdbcOperations interface instead.
public static List<Map<String,Object>> fetch(NamedParameterJdbcTemplate jdbcTemplate, List<Integer> idlist){
    List<Map<String,Object>> result = new ArrayList<>();
    String sql = "select * from lookup where id && ?";
    final Integer[] ids = idlist.toArray(new Integer[0]);
    PreparedStatementSetter parameters = new PreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement stmt) throws SQLException {
            Connection conn = stmt.getConnection();
            // creating a java.sql.Array can only be done through the Connection
            java.sql.Array arr = conn.createArrayOf("integer", ids);
            // you can use setObject(1, arr, java.sql.Types.ARRAY) instead of setArray
            // in case the connection wrapper doesn't pass it on to the JDBC driver
            stmt.setArray(1, arr);
        }
    };
    JdbcOperations jdo = jdbcTemplate.getJdbcOperations();
    result = jdo.query(sql,
        parameters,
        new RowMapper<Map<String,Object>>() { ...
        }
    );
    return result;
}
There might be errors in the code, since I normally use a different set of APIs, and you need a try-catch block for java.sql.SQLException in that setValues function, but you should be able to handle it from here on.

How to set values in ItemPreparedStatementSetter for one to many mapping

I am trying to use JdbcBatchItemWriter for a domain object, RemittanceClaimVO. RemittanceClaimVO has a List of another domain object, ClaimVO.
public class RemittanceClaimVO {
    private long remitId;
    private List<ClaimVO> claims = new ArrayList<ClaimVO>();
    //setters and getters
}
So for each remit id there would be multiple claims, and I wish to use a single batch statement to insert all rows.
With plain JDBC, I used to write this object by putting values in batches like below,
List<ClaimVO> claims = remittanceClaimVO.getClaims();
if(claims != null && !claims.isEmpty()){
    for(ClaimVO claim : claims){
        int counter = 1;
        stmt.setLong(counter++, remittanceClaimVO.getRemitId());
        stmt.setLong(counter++, claim.getClaimId());
        stmt.addBatch();
    }
}
stmt.executeBatch();
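For comparison, the shape of that batch, one parameter set per (remitId, claimId) pair, can be sketched with Python's sqlite3 module, where executemany plays the role of addBatch()/executeBatch(); the table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE remit_claim (remit_id INTEGER, claim_id INTEGER)")

remit_id = 7
claim_ids = [101, 102, 103]

# One parameter tuple per claim, all sent as a single batch, mirroring the
# stmt.addBatch()/stmt.executeBatch() loop above.
conn.executemany(
    "INSERT INTO remit_claim (remit_id, claim_id) VALUES (?, ?)",
    [(remit_id, claim_id) for claim_id in claim_ids],
)
print(conn.execute("SELECT COUNT(*) FROM remit_claim").fetchone()[0])  # 3
```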
I am not sure how to achieve the same in Spring Batch using an ItemPreparedStatementSetter.
I have tried a similar loop in the setValues method, but the values are not getting set.
@Override
public void setValues(RemittanceClaimVO remittanceClaimVO, PreparedStatement ps) throws SQLException {
    List<ClaimVO> claims = remittanceClaimVO.getClaims();
    for(ClaimVO claim : claims){
        int counter = 1;
        ps.setLong(counter++, remittanceClaimVO.getRemitId());
        ps.setLong(counter++, claim.getClaimId());
    }
}
This seems to be another related question.
Please suggest.

Ebean Annotations - Using sequences to generate IDs in DB2

I'm trying to use sequences to generate incremented IDs for my tables in DB2. It works when I send SQL statements directly to the database, but the statement fails when going through Ebean. Here's the field in Java:
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "TABLENAME_IDNAME_TRIG")
@SequenceGenerator(name = "TABLENAME_IDNAME_TRIG", sequenceName = "TABLENAME_IDNAME_SEQ")
@Column(name = "IDNAME")
private Long id;
Here's the column in SQL (From TOAD):
Name Data type Not Null Default Generated Bit Data Scope Identity
IDNAME INTEGER Yes No No
And here's the sequence definition in SQL:
CREATE OR REPLACE SEQUENCE SCHEMA.TABLENAME_IDNAME_SEQ
AS INTEGER CACHE 50 ORDER;
And the trigger:
CREATE OR REPLACE TRIGGER SCHEMA.TABLENAME_IDNAME_TRIG
NO CASCADE BEFORE INSERT
ON TABLENAME
REFERENCING
NEW AS OBJ
FOR EACH ROW
BEGIN
SET obj.IDNAME=NEXT VALUE FOR SCHEMA.TABLENAME_IDNAME_SEQ;
END;
What is the issue with my annotations here? As an (important) side note: when I set GenerationType to AUTO, TABLE, or IDENTITY, it works, even though it shouldn't, because I'm also using this object to represent a parallel Oracle table which also uses sequences for ID generation.
Edited to include error message:
javax.persistence.PersistenceException: Error getting sequence nextval
...
Caused by: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-348, SQLSTATE=428F9, SQLERRMC=NEXTVAL FOR SCHEMA.TABLENAME_IDNAME_SEQ, DRIVER=4.19.49
EDIT 2: The specific SQL statement that is failing is:
values nextval for QA_CONNECTION_ICONNECTIONI_SEQ union values nextval for QA_CONNECTION_ICONNECTIONI_SEQ union values nextval for QA_CONNECTION_ICONNECTIONI_SEQ
which is SQL generated by Ebean. This is a smaller version of the real statement, in which the clause is repeated 20 times, so I'm guessing something goes wrong when generating the caching query.
EDIT 3: I believe this might be a bug in Ebean's use of DB2 sequences. This function generates SQL that returns an error for me when used with DB2:
public DB2SequenceIdGenerator(BackgroundExecutor be, DataSource ds, String seqName, int batchSize) {
    super(be, ds, seqName, batchSize);
    this.baseSql = "values nextval for " + seqName;
    this.unionBaseSql = " union " + baseSql;
}
EDIT 4: Based on this SO link I think it is a bug.
Can't insert multiple values into DB2 by using UNION ALL and generate IDs from sequence
The correct class probably looks like this (though I haven't ever tried building the library, so I couldn't test it; time to learn how to open a defect, I guess):
public class DB2SequenceIdGenerator extends SequenceIdGenerator {

    private final String baseSql;
    private final String unionBaseSql;
    private final String startSql;

    public DB2SequenceIdGenerator(BackgroundExecutor be, DataSource ds, String seqName, int batchSize) {
        super(be, ds, seqName, batchSize);
        this.startSql = "values ";
        this.baseSql = "(nextval for " + seqName + ")";
        this.unionBaseSql = ", " + baseSql;
    }

    public String getSql(int batchSize) {
        StringBuilder sb = new StringBuilder();
        sb.append(startSql);
        sb.append(baseSql);
        for (int i = 1; i < batchSize; i++) {
            sb.append(unionBaseSql);
        }
        return sb.toString();
    }
}
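The string-building part of that fix is easy to sanity-check in isolation; a small Python sketch of the same logic (the sequence name here is illustrative):

```python
def db2_sequence_sql(seq_name: str, batch_size: int) -> str:
    """Build a single DB2 statement fetching batch_size sequence values,
    using a multi-row VALUES clause instead of the failing UNION form."""
    base = "(nextval for " + seq_name + ")"
    return "values " + ", ".join([base] * batch_size)

print(db2_sequence_sql("MY_SEQ", 3))
# values (nextval for MY_SEQ), (nextval for MY_SEQ), (nextval for MY_SEQ)
```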
Temporary workaround for those interested: in ebean.properties, set
ebean.databaseSequenceBatchSize=1

Cannot drop a constraint in MS ACCESS

When using the SQL command :
ALTER TABLE [Sessions] DROP CONSTRAINT [SessionAttendance]
I get the exception error message "Could not find reference."
The constraint exists, and shows in the system table of constraints for this user table. How can I get this constraint to drop?
The database is in MS-ACCESS 2003 format, and the application uses JET 4.0. I have several hundred instances which will need schema updates. I have a utility program to generate the SQL, but it falls over when attempting the DROP CONSTRAINT action.
Answered via the implications of Gord Thompson's comment suggestions.
The ALTER statement was being applied to the wrong table in the relation.
The constraint was originally added to the Attendance table. However, it shows up as an attribute of the Sessions table when listed using the GetOleDbSchemaTable method.
Per the following code excerpt:
Structure Relation
    Public Name As String
    Public PrimaryTableName As String
    Public PrimaryField As String
    Public PrimaryIndex As String
    Public ForeignTable As String
    Public ForeignField As String
    Public OnUpdate As String
    Public OnDelete As String

    Public Overrides Function ToString() As String
        Dim msg As String = String.Format("Name:{0} PT:{1} PF:{2} PI:{3} FT:{4} FF:{5}", _
            Name, PrimaryTableName, PrimaryField, PrimaryIndex, ForeignTable, ForeignField)
        Return msg
    End Function
End Structure

Private Function ListRelations(tableName As String) As List(Of Relation)
    Dim relations As New List(Of Relation)
    Dim MySchemaTable As DataTable
    Dim dbConn As New OleDbConnection(connectionString)
    dbConn.Open()
    MySchemaTable = dbConn.GetOleDbSchemaTable(OleDbSchemaGuid.Foreign_Keys, _
        New Object() {Nothing, Nothing, tableName})
    Dim result As Boolean = False
    'List the relation details from each row in the schema table.
    For Each row As DataRow In MySchemaTable.Rows
        Dim r As New Relation
        r.Name = row("FK_NAME")
        r.PrimaryTableName = row("PK_TABLE_NAME")
        r.PrimaryField = row("PK_COLUMN_NAME")
        r.PrimaryIndex = row("PK_NAME")
        r.ForeignTable = row("FK_TABLE_NAME")
        r.ForeignField = row("FK_COLUMN_NAME")
        r.OnUpdate = row("UPDATE_RULE")
        r.OnDelete = row("DELETE_RULE")
        Console.WriteLine(r.ToString)
        relations.Add(r)
    Next
    MySchemaTable.Dispose()
    dbConn.Close()
    dbConn.Dispose()
    Return relations
End Function