Uploading data using C# console application - c#-3.0

I have a C# console application that uploads data into a SQL Server database after doing a bit of calculation, which is done using various C# functions. The problem is that it takes almost 1 second to calculate and upload one line of data, and I have to upload 50,000 lines of data in the same way.
Please suggest a way to solve this problem.
P.S.: I am using a StringBuilder to compose the separate insert statements and upload them in bulk. That part takes only about 1 minute.
Inserting or updating the database is hardly taking any time, as mentioned above; the calculation is taking most of the time. A code sample of one of the functions is attached below:
public void EsNoMinLim()
{
    // load the country / serial number pairs to grade
    ds = new DataSet();
    ds = getDataSet("select aa.Country, aa.Serial_No from UEM_Data aa inner join (select distinct " +
        "IId, Country from UEM_Data where Active_Status is null) bb on aa.iid = bb.iid where aa.Serial_No <> '0'").Copy();

    execDML("Delete from ProMonSys_Grading");

    StringBuilder strCmd = new StringBuilder();
    foreach (DataRow dRow in ds.Tables[0].Rows)
    {
        SiteCode = dRow["Country"].ToString();
        Serial_No = dRow["Serial_No"].ToString();

        // per-row lookup: Es/No absolute limit for the latest FEC rate of this serial number
        ds_sub = new DataSet();
        ds_sub = getDataSet("select EsNo_Abs_Limit from EsNo_Absolute_Limit where Fec_Coding_Rate in " +
            "(select MODCOD from FEC_Master where NMS_Value in (select Top 1 FEC_Rate from " +
            "DNCC_Billing_Day where Serial_No = '" + Serial_No + "' and [Date] = (select max([Date]) " +
            "from DNCC_Billing_Day where Serial_No = '" + Serial_No + "')))").Copy();
        if (ds_sub.Tables[0].Rows.Count > 0 && Convert.ToString(ds_sub.Tables[0].Rows[0][0]) != "")
        {
            Min_EsNo = Convert.ToString(ds_sub.Tables[0].Rows[0][0]);
        }
        else
        {
            Min_EsNo = "a";
        }

        if (Min_EsNo != "a")
        {
            // per-row lookup: latest modal average Es/No
            ds_sub = new DataSet();
            ds_sub = getDataSet("select Top 1 modal_Avg_EsNo from DNCC_Billing_Day where " +
                "Serial_No = '" + Serial_No + "' and [Date] = (select max([Date]) from DNCC_Billing_Day " +
                "where Serial_No = '" + Serial_No + "')").Copy();
            if (ds_sub.Tables[0].Rows.Count > 0 && Convert.ToString(ds_sub.Tables[0].Rows[0][0]) != "")
            {
                Avg_EsNo = Convert.ToString(ds_sub.Tables[0].Rows[0][0]);
            }
            else
            {
                Avg_EsNo = "-1";
            }

            // per-row lookup: transmit power threshold
            ds_sub = new DataSet();
            ds_sub = getDataSet("select Top 1 Transmit_Power from ProMonSys_Threshold where Serial_No = '" + Serial_No + "'").Copy();
            if (ds_sub.Tables[0].Rows.Count > 0 && Convert.ToString(ds_sub.Tables[0].Rows[0][0]) != "")
            {
                Threshold_EsNo = Convert.ToString(ds_sub.Tables[0].Rows[0][0]);
            }
            else
            {
                Threshold_EsNo = "-1";
            }

            // grade the row and queue the insert statement
            getGrade = EsNoSQFGrading(Min_EsNo, Avg_EsNo, Threshold_EsNo);
            strCmd.Append("insert into ProMonSys_Grading(SiteCode, Serial_No, EsNo_Grade) " +
                "values('" + SiteCode + "','" + Serial_No + "','" + getGrade + "')");
        }
    }
    execDML_StringBuilder(strCmd);
}

Find out which part of the process is the expensive one. Use a Stopwatch to check how long loading, calculating and saving each take separately. Then you know which part to improve (and can tell us).
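For example, here is a minimal sketch of that measurement against the loop in EsNoMinLim() above (the stopwatch variable names are only illustrative):
var loadTime = new System.Diagnostics.Stopwatch();
var calcTime = new System.Diagnostics.Stopwatch();
var saveTime = new System.Diagnostics.Stopwatch();

foreach (DataRow dRow in ds.Tables[0].Rows)
{
    loadTime.Start();
    // ... the per-row getDataSet(...) calls ...
    loadTime.Stop();

    calcTime.Start();
    // ... EsNoSQFGrading(Min_EsNo, Avg_EsNo, Threshold_EsNo) ...
    calcTime.Stop();

    saveTime.Start();
    // ... strCmd.Append(...) ...
    saveTime.Stop();
}

Console.WriteLine("load: {0}  calc: {1}  save: {2}", loadTime.Elapsed, calcTime.Elapsed, saveTime.Elapsed);
Comparing the three totals after one run shows where the time is actually going.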

Related

How to send unbounded TableResult to Kafka sink?

I am using the Table API to create two streams, let's call them A and B. Using executeSql I am joining the two tables. The output is in the form of a TableResult. I want to send the joined result to a Kafka sink. Please find the code below.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

String ddlUser = "CREATE TABLE UserTable (\n" +
        "id BIGINT,\n" +
        "name STRING\n" +
        ") WITH (\n" +
        "'connector' = 'kafka',\n" +
        "'topic' = 'USER',\n" +
        "'properties.bootstrap.servers' = 'pkc:9092',\n" +
        "'properties.group.id' = 'testGroup',\n" +
        "'scan.startup.mode' = 'earliest-offset',\n" +
        "'format' = 'json',\n" +
        "'properties.security.protocol' = 'SASL_SSL',\n" +
        "'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.plain.PlainLoginModule required username=\"\" password=\"\";',\n" +
        "'properties.sasl.mechanism' = 'PLAIN'\n" +
        ")";
tEnv.executeSql(ddlUser);

String ddlPurchase = "CREATE TABLE PurchaseTable (\n" +
        "transactionId BIGINT,\n" +
        "userID BIGINT,\n" +
        "item STRING\n" +
        ") WITH (\n" +
        "'connector' = 'kafka',\n" +
        "'topic' = 'PURCHASE',\n" +
        "'properties.bootstrap.servers' = 'pkc:9092',\n" +
        "'properties.group.id' = 'purchaseGroup',\n" +
        "'scan.startup.mode' = 'earliest-offset',\n" +
        "'format' = 'json',\n" +
        "'properties.security.protocol' = 'SASL_SSL',\n" +
        "'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.plain.PlainLoginModule required username=\"\" password=\"\";',\n" +
        "'properties.sasl.mechanism' = 'PLAIN'\n" +
        ")";
tEnv.executeSql(ddlPurchase);

String useQuery = "SELECT * FROM UserTable";
String purchaseQuery = "SELECT * FROM PurchaseTable JOIN UserTable ON PurchaseTable.userID = UserTable.id";
TableResult joinedData = tEnv.executeSql(purchaseQuery);
How to send unbounded TableResult to Kafka sink?
You need to insert into a destination table that is also backed by the Kafka connector: https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/common/#emit-a-table
In that example they create a temporary table, but as you have already done, you can create a table with the Kafka connector (https://nightlies.apache.org/flink/flink-docs-stable/docs/connectors/table/kafka/) and have the stream inserted into it. I haven't tested it, but it should be something like this:
tEnv.sqlQuery(purchaseQuery).insertInto("DestinationTable")
or
tEnv.executeSql("INSERT INTO DestinationTable SELECT * FROM PurchaseTable JOIN UserTable ON PurchaseTable.userID = UserTable.id")

How to use android SQLITE SELECT with two parameters?

This code returns an empty cursor. What is wrong here?
The data is already there in the SQLite database.
public static final String COL_2 = "ID";
public static final String COL_3 = "TYPE";

public Cursor checkData(String id, String type) {
    SQLiteDatabase db = getWritableDatabase();
    Cursor res = db.rawQuery("SELECT * FROM " + TABLE_NAME + " WHERE " + COL_2 + " = " + id + " AND " + COL_3 + " = " + type, null);
    return res;
}
When you pass strings as parameters, you must quote them inside the SQL statement.
But concatenating quoted string values into the SQL makes your code unsafe.
The recommended way to do it is with ? placeholders:
public Cursor checkData(String id, String type) {
    SQLiteDatabase db = getWritableDatabase();
    String sql = "SELECT * FROM " + TABLE_NAME + " WHERE " + COL_2 + " = ? AND " + COL_3 + " = ?";
    Cursor res = db.rawQuery(sql, new String[] {id, type});
    return res;
}
The parameters id and type are passed as a string array in the 2nd argument of rawQuery().
I finally solved it.
public Cursor checkData(String id, String type) {
    SQLiteDatabase db = getWritableDatabase();
    Cursor res = db.rawQuery("SELECT * FROM " + TABLE_NAME + " WHERE " + COL_2 + " = '" + id + "' AND " + COL_3 + " = '" + type + "'", null);
    return res;
}
If COL_3 is a string column, try this:
Cursor res = db.rawQuery("SELECT * FROM "+ TABLE_NAME + " WHERE " + COL_2 + " = " + id+ " AND " + COL_3 + " = '" + type + "'" , null);

Writing query with parameters to avoid SQL Injections

I have done that before, but in this case I have an INSERT query where the value of one column of the target table comes from the result of another query. Given that, I'm not sure if my parameterized query is formatted the right way.
Here is the original query, before the SQL injection fix:
cmd.CommandText += "insert into controlnumber (controlnumber, errorid)
values ('" + ControlNumber + "', (select errorid from error where
errordescription = '" + ErrorDescription + "' and errortype = '" +
ErrorType + "' + and applicationid = " + ApplicationID + " and statusid =
" + StatusID + " and userid = " + UserID + " and errortime = '" +
ErrorTime + "');";
This is the query after I tried to fix the SQL injection:
cmd.CommandText = "insert into ControlTable(ControlNumber, ErrorID)
values (#ControlNum, (select errorid from error where errordescription =
#ErrorDescription and errortype = #errorType and applicationid =
#ApplicationID and statusid = #StatusID and userid = #UserID and
errortime = #ErrorTime)"
This is where I add parameters:
.....
command.CommandType = CommandType.Text;
command.Parameters.AddWithValue("#ErrorDescription ", ErrorDesc);
command.Parameters.AddWithValue("#ControlNum", cntNumber);
command.Parameters.AddWithValue("#errorType",ErrorType);
command.Parameters.AddWithValue("#ApplicationID",AppID);
command.Parameters.AddWithValue("#StatusID",StatusID);
command.Parameters.AddWithValue("#UserID",UserID);
....
I'm just wondering if my CommandText is formatted the right way.
Thanks
try this:
cmd.CommandText = "insert into ControlTable(ControlNumber, ErrorID) " +
    "select #ControlNum, errorid from error " +
    "where errordescription = #ErrorDescription and errortype = #errorType " +
    "and applicationid = #ApplicationID and statusid = #StatusID " +
    "and userid = #UserID and errortime = #ErrorTime";
When using INSERT INTO ... SELECT ... FROM, you do not use the VALUES keyword. The syntax is:
INSERT INTO TABLE(columns) SELECT ... FROM TABLE2
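For illustration, here is a minimal sketch of the whole command in that form. It assumes System.Data.SqlClient with an open SqlConnection named conn (an assumption, not shown in the question) and reuses the variable names from the question; note that SqlCommand parameter names use the @ prefix:
// assumes an open SqlConnection "conn"; variable names taken from the question
using (var cmd = new SqlCommand(
    "insert into ControlTable(ControlNumber, ErrorID) " +
    "select @ControlNum, errorid from error " +
    "where errordescription = @ErrorDescription and errortype = @ErrorType " +
    "and applicationid = @ApplicationID and statusid = @StatusID " +
    "and userid = @UserID and errortime = @ErrorTime", conn))
{
    cmd.Parameters.AddWithValue("@ControlNum", cntNumber);
    cmd.Parameters.AddWithValue("@ErrorDescription", ErrorDesc);
    cmd.Parameters.AddWithValue("@ErrorType", ErrorType);
    cmd.Parameters.AddWithValue("@ApplicationID", AppID);
    cmd.Parameters.AddWithValue("@StatusID", StatusID);
    cmd.Parameters.AddWithValue("@UserID", UserID);
    cmd.Parameters.AddWithValue("@ErrorTime", ErrorTime);
    cmd.ExecuteNonQuery();
}
AddWithValue infers the parameter types from the supplied values; if the column types matter (for example errortime), Parameters.Add with an explicit SqlDbType is the stricter choice.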

ESPER: 'Partition by' CLAUSE ERROR

The issue I have is with the 'partition by' clause in match_recognize: it seems to support only 99 different partitions, because when I have 100 or more different devices it no longer groups correctly. To test this I have the following EPL query:
select * from TemperatureSensorEvent
match_recognize (
    partition by id
    measures A.id as a_id, A.temperature as a_temperature
    pattern (A)
    define
        A as prev(A.id) is null
)
I am using this query basically to get the first event (first temperature) of each device. Testing with 10, 20, 50, ... 99 different devices it works fine, but when I have more than 99 it seems that Esper resets all the events sent before the device with id=100, and if I then send an event for the device with id=001, Esper takes it as if it were the first event.
It looks as if 'partition by' supports only 99 different partitions, and adding one more resets the statement or something like that. Is this a restriction of the 'partition by' clause? How can I increase this threshold, given that I have more than 100 devices?
ESPER version: 5.1.0
Thanks in advance
Demo Class:
public class EsperDemo
{
    public static void main(String[] args)
    {
        Configuration config = new Configuration();
        config.addEventType("TemperatureSensorEvent", TemperatureSensorEvent.class.getName());
        EPServiceProvider esperProvider = EPServiceProviderManager.getProvider("EsperDemoEngine", config);
        EPAdministrator administrator = esperProvider.getEPAdministrator();
        EPRuntime esperRuntime = esperProvider.getEPRuntime();

        // query to get the first event of each temperature sensor
        String query = "select * from TemperatureSensorEvent "
                + "match_recognize ( "
                + " partition by id "
                + " measures A.id as a_id, A.temperature as a_temperature "
                + " after match skip to next row "
                + " pattern (A) "
                + " define "
                + " A as prev(A.id) is null "
                + ")";

        TemperatureSubscriber temperatureSubscriber = new TemperatureSubscriber();
        EPStatement cepStatement = administrator.createEPL(query);
        cepStatement.setSubscriber(temperatureSubscriber);

        TemperatureSensorEvent temperature;
        Random random = new Random();
        int sensorsQuantity = 100; // it works fine until 99 sensors

        for (int i = 1; i <= sensorsQuantity; i++) {
            temperature = new TemperatureSensorEvent(i, random.nextInt(20));
            System.out.println("Sending temperature: " + temperature.toString());
            esperRuntime.sendEvent(temperature);
        }

        temperature = new TemperatureSensorEvent(1, 64);
        System.out.println("Sending temperature: sensor with id=1 again: " + temperature.toString());
        esperRuntime.sendEvent(temperature);
    }
}

ERROR: duplicate key value violates unique constraint "xak1fact_dim_relationship"

I am getting the below error while deleting some rows and then updating the table based on a condition, from Java. My database is PostgreSQL 8.4. Below is the error:
ERROR: duplicate key value violates unique
constraint "xak1fact_dim_relationship"
The code causing this issue is below:
/**
 * Commits job. This does the following:
 * <OL>
 * <LI> cancel previous datamart states </LI>
 * <LI> drop disabled derived objects </LI>
 * <LI> flip mirror relationships for objects with next states </LI>
 * <LI> advance rolloff_state from start1 to complete1, from start2 to complete </LI>
 * <LI> set 1/0 internal states to 0/-1 </LI>
 * <LI> remove header objects with no letter rows </LI>
 * <LI> mark mirror rel as OS if children are missing (e.g., semantic w/o agg build) </LI>
 * <LI> mark mirror rel as OS if int-map not in sync with dim base (e.g., int-map SQL w/o semantic) </LI>
 * </OL>
 */
protected void CommitJobUnprotected()
    throws SQLException
{
    if (_repl.epiCenterCon == null)
        throw makeReplacementError(0);

    boolean oldAutoCommitStatus = _repl.epiCenterCon.getAutoCommit();
    try
    {
        _repl.epiCenterCon.setAutoCommit(false);
        Statement stmt = null;
        boolean committed = false;
        synchronized (SqlRepl.metaChangeLock)
        {
            try
            {
                stmt = _repl.epiCenterCon.createStatement();

                // update internal states for fact_dim_relationship
                metaExec(stmt, "DELETE from fact_dim_relationship WHERE internal_state = -1 AND " +
                        " EXISTS (SELECT 1 FROM fact_dim_relationship WHERE internal_state = 1)",
                        " SELECT 1 from fact_dim_relationship WHERE internal_state = -1 AND " +
                        " EXISTS (SELECT 1 FROM fact_dim_relationship WHERE internal_state = 1)"); /*1*/

                metaExec(stmt, "UPDATE fact_dim_relationship SET internal_state = internal_state - 1 " +
                        " WHERE EXISTS (SELECT 1 FROM fact_dim_relationship inner1 " +
                        " WHERE inner1.internal_state = 1 " +
                        " AND inner1.fact_tbl_key = fact_dim_relationship.fact_tbl_key " +
                        " AND inner1.dim_base_key = fact_dim_relationship.dim_base_key ) ",
                        " SELECT 1 FROM fact_dim_relationship " +
                        " WHERE EXISTS (SELECT 1 FROM fact_dim_relationship inner1 " +
                        " WHERE inner1.internal_state = 1 " +
                        " AND inner1.fact_tbl_key = fact_dim_relationship.fact_tbl_key " +
                        " AND inner1.dim_base_key = fact_dim_relationship.dim_base_key ) "); /*5*/

                System.out.println("Update done on fact_dim_relationship");

                _repl.doDrop(SqlReplLogger.DB_META, stmt, "fact_agg", "SELECT fact_agg_key FROM fact_agg f WHERE " +
                        " NOT EXISTS (SELECT 1 FROM fact_agg_letter l WHERE " +
                        " f.fact_agg_key = l.fact_agg_key) "); /*6*/
                _repl.doDrop(SqlReplLogger.DB_META, stmt, "dim_base_agg", "SELECT dim_base_agg_key FROM dim_base_agg d WHERE " +
                        " NOT EXISTS (SELECT 1 FROM dim_base_agg_letter l WHERE " +
                        " d.dim_base_agg_key = l.dim_base_agg_key) "); /*6*/

                CheckOutOfSync(stmt, "fact_agg", null); /*7*/
                CheckOutOfSync(stmt, "dim_base_agg", null); /*7*/

                metaExec(stmt, " update mirror_relationship set relation_to_current = 'Out Of Sync' " +
                        " where dim_col_intmap_key is not null " +
                        " and relation_to_current = 'One Back' " +
                        " and not exists ( " +
                        " select 1 " +
                        " from mirror_relationship m2, dim_col_view c, dim_col_intmap i " +
                        " where m2.dim_base_key = c.dim_base_key " +
                        " and c.dim_col_key = i.dim_col_key " +
                        " and i.dim_col_intmap_key = mirror_relationship.dim_col_intmap_key " +
                        " and m2.relation_to_current = 'One Back') ",
                        " SELECT 1 FROM mirror_relationship " +
                        " where dim_col_intmap_key is not null " +
                        " and relation_to_current = 'One Back' " +
                        " and not exists ( " +
                        " select 1 " +
                        " from mirror_relationship m2, dim_col_view c, dim_col_intmap i " +
                        " where m2.dim_base_key = c.dim_base_key " +
                        " and c.dim_col_key = i.dim_col_key " +
                        " and i.dim_col_intmap_key = mirror_relationship.dim_col_intmap_key " +
                        " and m2.relation_to_current = 'One Back') "); /*8*/

                // clean out the tables used by mombuilder, aggbuilder, and semantics
                metaExec(stmt, "delete from relation_intermediary", "select 1 from relation_intermediary");

                _repl.epiCenterCon.commit();
                committed = true;
            }
            finally
            {
                safeMetaRollbackIfNeeded(committed);
                _repl.safeClose(null, stmt);
            }
        } // end synchronized block
    }
    finally
    {
        _repl.epiCenterCon.setAutoCommit(oldAutoCommitStatus);
    }
}
The first DELETE statement runs fine, but while running the UPDATE it throws the above exception. We support SQL Server, Oracle and DB2, and the same code runs fine on those databases. We run these statements at the READ_COMMITTED isolation level with autocommit off, so that if anything fails in between we can safely roll back. If I run the above code with autocommit set to true, it works fine, but we should not do that. I suspect PostgreSQL's multi-version concurrency control; am I setting the isolation level wrongly? I can provide whatever information you need.
If it is only this particular set of queries that is affected, use SET CONSTRAINTS (the constraint must be declared DEFERRABLE for this to work):
BEGIN;
SET CONSTRAINTS xak1fact_dim_relationship DEFERRED;
-- Do your SQL
COMMIT;
If this is a very common case, you can change your database schema to declare the constraint as INITIALLY DEFERRED.