Vert.x with MySQL has low TPS performance - vert.x

I am trying to combine Vert.x with MySQL while doing some code refactoring. First I used vertx-jdbc-client, but the TPS was poor: about 400 transactions per second. Much of the time seemed to be wasted waiting to get a SQL connection, even though I closed the connection after each transaction commit/rollback. I then switched to vertx-mysql-client instead, with a similar result of about 480 transactions per second. I have gone through the official site documentation many times, followed the example code, and tried many tuning options, such as the verticle instance count, blockingHandler, executeBlocking, the connection pool size, TCP options, native transport, etc., but I can't figure out why the TPS is so low.
In contrast to Vert.x, when I use Spring Boot WebFlux, JPA, and Schedulers.parallel(), the TPS goes up to 1300 per second, which is very strange.
My legacy code used Spring Boot with Undertow, Servlet 3.1, and JPA; its TPS was rather low.
In my stress test, the Vert.x QPS goes up to 27000 per second if there is no MySQL query or transaction.
VertxOptions vertxOptions = new VertxOptions().setPreferNativeTransport(true);
Vertx vertx = Vertx.vertx(vertxOptions);
vertx.registerVerticleFactory(springVerticleFactory);
DeploymentOptions deploymentOptions = new DeploymentOptions().setInstances(CpuCoreSensor.availableProcessors());
vertx.deployVerticle(SpringVerticleFactory.PREFIX + ":" + MainVerticle.class.getName(), deploymentOptions);
@Slf4j
@Component
@Scope(SCOPE_PROTOTYPE)
@RequiredArgsConstructor
public class MainVerticle extends AbstractVerticle {
@Value("${server.port:8080}")
private Integer port;
private JDBCClient jdbcClient;
public void start(Promise<Void> startFuture) throws Exception {
/*Map<String, Object> properties = Maps.newHashMap();
properties.put("maxLifetime", 1700000);
properties.put("cachePrepStmts", true);
properties.put("prepStmtCacheSize", 250);
properties.put("prepStmtCacheSqlLimit", 2048);
properties.put("useServerPrepStmts", true);
properties.put("useLocalSessionState", true);
properties.put("rewriteBatchedStatements", true);
properties.put("cacheResultSetMetadata", true);
properties.put("cacheServerConfiguration", true);
properties.put("elideSetAutoCommits", true);
properties.put("maintainTimeStats", false);
JsonObject config = new JsonObject()
.put("jdbcUrl", "jdbc:mysql://xxxxxx:3306/db?characterEncoding=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&serverTimezone=GMT%2B8&sslMode=DISABLED&allowPublicKeyRetrieval=true&rewriteBatchedStatements=true")
.put("provider_class", "io.vertx.ext.jdbc.spi.impl.HikariCPDataSourceProvider")
.put("driverClassName", "com.mysql.cj.jdbc.Driver")
.put("username", "user")
.put("password", "user")
.put("minimumIdle", 10)
.put("maximumPoolSize", 10)
.put("datasource", properties);
jdbcClient = JDBCClient.createShared(vertx, config);*/
MySQLConnectOptions connectOptions = new MySQLConnectOptions()
.setPort(3306)
.setHost("xxxxxxx")
.setDatabase("db")
.setUser("user")
.setPassword("user")
.setCachePreparedStatements(true)
.setPreparedStatementCacheMaxSize(250)
.setPreparedStatementCacheSqlLimit(2048)
.setTcpFastOpen(true)
.setTcpKeepAlive(true)
.setTcpNoDelay(true)
.setTcpQuickAck(true)
.setReusePort(true);
PoolOptions poolOptions = new PoolOptions().setMaxSize(10);
MySQLPool client = MySQLPool.pool(vertx, connectOptions, poolOptions);
Router router = Router.router(vertx);
router.route().method(HttpMethod.POST).method(HttpMethod.PUT).consumes(MediaType.APPLICATION_FORM_URLENCODED_VALUE).handler(BodyHandler.create().setHandleFileUploads(false));
router.post("/api/test/post")
.handler(HTTPRequestValidationHandler.create()
.addFormParam("id", ParameterType.GENERIC_STRING, true)
.addFormParam("partition", ParameterType.GENERIC_STRING, true)
.addFormParam("code", ParameterType.GENERIC_STRING, true)
.addFormParam("amount", ParameterType.DOUBLE, true)
.addFormParam("num", ParameterType.GENERIC_STRING, true))
.handler(routingContext -> {
String id = routingContext.request().getFormAttribute("id");
String partition = routingContext.request().getFormAttribute("partition");
String code = routingContext.request().getFormAttribute("code");
BigDecimal amount = new BigDecimal(routingContext.request().getFormAttribute("amount"));
String num = routingContext.request().getFormAttribute("num");
client.begin(transactionAsyncResult -> {
if (transactionAsyncResult.failed()) {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
return;
}
Transaction tx = transactionAsyncResult.result();
tx.preparedQuery("select id from test_account where customer_id = ? and partition_id = ? and code = ? and user_type = ? and trans_type = ?")
.execute(Tuple.of(id, id, code, "xxx", "xxx"), rowSetAsyncResult -> {
if (rowSetAsyncResult.failed()) {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
return;
}
String accountId = rowSetAsyncResult.result().iterator().next().getString(0);
Tuple tuple = Tuple.of(amount, amount, accountId, amount);
tx.preparedQuery("UPDATE test_account SET available = available - ?, non_avalibale = non_avalibale + ? WHERE id = ? and available >= ?").execute(tuple, rowSetAsyncResult1 -> {
if (rowSetAsyncResult1.failed()) {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
return;
}
int rowcount = rowSetAsyncResult1.result().rowCount();
if (rowcount == 0) {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
return;
}
List<Tuple> jsonArrayList = Lists.newLinkedList();
Tuple jsonArray = getTuple();
jsonArrayList.add(jsonArray);
jsonArray = getTuple();
jsonArrayList.add(jsonArray);
tx.preparedQuery("INSERT INTO test_transaction_log (id, created_by, created_time, last_modified_by, last_modified_time, version, account_type, trans_type, qty_begin, qty_end, us_type, chg_type, code, cu_id, in_address, out_address, partition, record_type, remark, status, trans_amount, trans_number, tx_id, website, trans_type) " +
"VALUES (?, 'system', now(), 'system', now(), 1, ?, ?, ?, ?, ?, ?, ?, ?, NULL, NULL, ?, ?, ?, 'SUCCESS', ?, ?, NULL, 'zh', ?)").executeBatch(jsonArrayList, listAsyncResult -> {
if (listAsyncResult.failed()) {
tx.rollback(rollbackcontext -> {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
});
} else {
tx.commit(commitcontect -> {
routingContext.response().end(Json.encode(TestResult.SUCCESS));
});
}
});
});
});
});
/*jdbcClient.getConnection(result -> {
if (result.failed()) {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
return;
}
SQLConnection conn = result.result();
conn.setAutoCommit(false, voidAsyncResult -> {
if (voidAsyncResult.failed()) {
conn.close(h -> {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
});
return;
}
log.info("time spend -2 " + Thread.currentThread().getName() + "--" + (System.currentTimeMillis() - start));
JsonArray params = new JsonArray().add(id).add(partition).add(code).add("xxx").add("yyy");
conn.querySingleWithParams("select id from test_account where id = ? and partition_id = ? and code = ? and user_type = ? and trans_type = ?", params, jsonArrayAsyncResult -> {
if (jsonArrayAsyncResult.failed()) {
conn.close(h -> {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
});
return;
}
String id = jsonArrayAsyncResult.result().getString(0);
JsonArray uparams = new JsonArray().add(amount.doubleValue()).add(amount.doubleValue()).add(id).add(amount.doubleValue());
conn.updateWithParams("UPDATE test_account SET available = available - ?, non_avalibale = non_avalibale + ? WHERE id = ? and available >= ?", uparams, updateResultAsyncResult -> {
if (updateResultAsyncResult.failed()) {
conn.rollback(rollbackcontext -> {
conn.close(h -> {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
});
});
return;
}
log.info("time spend -4 " + Thread.currentThread().getName() + "--" + (System.currentTimeMillis() - start));
List<JsonArray> jsonArrayList = Lists.newLinkedList();
JsonArray jsonArray = getEntityArray();
jsonArrayList.add(jsonArray);
jsonArray = getEntityArray();
jsonArrayList.add(jsonArray);
conn.batchWithParams("INSERT INTO test_transaction_log (id, created_by, created_time, last_modified_by, last_modified_time, version, account_type, trans_type, qty_begin, qty_end, us_type, chg_type, code, cu_id, in_address, out_address, partition, record_type, remark, status, trans_amount, trans_number, tx_id, website, trans_type) " +
"VALUES (?, 'system', now(), 'system', now(), 1, ?, ?, ?, ?, ?, ?, ?, ?, NULL, NULL, ?, ?, ?, 'SUCCESS', ?, ?, NULL, 'zh', ?)", jsonArrayList, listAsyncResult -> {
if (listAsyncResult.failed()) {
conn.rollback(rollbackcontext -> {
conn.close(h -> {
routingContext.response().setStatusCode(500).end(Json.encode(TestResult.fail(Errors.OPERATE_FAILED)));
});
});
} else {
conn.commit(commitcontect -> {
conn.close(h -> {
routingContext.response().end(Json.encode(TestResult.SUCCESS));
});
});
}
});
});
});
});
});*/
});
router.route().failureHandler(routingContext -> {
log.warn("handler error", routingContext.failure());
routingContext.response().setStatusCode(500).end(Json.encode(Exceptions.map(routingContext.failure(), false)));
});
HttpServerOptions options = new HttpServerOptions()
.setTcpFastOpen(true).setTcpNoDelay(true)
.setTcpQuickAck(true).setReusePort(true);
vertx.createHttpServer(options).requestHandler(router).listen(port, http -> {
if (http.succeeded()) {
startFuture.complete();
} else {
log.error("http server start failed", http.cause());
startFuture.fail(http.cause());
}
});
}
}

That's not a definitive solution, but there are two problematic points I see in this code:
The connection pool is too small
Too much work is done on the event loop
For the first problem, note that you're doing three separate DB operations: select, update and insert, using the same transaction, meaning the same connection. This is one probable bottleneck you're hitting.
For the second problem, I would suggest breaking this into at least 3 separate verticles communicating over the event bus.
RequestHandler
Verticle to select the correct row
Verticle to perform operations on this row
That should also allow you to better interleave the work produced by different requests.
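A back-of-the-envelope check supports the pool-size point: every in-flight transaction holds one connection, so Little's law caps throughput at pool size divided by per-transaction latency. The sketch below is plain Java; the ~20 ms transaction latency is an assumed, illustrative figure, not a measurement from the question.

```java
public class PoolCeiling {
    // Little's law: max throughput = concurrency / latency.
    // With a 10-connection pool, every in-flight transaction holds one
    // connection, so concurrency can never exceed the pool size.
    static double maxTps(int poolSize, double txLatencySeconds) {
        return poolSize / txLatencySeconds;
    }

    public static void main(String[] args) {
        // Assumed: 10 connections, ~20 ms for the select+update+insert round trips.
        System.out.println((int) maxTps(10, 0.020)); // prints 500
    }
}
```

Under those assumptions the hard ceiling is about 500 TPS, right where the observed ~480 sits; a bigger pool or a shorter transaction is what raises it.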

Related

opc ua milo - How to monitor an attribute under a node and return all the attributes of the node?

I need to monitor an attribute (e.g. totalWp) in a node. If this attribute changes, I need to get all the other attributes (PT, ...) under this node. My code cannot get the expected result; please tell me what I should do.
protected CompletableFuture<UaSubscription> createValueSubscription(String deviceId) {
final CompletableFuture<UaSubscription> result = new CompletableFuture<>();
try {
Node node = this.getDeviceNode(deviceId);
NodeId parentId = node.getNodeId().get();
UaSubscriptionManager subscriptionManager = this.getOpcUaClient().getSubscriptionManager();
CompletableFuture<UaSubscription> subscriptionFuture = subscriptionManager.createSubscription(5000.0);
subscriptionFuture.whenComplete((subscription, e) -> {
if (e != null) {
result.completeExceptionally(e);
} else {
subscription.addNotificationListener(new UaSubscription.NotificationListener() {
@Override
public void onDataChangeNotification(UaSubscription subscription, List<UaMonitoredItem> monitoredItems, List<DataValue> dataValues, DateTime publishTime) {
Iterator<UaMonitoredItem> itemIterator = monitoredItems.iterator();
Iterator<DataValue> dataValueIterator = dataValues.iterator();
while (itemIterator.hasNext() && dataValueIterator.hasNext()) {
logger.info("--- subscription value received: item= " + itemIterator.next().getReadValueId().getNodeId()
+ ", value=" + dataValueIterator.next().getValue() + " ---");
}
}
});
NodeId valueId = OpcUaClientUtils.createDeviceAttributeId(parentId, "totalWp");
NodeId pt = OpcUaClientUtils.createDeviceAttributeId(parentId, "PT");
ReadValueId readTotalWpId = new ReadValueId(valueId, AttributeId.Value.uid(), null, null);
ReadValueId readPtId = new ReadValueId(pt, AttributeId.Value.uid(), null, null);
UInteger clientHandle = uint(clientHandles.getAndIncrement());
MonitoringParameters parameters = new MonitoringParameters(
clientHandle,
1000.0, // sampling interval
null, // filter, null means use default
Unsigned.uint(10), // queue size
true // discard oldest
);
MonitoredItemCreateRequest requestTotalWp = new MonitoredItemCreateRequest(readTotalWpId, MonitoringMode.Reporting, parameters);
MonitoredItemCreateRequest requestPt = new MonitoredItemCreateRequest(readPtId, MonitoringMode.Reporting, parameters);
// requests.add(requestPt);
CompletableFuture<List<UaMonitoredItem>> future =
subscription.createMonitoredItems(
TimestampsToReturn.Both,
newArrayList(requestTotalWp),
(item, id) -> onValueChanged(deviceId, item, id)
);
future.whenComplete((items, ex) -> {
if (ex == null) {
result.complete(subscription);
} else {
result.completeExceptionally(ex);
}
});
}
});
} catch (Exception e) {
result.completeExceptionally(e);
}
return result;
}
The above code only returns the monitored attribute; the other attributes are not returned.
You are only creating 1 MonitoredItem:
CompletableFuture<List<UaMonitoredItem>> future =
subscription.createMonitoredItems(
TimestampsToReturn.Both,
newArrayList(requestTotalWp),
(item, id) -> onValueChanged(deviceId, item, id)
);
If you want to receive changes for other Nodes then you need to create MonitoredItems for them as well.
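To see why, here is a plain-Java model of the registration logic (no Milo APIs involved): a subscription only delivers changes for the ids it was given MonitoredItems for, so registering requestTotalWp alone drops the PT update.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MonitoredItemsModel {
    // Model of the server side: only changes for registered ids are delivered.
    static List<String> delivered(List<String> monitored, Map<String, Object> changes) {
        return changes.keySet().stream()
                .filter(monitored::contains)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Object> changes = Map.of("totalWp", 42, "PT", 7);
        // Registering only totalWp, as in the question: PT never arrives.
        System.out.println(delivered(List.of("totalWp"), changes));       // [totalWp]
        // Registering both requests, as suggested above: both arrive.
        System.out.println(delivered(List.of("PT", "totalWp"), changes)); // [PT, totalWp]
    }
}
```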

sqflite database getting locked flutter - Warning database has been locked

I am using SQLite in my Flutter project and trying to figure out a DB-locked issue. In my scenario, the user tries to download new data once a day; if a record already exists it is updated, otherwise a new record is inserted. My problem is that even though I am using a transaction and a batch, I am getting a DB-locked error. The only issue I can think of is the getSongList() call, as it calls the DB outside of that transaction/batch multiple times, but that is a read call, and my code seems to fail during the batch commit.
buildDB1(List<MusicData> _list, int version) async {
await openDb();
try {
_database.transaction((txn) async {
Batch batch = txn.batch();
for (var i = 0; i < _list.length; i++) {
// buildBatch(_list[i]);
MusicData musicData = _list[i];
int id = musicData.id;
if (musicData.pdfpage == 0 || musicData.pdfpage == null) {
PDFPAGE = "0";
} else {
PDFPAGE = (musicData.pdfpage).toString();
}
if (musicData.linkid == 0 || musicData.linkid == null) {
LINKID = "0";
} else {
LINKID = (musicData.linkid).toString();
}
// PDFPAGE = musicData.pdfpage as String;
// LINKID = musicData.linkid as String;
TITLE = musicData.title;
ALBUM = musicData.album;
SONGURL = musicData.songURL;
HINDINAME = musicData.hindiName;
MNAME = musicData.mname;
MSIGN = musicData.msign;
OTHER1 = musicData.other1;
OTHER2 = musicData.other2;
ENAME = musicData.ename;
ESIGN = musicData.esign;
LANGUAGE = musicData.language;
SONGTEXT = musicData.songtext;
Future<List<MusicData>> list1 =
getSongList("select * from songs where id=$id");
List<MusicData> list = await list1;
if (list.length != 0) {
String updateSQL =
"UPDATE SONGS SET pdfpage = $PDFPAGE, linkid = $LINKID, title = '$TITLE', album = '$ALBUM', songURL = '$SONGURL', hindiName = '$HINDINAME', mname = '$MNAME', msign = '$MSIGN', other1 = '$OTHER1', other2 = '$OTHER2', ename = '$ENAME', esign = '$ESIGN', language = '$LANGUAGE',songtext = '$SONGTEXT' WHERE id = $id";
batch.rawUpdate(updateSQL);
// _database.rawUpdate(
// "UPDATE SONGS SET pdfpage = ?, linkid = ?, title = ?, album = ?, songURL = ?, hindiName = ?, mname = ?, msign = ?, other1 = ?, other2 = ?, ename = ?, esign = ?, language = ?,songtext = ? WHERE id = ?",
// [
// musicData.id,
// musicData.pdfpage,
// musicData.linkid,
// musicData.title,
// musicData.album,
// musicData.songURL,
// musicData.hindiName,
// musicData.mname,
// musicData.msign,
// musicData.other1,
// musicData.other2,
// musicData.ename,
// musicData.esign,
// musicData.language,
// musicData.songtext
// ]);
print("Record updated in db $id");
// _database.close();
} else {
String insertSQL =
"INSERT INTO SONGS (pdfpage, linkid, title,album,songURL,hindiName,mname,msign,other1,other2,ename,esign,language,songtext,isfav) VALUES ($PDFPAGE,$LINKID,'$TITLE','$ALBUM','$SONGURL','$HINDINAME','$MNAME','$MSIGN', '$OTHER1','$OTHER2','$ENAME','$ESIGN','$LANGUAGE','$SONGTEXT',0)";
batch.rawInsert(insertSQL);
// _database.insert('SONGS', musicData.toMap());
print("Record inserted in db $id");
}
}
Future<List> result = batch.commit();
});
SharedPreferences prefs = await SharedPreferences.getInstance();
await prefs.setInt('dbversion', version);
} catch (e) {
print(e);
}
}
getSongList should take a transaction argument. Basically, use txn instead of _database for any db calls during a transaction. Otherwise it will hang, and the warning is correct.
Also, you might be hitting a race condition, since you are not awaiting batch.commit before the end of the transaction. You can try to replace:
Future<List> result = batch.commit();
with
await batch.commit();
Using the pedantic package could warn you about the missing await here.
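The missing-await race is not sqflite-specific; it shows up with any future-returning API. A minimal plain-Java model of the same pattern (commitAsync here is a hypothetical stand-in for batch.commit(), not a sqflite API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AwaitCommit {
    // Hypothetical stand-in for sqflite's batch.commit(): an async operation
    // that completes on another thread.
    static CompletableFuture<String> commitAsync(ExecutorService pool) {
        return CompletableFuture.supplyAsync(() -> "committed", pool);
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CompletableFuture<String> commit = commitAsync(pool);
        // The equivalent of Dart's `await batch.commit()`: block until the
        // commit has really finished before leaving the transaction scope.
        System.out.println(commit.join()); // prints committed
        pool.shutdown();
    }
}
```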

Java Cast String to Date

How can I cast a java.lang.String to a java.sql.Date to get the following value: '12/03/2016'?
I tried this:
@Override
public int insertNewApplication(StudenDetails tpd)
{
String sql = "INSERT INTO student_details"
+ "(applicant_name, applsex, "
+ "designation, date_of_superannuation, "
+ "ou_code, residential_address, "
+ "phone_no, mobile_no,email_id,token_user_id,token_password, "
+ "staff_status,office_type_code) VALUES (?, ?, ?,to(?,'DD/MM/YYYY'), ?, ?, ?, ?, ?, ?, ?,?,?)";
return jdbcTemplate.update(sql, new Object[] {
tpd.getApplicant_name(), tpd.getApplsex(),tpd.getDesignation(),tpd.getDate_of_superannuation(),tpd.getOu_code(),tpd.getResidential_address(),tpd.getPhone_no(),tpd.getMobile_no(),
tpd.getEmail_id(),tpd.getToken_user_id(),tpd.getToken_password(),tpd.getStaff_status(),tpd.getOffice_type_code()
});
}
Unfortunately, there isn't a standard way, widely accepted between RDBMSs, for converting strings to dates. In Postgres, which you tagged, you could use the to_date function:
to_date(?, 'DD/MM/YYYY')
A better approach, IMHO, would be to do this conversion in Java, making your application much more portable:
private static final SimpleDateFormat FORMAT_CONVERTER = new SimpleDateFormat("dd/MM/yyyy");
@Override
public int insertNewApplication(StudenDetails tpd)
{
String sql = "INSERT INTO student_details"
+ "(applicant_name, applsex, "
+ "designation, date_of_superannuation, "
+ "ou_code, residential_address, "
+ "phone_no, mobile_no,email_id,token_user_id,token_password, "
+ "staff_status,office_type_code) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?,?,?)";
Date date = FORMAT_CONVERTER.parse(tpd.getDate_of_superannuation());
return jdbcTemplate.update(sql, new Object[] {
tpd.getApplicant_name(), tpd.getApplsex(),tpd.getDesignation(),date,tpd.getOu_code(),tpd.getResidential_address(),tpd.getPhone_no(),tpd.getMobile_no(),
tpd.getEmail_id(),tpd.getToken_user_id(),tpd.getToken_password(),tpd.getStaff_status(),tpd.getOffice_type_code()
});
}
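A self-contained version of that conversion, for a value like '12/03/2016' (pattern dd/MM/yyyy; note that SimpleDateFormat is not thread-safe, so a shared static instance like FORMAT_CONVERTER should not be used from multiple threads without synchronization):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class DateConvert {
    // Parses 'dd/MM/yyyy' text into a java.sql.Date suitable for JDBC.
    static java.sql.Date toSqlDate(String text) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat("dd/MM/yyyy");
        fmt.setLenient(false); // reject nonsense like 45/99/2016
        return new java.sql.Date(fmt.parse(text).getTime());
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(toSqlDate("12/03/2016")); // prints 2016-03-12
    }
}
```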

Handling concurrency exceptions when passing the objects ids and timestamps using jQuery

I have the following business scenario inside my ASP.NET MVC 4 asset management system:
Scenario 1) A user selects multiple servers, then selects a Rack Tag, and clicks Assign, so the selected servers will be assigned to the new rack.
Scenario 2) I want to check for any concurrency exception, for example if the selected servers have been modified by another user since they were retrieved.
So I have written the following jQuery, which will send the object ids + timestamps to the action method:
$('body').on("click", "#transferSelectedAssets", function () {
var boxData = [];
$("input[name='CheckBoxSelection']:checked").each(function () {
boxData.push($(this).val());
});
var URL = "@Url.Content("~/Server/TransferSelectedServers")";
$.ajax({
type: "POST",
url: URL,
data: { ids: boxData.join(","), rackTo: $("#rackIDTo").val()}
,
success: function (data) {
addserver(data); })});
and inside the action method i have the following code:-
public ActionResult TransferSelectedServers(string ids, int? rackTo)
{
if (ModelState.IsValid)
{
try
{
var serverIDs = ids.Split(',');
int i = 0;
foreach (var serverinfo in serverIDs)
{
var split = serverinfo.Split('~');
var name = split[0];
//System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
byte[] bytearray = Encoding.Default.GetBytes(split[1]);
i++;
var server = repository.FindServer_JTechnology(Int32.Parse(name));
if (server == null)
return Json(new { IsSuccess = false, reload = true, description = " Some servers might have been deleted; the transfer process has been cancelled.", rackid = rackFrom }, JsonRequestBehavior.AllowGet);
server.RackID = rackTo;
server.timestamp = bytearray;
string ADusername = User.Identity.Name.Substring(User.Identity.Name.IndexOf("\\") + 1);
repository.InsertOrUpdateServer(server, ADusername, server.Technology.IT360ID.Value, server.IT360SiteID, new bool(), server.Technology);
}
repository.Save();
return Json(new { IsSuccess = true, description = i + " Server/s Transferred Successfully To Rack " + rackTo }, JsonRequestBehavior.AllowGet);
}
catch (DbUpdateConcurrencyException e)
{
return Json(new { IsSuccess = false, reload = true, description = "records have been modified by another user" }, JsonRequestBehavior.AllowGet);
}
catch (Exception e)
{
return Json(new { IsSuccess = false, reload = true, description = " Server/s Can not Be Transferred to the Selected Rack " }, JsonRequestBehavior.AllowGet);
}
}
return RedirectToAction("Details", new { id = rackTo });
}
and the repository method looks as follow:-
public void InsertOrUpdateServer(TMSServer server, string username, long assetid, long? siteid = 0, bool isTDMHW = false, Technology t = null)
{
server.IT360SiteID = siteid.Value;
tms.Entry(server).State = EntityState.Modified;
var technology = tms.Technologies.Single(a => a.TechnologyID == server.TMSServerID);
technology.IsManaged = t.IsManaged;
tms.Entry(technology).State = EntityState.Modified;
InsertOrUpdateTechnologyAudit(auditinfo);
}
But currently, if two users select the same servers and assign them to two different racks, no concurrency exception is raised.
Can anyone advise? Bearing in mind that if two users edit a single object, one of them will get a concurrency exception message, so my timestamp column is defined correctly.
Thanks

DbExtensions - How to create WHERE clause with OR conditions?

I'm trying to create a WHERE clause with OR conditions using DbExtensions.
I'm trying to generate a SQL statement which looks like:
SELECT ID, NAME
FROM EMPLOYEE
WHERE ID = 100 OR NAME = 'TEST'
My C# code is
var sql = SQL.SELECT("ID, FIRSTNAME")
.FROM("EMPLOYEE")
.WHERE("ID = {0}", 10)
.WHERE("NAME = {0}", "TEST");
How do I get the OR separator using the above-mentioned DbExtensions library?
I have found a definition for the logical OR operator here:
public SqlBuilder _OR<T>(IEnumerable<T> items, string itemFormat, Func<T, object[]> parametersFactory) {
return _ForEach(items, "({0})", itemFormat, " OR ", parametersFactory);
}
And some code examples here
public SqlBuilder Or() {
int[][] parameters = { new[] { 1, 2 }, new[] { 3, 4} };
return SQL
.SELECT("p.ProductID, p.ProductName")
.FROM("Products p")
.WHERE()
._OR(parameters, "(p.CategoryID = {0} AND p.SupplierID = {1})", p => new object[] { p[0], p[1] })
.ORDER_BY("p.ProductName, p.ProductID DESC");
}
I think (by analogy with the example) that in your case the code should be something like this (but I can't test it, so I'm not sure):
var names = new string[] { "TEST" };
var sql = SQL.SELECT("ID, FIRSTNAME")
.FROM("EMPLOYEE")
.WHERE("ID = {0}", 10)
._OR(names, "NAME = {0}", p => new object[] { p });
Hope this helps :)
By the way... have you tried this way?
var sql = SQL.SELECT("ID, FIRSTNAME")
.FROM("EMPLOYEE")
.WHERE(string.Format("ID = {0} OR NAME = '{1}'", 10, "TEST"));
It's simpler than you think:
var sql = SQL
.SELECT("ID, FIRSTNAME")
.FROM("EMPLOYEE")
.WHERE("(ID = {0} OR NAME = {1})", 10, "TEST");
One can use:
.AppendClause("OR", ",", "NAME = {0}",new object[]{"TEST"});
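For illustration, here is what that {0}/{1} placeholder clause expands to, modeled with plain Java's MessageFormat (DbExtensions itself keeps 10 and "TEST" as bound query parameters rather than splicing them into the SQL text; this only visualizes the shape of the clause):

```java
import java.text.MessageFormat;

public class OrClauseModel {
    // Visualizes the clause from the answer above; real code should keep the
    // values as bound parameters, never inline them like this.
    static String render(Object id, Object name) {
        return MessageFormat.format("(ID = {0} OR NAME = {1})", id, name);
    }

    public static void main(String[] args) {
        System.out.println(render(10, "TEST")); // prints (ID = 10 OR NAME = TEST)
    }
}
```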