Delete from Cassandra table in Spark - Scala

I'm using Spark with Cassandra, and I'm reading some rows from my table in order to delete them using their primary key. This is my code:
val lines = sc.cassandraTable[(String, String, String, String)](CASSANDRA_SCHEMA, table).
  select("a", "b", "c", "d").
  where("d=?", d).cache()

lines.foreach(r => {
  val session: Session = connector.openSession
  val delete = "DELETE FROM " + CASSANDRA_SCHEMA + "." + table + " WHERE channel='" + r._1 + "' AND ctid='" + r._2 + "' AND cvid='" + r._3 + "';"
  session.execute(delete)
  session.close()
})
But this method creates a session for each row and it takes a lot of time. So is it possible to delete my rows using sc.cassandraTable, or is there a better solution than mine?
Thank you

I don't think there's support for delete in the Cassandra connector at the moment. To amortize the cost of connection setup, the recommended approach is to apply the operation to each partition.
So your code will look like this:
lines.foreachPartition(partition => {
  val session: Session = connector.openSession // once per partition
  partition.foreach { elem =>
    val delete = "DELETE FROM " + CASSANDRA_SCHEMA + "." + table + " WHERE channel='" + elem._1 + "' AND ctid='" + elem._2 + "' AND cvid='" + elem._3 + "';"
    session.execute(delete)
  }
  session.close()
})
You could also look into using DELETE FROM ... WHERE pk IN (list) and a similar approach to build up the list for each partition. This will be even more performant, but might break with very large partitions, as the list will become correspondingly long. Repartitioning your target RDD before applying this function will help.
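For illustration, here is a rough sketch of that IN-list variant. It assumes (unlike the composite key above) a table whose primary key is a single column, which I'll call pk here, and it chunks the list so a single statement never grows too long:
lines.foreachPartition { partition =>
  val session: Session = connector.openSession
  // Chunk the partition so no single IN list grows unbounded (500 is an arbitrary cap).
  partition.grouped(500).foreach { chunk =>
    val inList = chunk.map(r => "'" + r._1 + "'").mkString(", ")
    session.execute("DELETE FROM " + CASSANDRA_SCHEMA + "." + table + " WHERE pk IN (" + inList + ");")
  }
  session.close()
}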

You asked the question a long time ago, so you probably found the answer already. :P Just to share, here is what I did in Java. This code works beautifully against my local Cassandra instance, but it does not work against our BETA or PRODUCTION environments. I suspect that is because there are multiple instances of the Cassandra database there, so the delete only worked against one instance and the data got replicated right back. :(
Please do share if you were able to get it to work against your Cassandra production environment, with multiple instances of it running!
public static void deleteFromCassandraTable(Dataset myData, SparkConf sparkConf){
    CassandraConnector connector = CassandraConnector.apply(sparkConf);
    myData.foreachPartition(partition -> {
        Session session = connector.openSession();
        while (partition.hasNext()) {
            Row row = (Row) partition.next();
            boolean isTested = (boolean) row.get(0);
            String product = (String) row.get(1);
            long reportDateInMillSeconds = ((Timestamp) row.get(2)).getTime();
            String id = (String) row.get(3);
            String deleteMyData = "DELETE FROM test.my_table"
                    + " WHERE is_tested=" + isTested
                    + " AND product='" + product + "'"
                    + " AND report_date=" + reportDateInMillSeconds
                    + " AND id=" + id + ";";
            System.out.println("%%% " + deleteMyData);
            ResultSet deleteResult = session.execute(deleteMyData);
            boolean result = deleteResult.wasApplied();
            System.out.println("%%% deleteResult =" + result);
        }
        session.close();
    });
}

Related

Titan index issues with Cassandra storage backend

I am populating a single Titan 1.0.0 instance with a moderately sized graph in order to test its query performance. I am using Cassandra 2.0.17 as the storage backend.
The thing is, I am not able to create node indexes and hence query optimally. I have read the docs and I am trying to follow them carefully, without much success. I am using the following Groovy script for the schema definition, data population, and index creation:
import com.thinkaurelius.titan.core.*;
import com.thinkaurelius.titan.core.schema.*;
import com.thinkaurelius.titan.graphdb.database.management.ManagementSystem;
import java.time.temporal.ChronoUnit;
graph = TitanFactory.open('conf/my-titan.properties');
mgmt = graph.openManagement();
// Build graph schema
// Node properties
idProp = mgmt.containsPropertyKey('userId') ?
mgmt.getPropertyKey('userId') : mgmt.makePropertyKey('id').dataType(String.class).cardinality(Cardinality.SINGLE);
isPublicProp = mgmt.containsPropertyKey('isPublic') ?
mgmt.getPropertyKey('isPublic') : mgmt.makePropertyKey('isPublic').dataType(Boolean.class).cardinality(Cardinality.SINGLE);
completionPercentageProp = mgmt.containsPropertyKey('completionPercentage') ?
mgmt.getPropertyKey('completionPercentage') : mgmt.makePropertyKey('completionPercentage').dataType(Integer.class).cardinality(Cardinality.SINGLE);
genderProp = mgmt.containsPropertyKey('gender') ?
mgmt.getPropertyKey('gender') : mgmt.makePropertyKey('gender').dataType(String.class).cardinality(Cardinality.SINGLE);
regionProp = mgmt.containsPropertyKey('region') ?
mgmt.getPropertyKey('region') : mgmt.makePropertyKey('region').dataType(String.class).cardinality(Cardinality.SINGLE);
lastLoginProp = mgmt.containsPropertyKey('lastLogin') ?
mgmt.getPropertyKey('lastLogin') : mgmt.makePropertyKey('lastLogin').dataType(String.class).cardinality(Cardinality.SINGLE);
registrationProp = mgmt.containsPropertyKey('registration') ?
mgmt.getPropertyKey('registration') : mgmt.makePropertyKey('registration').dataType(String.class).cardinality(Cardinality.SINGLE);
ageProp = mgmt.containsPropertyKey('age') ? mgmt.getPropertyKey('age') : mgmt.makePropertyKey('age').dataType(Integer.class).cardinality(Cardinality.SINGLE);
mgmt.commit();
nUsers = 0
println 'Starting nodes population...';
// Load users
new File('/home/jarandaf/soc-pokec-profiles.txt').eachLine {
try {
fields = it.split('\t').take(8);
userId = fields[0];
isPublic = fields[1] == '1' ? true : false;
completionPercentage = fields[2]
gender = fields[3] == '1' ? 'male' : 'female';
region = fields[4];
lastLogin = fields[5];
registration = fields[6];
age = fields[7] as int;
graph.addVertex('userId', userId, 'isPublic', isPublic, 'completionPercentage', completionPercentage, 'gender', gender, 'region', region, 'lastLogin', lastLogin, 'registration', registration, 'age', age);
} catch (Exception e) {
// Silently skip...
}
nUsers += 1
if (nUsers % 100000 == 0) println String.valueOf(nUsers) + ' loaded...';
};
graph.tx().commit();
println 'Nodes population finished';
// Index users by userId, gender and age
println 'Getting node properties...';
mgmt = graph.openManagement();
userId = mgmt.getPropertyKey('userId');
gender = mgmt.getPropertyKey('gender');
age = mgmt.getPropertyKey('age');
println 'Building byUserId index...';
if (mgmt.getGraphIndex('byUserId') == null) mgmt.buildIndex('byUserId', Vertex.class).addKey(userId).buildCompositeIndex();
println 'Building byGender index...';
if (mgmt.getGraphIndex('byGender') == null) mgmt.buildIndex('byGender', Vertex.class).addKey(gender).buildCompositeIndex();
println 'Building byAge index...';
if (mgmt.getGraphIndex('byAge') == null) mgmt.buildIndex('byAge', Vertex.class).addKey(age).buildCompositeIndex();
mgmt.commit();
// Wait for the indexes to become available
println 'Awaiting byUserId graph index status...';
ManagementSystem.awaitGraphIndexStatus(graph, 'byUserId')
.status(SchemaStatus.REGISTERED)
.timeout(10, ChronoUnit.MINUTES)
.call();
println 'Awaiting byGender graph index status...';
ManagementSystem.awaitGraphIndexStatus(graph, 'byGender')
.status(SchemaStatus.REGISTERED)
.timeout(10, ChronoUnit.MINUTES)
.call();
println 'Awaiting byAge graph index status...';
ManagementSystem.awaitGraphIndexStatus(graph, 'byAge')
.status(SchemaStatus.REGISTERED)
.timeout(10, ChronoUnit.MINUTES)
.call();
// Reindex the existing data
mgmt = graph.openManagement();
println 'Reindexing data by byUserId index...';
mgmt.updateIndex(mgmt.getGraphIndex('byUserId'), SchemaAction.REINDEX).get();
println 'Reindexing data by byGender index...';
mgmt.updateIndex(mgmt.getGraphIndex('byGender'), SchemaAction.REINDEX).get();
println 'Reindexing data by byAge index...';
mgmt.updateIndex(mgmt.getGraphIndex('byAge'), SchemaAction.REINDEX).get();
mgmt.commit();
// Enable indexes
println 'Enabling byUserId index...'
mgmt.awaitGraphIndexStatus(graph, 'byUserId').status(SchemaStatus.ENABLED).call();
println 'Enabling byGender index...'
mgmt.awaitGraphIndexStatus(graph, 'byGender').status(SchemaStatus.ENABLED).call();
println 'Enabling byAge index...'
mgmt.awaitGraphIndexStatus(graph, 'byAge').status(SchemaStatus.ENABLED).call();
graph.close();
The error I am getting is the following, and it is related to the reindex phase:
08:24:26 ERROR com.thinkaurelius.titan.graphdb.database.management.ManagementLogger - Evicted [2#0ac717511509-mybox] from cache but waiting too long for transactions to close. Stale transaction alert on: [standardtitantx[0x4b8696a4], standardtitantx[0x2d39f30a], standardtitantx[0x0da9172d], standardtitantx[0x7c6c7909], standardtitantx[0x79dd0a38], standardtitantx[0x5999c49e], standardtitantx[0x5aaba4a7]]
08:24:26 ERROR com.thinkaurelius.titan.graphdb.database.management.ManagementLogger - Evicted [3#0ac717511509-mybox] from cache but waiting too long for transactions to close. Stale transaction alert on: [standardtitantx[0x4b8696a4], standardtitantx[0x2d39f30a], standardtitantx[0x0da9172d], standardtitantx[0x7c6c7909], standardtitantx[0x79dd0a38], standardtitantx[0x5999c49e], standardtitantx[0x5aaba4a7]]
08:24:26 ERROR com.thinkaurelius.titan.graphdb.database.management.ManagementLogger - Evicted [4#0ac717511509-mybox] from cache but waiting too long for transactions to close. Stale transaction alert on: [standardtitantx[0x4b8696a4], standardtitantx[0x2d39f30a], standardtitantx[0x0da9172d], standardtitantx[0x7c6c7909], standardtitantx[0x79dd0a38], standardtitantx[0x5999c49e], standardtitantx[0x5aaba4a7]]
Any hints on this would be much appreciated.
The errors you get indicate that you have open transactions when you try to modify the schema. Titan needs to wait for all transactions to complete before it can modify the schema. See the answer from Matthias Broecheler on the mailing list for more information.
In general, you should avoid reindexing if possible as it requires Titan to walk over all vertices to see whether they need to be added to the index that should be updated. The documentation contains more information about this process.
For your use case, you can simply create all indexes before you load any data. When you then add the data after all indexes are ready, it will simply be added to the indexes. That way, you should be able to use the indexes immediately.
A minimal example for the schema creation in Groovy (but it should be basically the same in Java):
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.Multiplicity;
import com.thinkaurelius.titan.core.Cardinality;
graph = TitanFactory.open('conf/my-titan.properties')
mgmt = graph.openManagement()
id = mgmt.makePropertyKey('id').dataType(String.class).cardinality(Cardinality.SINGLE).make()
// some other properties that will not be indexed
mgmt.makePropertyKey('isPublic').dataType(Boolean.class).cardinality(Cardinality.SINGLE).make()
mgmt.makePropertyKey('completionPercentage').dataType(Integer.class).cardinality(Cardinality.SINGLE).make()
// I prefer to use vertex labels to differentiate between different 'types' of vertices but this isn't necessary
user = mgmt.makeVertexLabel('User').make()
mgmt.buildIndex('UserById', Vertex.class).addKey(id).indexOnly(user).buildCompositeIndex()
mgmt.commit()
I removed all the checks for already existing schema elements for simplicity, but you can of course add them again.
After the schema creation, you can add your data just like before.
A final note about index management: try to always define the property keys that you want to index in the same transaction in which you create the index. Otherwise, Titan cannot know whether there is already data that needs to be added to the new index, which again requires a complete scan of all data. This might require choosing a different name for a property. When you add, for example, a new vertex label post, you might want to use a new name like postId instead of reusing the property id, to avoid the scan of all existing data.
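A minimal sketch of that pattern, written in Scala against Titan's Java API (the postId key and PostById index names are just illustrative):
import com.thinkaurelius.titan.core.{Cardinality, TitanFactory}
import org.apache.tinkerpop.gremlin.structure.Vertex

val graph = TitanFactory.open("conf/my-titan.properties")
val mgmt = graph.openManagement()
// Key and index are created in the same management transaction,
// so no existing data has to be scanned when the index is enabled.
val postId = mgmt.makePropertyKey("postId").dataType(classOf[String]).cardinality(Cardinality.SINGLE).make()
mgmt.buildIndex("PostById", classOf[Vertex]).addKey(postId).buildCompositeIndex()
mgmt.commit()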

How to count new element from stream by using spark-streaming

I have implemented a daily computation. Here is some pseudo-code.
"newUser" might also be called a "first-activated user".
// Get today log from hbase or somewhere else
val log = getRddFromHbase(todayDate)
// Compute active user
val activeUser = log.map(line => ((line.uid, line.appId), line)).reduceByKey(distinctStrategyMethod)
// Get history user from hdfs
val historyUser = loadFromHdfs(path + yesterdayDate)
// Compute new user from active user and historyUser
val newUser = activeUser.subtractByKey(historyUser)
// Get new history user
val newHistoryUser = historyUser.union(newUser)
// Save today history user
saveToHdfs(path + todayDate)
Computation of "activeUser" can be converted to spark-streaming easily. Here is some code:
val transformedLog = sdkLogDs.map(sdkLog => {
val time = System.currentTimeMillis()
val timeToday = ((time - (time + 3600000 * 8) % 86400000) / 1000).toInt
((sdkLog.appid, sdkLog.bcode, sdkLog.uid), (sdkLog.channel_no, sdkLog.ctime.toInt, timeToday))
})
val activeUser = transformedLog.groupByKeyAndWindow(Seconds(86400), Seconds(60)).mapValues(x => {
var firstLine = x.head
x.foreach(line => {
if (line._2 < firstLine._2) firstLine = line
})
firstLine
})
But the approach for "newUser" and "historyUser" is confusing me.
I think my question can be summarized as "how to count new elements from a stream". As in my pseudo-code above, "newUser" is part of "activeUser", and I must maintain a set of "historyUser" to know which part is "newUser".
I considered an approach, but I think it may not work the right way:
Load the history users as an RDD. For each DStream batch of "activeUser", find the elements that don't exist in "historyUser". A problem here is when I should update this "historyUser" RDD to make sure I can get the right "newUser" for a window.
Updating the "historyUser" RDD means adding "newUser" to it, just like I did in the pseudo-code above, where "historyUser" is updated once a day. Another problem is how to do this RDD update operation from a DStream. I think updating "historyUser" when the window slides is proper, but I haven't found a suitable API to do this.
So what is the best practice to solve this problem?
updateStateByKey would help here, as it allows you to set an initial state (your historical users) and then update it on each interval of your main stream. I put some code together to explain the concept:
val historyUsers = loadFromHdfs(path + yesterdayDate).map(UserData(...))
case class UserStatusState(isNew: Boolean, values: UserData)
// this will prepare the RDD of already known historical users
// to pass into updateStateByKey as initial state
val initialStateRDD = historyUsers.map(user => UserStatusState(false, user))
// stateful stream
val trackUsers = sdkLogDs.updateStateByKey(updateState, new HashPartitioner(sdkLogDs.ssc.sparkContext.defaultParallelism), true, initialStateRDD)
// only new users
val newUsersStream = trackUsers.filter(_._2.isNew)
def updateState(newValues: Seq[UserData], prevState: Option[UserStatusState]): Option[UserStatusState] = {
  // Group all values for specific user as needed
  val groupedUserData: UserData = newValues.reduce(...)
  // prevState is defined only for users previously seen in the stream
  // or loaded as initial state from historyUsers RDD
  // For new users it is None
  val isNewUser = !prevState.isDefined
  // as you return state here for the user - prevState won't be None on next iterations
  Some(UserStatusState(isNewUser, groupedUserData))
}

Play Scala Anorm dynamic SQL for UPDATE query

My Google-fu is letting me down, so I'm hoping you can help.
I'm building some web services in the Play Framework using Scala and Anorm for database access.
One of my calls is to update an existing row in a database, i.e. run a query like:
UPDATE [Clerks]
SET [firstName] = {firstName}
,[lastName] = {lastName}
,[login] = {login}
,[password] = {password}
WHERE [id] = {id}
My method receives a clerk object, BUT all the parameters are optional (except the id, of course) as the caller may only wish to update a single column of the row, like so:
UPDATE [Clerks]
SET [firstName] = {firstName}
WHERE [id] = {id}
So I want the method to check which clerk params are defined and build the SET part of the update statement accordingly.
It seems like there should be a better way than going through each param of the clerk object, checking whether it is defined, and building the query string - but I've been unable to find anything on the topic so far.
Does anyone have any suggestions on how this is best handled?
As the commenters mentioned, it appears not to be possible - you must build the query string yourself.
I found the examples around this lacking, and it took more time to resolve than it should have (I'm new to Scala and the Play Framework, so this has been a common theme).
In the end, this is what I implemented:
override def updateClerk(clerk: Clerk) = {
var setString: String = "[modified] = {modified}"
var params: collection.mutable.Seq[NamedParameter] =
collection.mutable.Seq(
NamedParameter("modified", toParameterValue(System.currentTimeMillis / 1000)),
NamedParameter("id", toParameterValue(clerk.id.get)))
if (clerk.firstName.isDefined) {
setString += ", [firstName] = {firstName}"
params = params :+ NamedParameter("firstName", toParameterValue(clerk.firstName.getOrElse("")))
}
if (clerk.lastName.isDefined) {
setString += ", [lastName] = {lastName}"
params = params :+ NamedParameter("lastName", toParameterValue(clerk.lastName.getOrElse("")))
}
if (clerk.login.isDefined) {
setString += ", [login] = {login}"
params = params :+ NamedParameter("login", toParameterValue(clerk.login.getOrElse("")))
}
if (clerk.password.isDefined) {
setString += ", [password] = {password}"
params = params :+ NamedParameter("password", toParameterValue(clerk.password.getOrElse("")))
}
if (clerk.supervisor.isDefined) {
setString += ", [isSupervisor] = {supervisor}"
params = params :+ NamedParameter("supervisor", toParameterValue(clerk.supervisor.getOrElse(false)))
}
val result = DB.withConnection { implicit c =>
SQL("UPDATE [Clerks] SET " + setString + " WHERE [id] = {id}").on(params:_*).executeUpdate()
}
}
It likely isn't the best way to do this; however, I found it quite readable, and the parameters are properly handled in the prepared statement.
Hopefully this can benefit someone running into a similar issue.
If anyone wants to offer up improvements, they'd be gratefully received.
Since roughly version 2.6.0 this is possible directly with Anorm using their macros: http://playframework.github.io/anorm/#generated-parameter-conversions
Here is my example:
case class UpdateLeagueFormInput(transferLimit: Option[Int], transferWildcard: Option[Boolean], transferOpen: Option[Boolean])
val input = UpdateLeagueFormInput(None, None, Some(true))
val toParams: ToParameterList[UpdateLeagueFormInput] = Macro.toParameters[UpdateLeagueFormInput]
val params = toParams(input)
val dynamicUpdates = params.map(p => {
val snakeName = camelToSnake(p.name)
s"$snakeName = CASE WHEN {${p.name}} IS NULL THEN l.$snakeName ELSE {${p.name}} END"
})
val generatedStmt = s"""UPDATE league l set ${dynamicUpdates.mkString(", ")} where league_id = ${league.leagueId}"""
SQL(generatedStmt).on(params: _*).executeUpdate()
producing:
UPDATE league l set transfer_limit = CASE WHEN null IS NULL THEN l.transfer_limit ELSE null END, transfer_wildcard = CASE WHEN null IS NULL THEN l.transfer_wildcard ELSE null END, transfer_open = CASE WHEN true IS NULL THEN l.transfer_open ELSE true END where league_id = 26;
Notes:
The camelToSnake function is just my own (there is an obvious ColumnNaming.SnakeCase available for parser rows, but I couldn't find something similar for parameter parsing); a rough sketch of it is shown after these notes.
My example string interpolates {league.leagueId}, when it could treat this as a parameter as well.
It would be nice to avoid the redundant SETs for null fields; however, I don't think it's possible, and in my opinion clean code/messy auto-generated SQL > messy code/clean auto-generated SQL.
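For reference, a minimal sketch of the kind of camelToSnake helper assumed above (my own naming, not part of Anorm):
// Convert e.g. "transferLimit" to "transfer_limit"
def camelToSnake(name: String): String =
  name.replaceAll("([A-Z])", "_$1").toLowerCase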

Apache-Spark: method in foreach doesn't work

I read a file from HDFS which contains x1,x2,y1,y2 representing an envelope in JTS.
I would like to use that data to build an STRtree in foreach.
val inputData = sc.textFile(inputDataPath).cache()
val strtree = new STRtree
inputData.foreach(line => {
  val array = line.split(",").map(_.toDouble)
  val e = new Envelope(array(0), array(1), array(2), array(3))
  println("envelope is " + e)
  strtree.insert(e, new Rectangle(array(0), array(1), array(2), array(3)))
})
As you can see, I also print the e object.
To my surprise, when I log the size of strtree, it is zero! It seems that the insert method has no effect here.
By the way, if I hard-code some test data line by line, the strtree is built just fine.
One more thing: the project is packed into a jar and submitted in the spark-shell.
So, why does the method in foreach not work?
You will have to collect() to do this. foreach runs in tasks on the executors, so each task inserts into its own deserialized copy of strtree and the tree on the driver never changes:
inputData.collect().foreach(line => {
... // your code
})
You can do this (to avoid collecting all of the raw data):
val pairs = inputData.map(line => {
  val array = line.split(",").map(_.toDouble)
  val e = new Envelope(array(0), array(1), array(2), array(3))
  println("envelope is " + e)
  (e, new Rectangle(array(0), array(1), array(2), array(3)))
})
pairs.collect().foreach(pair => {
  strtree.insert(pair._1, pair._2)
})
Use .map() instead of .foreach() and reassign the outcome.
foreach does not return the outcome of the applied function; it can be used for sending data somewhere, storing to a database, printing, and so on.
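A rough sketch of that map-and-reassign idea, reusing the Envelope/Rectangle classes from the question (the tree is then built on the driver from the collected pairs):
// Transform on the executors, collect the results, and build the tree on the driver.
val pairs = inputData.map { line =>
  val a = line.split(",").map(_.toDouble)
  (new Envelope(a(0), a(1), a(2), a(3)), new Rectangle(a(0), a(1), a(2), a(3)))
}
val strtree = new STRtree
pairs.collect().foreach { case (env, rect) => strtree.insert(env, rect) }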

When should I call SaveChanges() when creating 1000's of Entity Framework objects? (like during an import)

I am running an import that will have 1000's of records on each run. Just looking for some confirmation on my assumptions:
Which of these makes the most sense:
Run SaveChanges() every AddToClassName() call.
Run SaveChanges() every n number of AddToClassName() calls.
Run SaveChanges() after all of the AddToClassName() calls.
The first option is probably slow right? Since it will need to analyze the EF objects in memory, generate SQL, etc.
I assume that the second option is the best of both worlds, since we can wrap a try catch around that SaveChanges() call, and only lose n number of records at a time, if one of them fails. Maybe store each batch in an List<>. If the SaveChanges() call succeeds, get rid of the list. If it fails, log the items.
The last option would probably end up being very slow as well, since every single EF object would have to be in memory until SaveChanges() is called. And if the save failed nothing would be committed, right?
I would test it first to be sure. Performance doesn't have to be that bad.
If you need to insert all rows in one transaction, call SaveChanges() after all of the AddToClassName() calls. If rows can be inserted independently, save changes after every row. Database consistency is important.
I don't like the second option. It would be confusing for me (from the end user's perspective) if I made an import into the system and it declined 10 rows out of 1000 just because one is bad. You can try to import 10, and if that fails, try them one by one and then log.
Test whether it takes a long time. Don't write 'probably' - you don't know yet. Only when it is actually a problem should you think about another solution (marc_s).
EDIT
I've done some tests (times in milliseconds):
10000 rows:
SaveChanges() after 1 row:18510,534
SaveChanges() after 100 rows:4350,3075
SaveChanges() after 10000 rows:5233,0635
50000 rows:
SaveChanges() after 1 row:78496,929
SaveChanges() after 500 rows:22302,2835
SaveChanges() after 50000 rows:24022,8765
So it is actually faster to commit after n rows than after all.
My recommendation is to:
Call SaveChanges() after n rows.
If one commit fails, try them one by one to find the faulty row.
Test classes:
TABLE:
CREATE TABLE [dbo].[TestTable](
[ID] [int] IDENTITY(1,1) NOT NULL,
[SomeInt] [int] NOT NULL,
[SomeVarchar] [varchar](100) NOT NULL,
[SomeOtherVarchar] [varchar](50) NOT NULL,
[SomeOtherInt] [int] NULL,
CONSTRAINT [PkTestTable] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Class:
public class TestController : Controller
{
//
// GET: /Test/
private readonly Random _rng = new Random();
private const string _chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
private string RandomString(int size)
{
var randomSize = _rng.Next(size);
char[] buffer = new char[randomSize];
for (int i = 0; i < randomSize; i++)
{
buffer[i] = _chars[_rng.Next(_chars.Length)];
}
return new string(buffer);
}
public ActionResult EFPerformance()
{
string result = "";
TruncateTable();
result = result + "SaveChanges() after 1 row:" + EFPerformanceTest(10000, 1).TotalMilliseconds + "<br/>";
TruncateTable();
result = result + "SaveChanges() after 100 rows:" + EFPerformanceTest(10000, 100).TotalMilliseconds + "<br/>";
TruncateTable();
result = result + "SaveChanges() after 10000 rows:" + EFPerformanceTest(10000, 10000).TotalMilliseconds + "<br/>";
TruncateTable();
result = result + "SaveChanges() after 1 row:" + EFPerformanceTest(50000, 1).TotalMilliseconds + "<br/>";
TruncateTable();
result = result + "SaveChanges() after 500 rows:" + EFPerformanceTest(50000, 500).TotalMilliseconds + "<br/>";
TruncateTable();
result = result + "SaveChanges() after 50000 rows:" + EFPerformanceTest(50000, 50000).TotalMilliseconds + "<br/>";
TruncateTable();
return Content(result);
}
private void TruncateTable()
{
using (var context = new CamelTrapEntities())
{
var connection = ((EntityConnection)context.Connection).StoreConnection;
connection.Open();
var command = connection.CreateCommand();
command.CommandText = @"TRUNCATE TABLE TestTable";
command.ExecuteNonQuery();
}
}
private TimeSpan EFPerformanceTest(int noOfRows, int commitAfterRows)
{
var startDate = DateTime.Now;
using (var context = new CamelTrapEntities())
{
for (int i = 1; i <= noOfRows; ++i)
{
var testItem = new TestTable();
testItem.SomeVarchar = RandomString(100);
testItem.SomeOtherVarchar = RandomString(50);
testItem.SomeInt = _rng.Next(10000);
testItem.SomeOtherInt = _rng.Next(200000);
context.AddToTestTable(testItem);
if (i % commitAfterRows == 0) context.SaveChanges();
}
}
var endDate = DateTime.Now;
return endDate.Subtract(startDate);
}
}
I just optimized a very similar problem in my own code and would like to point out an optimization that worked for me.
I found that much of the time in processing SaveChanges, whether processing 100 or 1000 records at once, is CPU bound. So, by processing the contexts with a producer/consumer pattern (implemented with BlockingCollection), I was able to make much better use of CPU cores and got from a total of 4000 changes/second (as reported by the return value of SaveChanges) to over 14,000 changes/second. CPU utilization moved from about 13 % (I have 8 cores) to about 60%. Even using multiple consumer threads, I barely taxed the (very fast) disk IO system and CPU utilization of SQL Server was no higher than 15%.
By offloading the saving to multiple threads, you have the ability to tune both the number of records prior to commit and the number of threads performing the commit operations.
I found that creating 1 producer thread and (# of CPU Cores)-1 consumer threads allowed me to tune the number of records committed per batch such that the count of items in the BlockingCollection fluctuated between 0 and 1 (after a consumer thread took one item). That way, there was just enough work for the consuming threads to work optimally.
This scenario of course requires creating a new context for every batch, which I find to be faster even in a single-threaded scenario for my use case.
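The EF-specific parts are C#, but the batching shape itself is language-agnostic. Here is a rough sketch of the producer/consumer idea in Scala using java.util.concurrent, where Record and saveBatch are hypothetical stand-ins for the entity type and the per-batch "new context + SaveChanges()" work:
import java.util.concurrent.LinkedBlockingQueue

// Hypothetical stand-ins: saveBatch would open a fresh context,
// add the batch, and commit it (the SaveChanges() call).
case class Record(id: Int)
def saveBatch(batch: Seq[Record]): Unit =
  println(s"committed ${batch.size} records on ${Thread.currentThread.getName}")

val batchSize     = 500
val consumerCount = math.max(1, Runtime.getRuntime.availableProcessors - 1)
val queue         = new LinkedBlockingQueue[Seq[Record]](4) // bounded, so the producer can't run far ahead

val consumers = (1 to consumerCount).map { _ =>
  new Thread(() => {
    var batch = queue.take()
    while (batch.nonEmpty) { // an empty batch is the shutdown signal
      saveBatch(batch)       // each consumer commits its own batches
      batch = queue.take()
    }
  })
}
consumers.foreach(_.start())

// Producer: slice the incoming records into batches and hand them to the consumers.
(1 to 100000).map(Record.apply).grouped(batchSize).foreach(queue.put)
consumers.foreach(_ => queue.put(Seq.empty)) // one poison pill per consumer
consumers.foreach(_.join())
The numbers that matter are batchSize and the consumer count, which is exactly what the answer above tunes against the BlockingCollection fill level.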
If you need to import thousands of records, I'd use something like SqlBulkCopy, and not the Entity Framework for that.
MSDN docs on SqlBulkCopy
Use SqlBulkCopy to Quickly Load Data from your Client to SQL Server
Transferring Data Using SqlBulkCopy
Use a stored procedure.
Create a User-Defined Data Type in Sql Server.
Create and populate an array of this type in your code (very fast).
Pass the array to your stored procedure with one call (very fast).
I believe this would be the easiest and fastest way to do this.
Sorry, I know this thread is old, but I think this could help other people with this problem.
I had the same problem, but there is a possibility to validate the changes before you commit them. My code looks like this, and it is working fine. With chUser.LastUpdated I check whether it is a new entry or only a change, because it is not possible to reload an entry that is not in the database yet.
// Validate Changes
var invalidChanges = _userDatabase.GetValidationErrors();
foreach (var ch in invalidChanges)
{
// Delete invalid User or Change
var chUser = (db_User) ch.Entry.Entity;
if (chUser.LastUpdated == null)
{
// Invalid, new User
_userDatabase.db_User.Remove(chUser);
Console.WriteLine("!Failed to create User: " + chUser.ContactUniqKey);
}
else
{
// Invalid Change of an Entry
_userDatabase.Entry(chUser).Reload();
Console.WriteLine("!Failed to update User: " + chUser.ContactUniqKey);
}
}
_userDatabase.SaveChanges();