How to get a DataAdapter for or from an existing modified strongly-typed DataSet? - ado.net

I have about 10 tables that I load into a DataSet in sequence using a single DataAdapter. During the load I reuse the one DataAdapter, swapping in the table name and SELECT statement for each table and filling the corresponding DataTable in the DataSet. Everything is done inside two nested "using" statements so the connection and DataAdapter objects are disposed, as shown below.
using (OleDbConnection conn = new OleDbConnection(Db.DbConnGet())) {
    using (var da = new OleDbDataAdapter(sql, conn)) {
        tablename = "Table1";
        da.SelectCommand.CommandText = $"Select * from {tablename}";
        try {
            da.Fill(hsdset, tablename);
        } catch (Exception ex) {
            ...
        }
        tablename = "Table2";
        da.SelectCommand.CommandText = $"Select * from {tablename}";
        try {
            da.Fill(hsdset, tablename);
        } catch (Exception ex) {
            ...
        }
    }
}
As you can see, the DataAdapter is disposed of once the loading is done, and I pass the DataSet around my application as necessary for reading data.
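Incidentally, since the per-table blocks above differ only in the table name, the same load could be written as a loop; a minimal sketch under the same assumptions as above (hsdset, Db.DbConnGet(), and the table names are from my code):

using (OleDbConnection conn = new OleDbConnection(Db.DbConnGet())) {
    using (var da = new OleDbDataAdapter("", conn)) {
        foreach (string tablename in new[] { "Table1", "Table2" }) {
            // reuse the one SelectCommand, swapping in each table name
            da.SelectCommand.CommandText = $"Select * from {tablename}";
            try {
                // fill the DataTable named after the source table
                da.Fill(hsdset, tablename);
            } catch (Exception ex) {
                // handle/log the per-table failure here
            }
        }
    }
}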
But now I need to update or extend the data in the DataSet and get it back into the database. Updating the DataTables inside the DataSet is not a problem - there are many examples on the net. I created a new Connection and DataAdapter to run the update against a table in the existing, modified, strongly-typed DataSet, as follows.
using (OleDbConnection conn = new OleDbConnection(Db.DbConnGet())) {
    using (var da = new OleDbDataAdapter("", conn)) {
        // this is required; I don't know if it is used by Update
        da.SelectCommand.CommandText = $"Select * from {tablename}";
        try {
            // build special update commands from the table->db differences
            var cbuilder = new OleDbCommandBuilder(da);
            da.Update(dset, "Layers");
        } catch (Exception ex) {
            ...
        }
    }
}
My first question is: does the Update operation actually use the original SELECT statement to retrieve info from the database? If not, why is it required? I thought the DataSet kept track of modified rows, new rows, deleted rows, and so on, so that updating could be done without reading the whole data table again. Or does Update read only the records that are marked as modified in the DataTable?
My second question is: what is the best (or normal) way of working with DataSets and DataAdapters this way? Is it best practice to keep the original DataAdapters around for later use, or is it fine to create new ones as I did above? (Does the original DataAdapter keep any state from the load that a newly created one would not have?) Thank you.

Related

CsvReader not loading lines starting with #

I'm trying to load a text file (.csv) into a SQL Server database table. Each line in the file is supposed to be loaded into a single column in the table. I find that lines starting with "#" are skipped, with no error. For example, the first two of the following four lines are loaded fine, but the last two are not. Does anybody know why?
ThisLineShouldBeLoaded
This one as well
#ThisIsATestLine
#This is another test line
Here's the segment of my code:
var sqlConn = connection.StoreConnection as SqlConnection;
sqlConn.Open();
CsvReader reader = new CsvReader(new StreamReader(f), false);
using (var bulkCopy = new SqlBulkCopy(sqlConn))
{
    bulkCopy.DestinationTableName = "dbo.TestTable";
    try
    {
        reader.SkipEmptyLines = true;
        bulkCopy.BulkCopyTimeout = 300;
        bulkCopy.WriteToServer(reader);
        reader.Dispose();
        reader = null;
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        System.Diagnostics.Debug.WriteLine(ex.Message);
        throw;
    }
}
# is the default comment character for CsvReader. You can change the comment character by changing the Comment property of the Configuration object, or disable comment processing altogether by setting the AllowComments property to false, e.g.:
reader.Configuration.AllowComments = false;
SqlBulkCopy doesn't deal with CSV files at all; it sends whatever data is passed to WriteToServer to the database. It doesn't care where the data came from or what it contains, as long as the column mappings match.
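To illustrate the mapping point, here's a minimal, hypothetical sketch (the destination column name "Line" is made up, not from the question) of adding an explicit column mapping before calling WriteToServer:

// Assumes an open SqlConnection sqlConn and an IDataReader reader,
// e.g. the CsvReader above.
using (var bulkCopy = new SqlBulkCopy(sqlConn))
{
    bulkCopy.DestinationTableName = "dbo.TestTable";
    // Map source column 0 to the destination column "Line".
    // Without explicit mappings, columns are matched by ordinal.
    bulkCopy.ColumnMappings.Add(0, "Line");
    bulkCopy.WriteToServer(reader);
}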
Update
Assuming LumenWorks.Framework.IO.Csv refers to this project, the comment character can be specified in the constructor. One could set it to something that wouldn't appear in a normal file, perhaps even the NUL character, the default char value:
CsvReader reader = new CsvReader(new StreamReader(f), false, comment: default);
or
CsvReader reader = new CsvReader(new StreamReader(f), false, comment: '\0');

Trying to update values in a standard object from metadata

I am using a before trigger (Trigger.isBefore).
In System.debug(opp.get(metaData.get(0).Opportunity_Field_Name__c)) the correct value is shown, but it is not updated on the Opportunity object.
Below are the trigger and its Apex class.
Trigger
trigger MetadataObjectFieldMapping on Opportunity (before insert, before update)
{
    if (Trigger.isInsert || Trigger.isUpdate)
    {
        MetadataObjectFieldMappingHandler oppHandler = new MetadataObjectFieldMappingHandler();
        oppHandler.Show(Trigger.new);
    }
}
And the Apex class:
public class MetadataObjectFieldMappingHandler {
    List<String> strAccField = new List<String>();

    // Getting the list of metadata values
    List<Object_Field_Mapping__mdt> metaData = new List<Object_Field_Mapping__mdt>(
        [SELECT Account_Field_Name__c,
                Opportunity_Field_Name__c
         FROM Object_Field_Mapping__mdt]);

    // Function to check whether the field name exists on the object or not
    public Boolean hello(String objName, String fieldName)
    {
        Boolean temp = false;
        // Describe the object to get all of its fields
        Map<String, Schema.SObjectField> accFields = Schema.getGlobalDescribe().get(objName).getDescribe().fields.getMap();
        for (Schema.SObjectField field : accFields.values())
        {
            strAccField.add(field + '');
        }
        // Check whether fieldName is among the object's fields
        if (strAccField.contains(fieldName)) {
            System.debug('PASS ' + fieldName);
            temp = true;
        }
        return temp;
    }

    public void Show(List<Opportunity> newOppList)
    {
        Boolean test1 = hello('Account', metaData.get(0).Account_Field_Name__c);
        Boolean test2 = hello('Opportunity', metaData.get(0).Opportunity_Field_Name__c);
        // If both field names exist
        if (test1 && test2) {
            // Getting the value from the Opportunity using a dynamic query
            String query = 'Select Account.' + metaData.get(0).Account_Field_Name__c + ', ' + metaData.get(0).Opportunity_Field_Name__c + ' from Opportunity where Id IN :newOppList';
            List<Opportunity> oppList = Database.query(query);
            for (Opportunity opp : oppList) {
                opp.put(
                    metaData.get(0).Opportunity_Field_Name__c,
                    opp.Account.get(metaData.get(0).Account_Field_Name__c)
                );
                System.debug(opp.get(metaData.get(0).Opportunity_Field_Name__c));
            }
        }
    }
}
Can you please tell me why the value is not updated on the Opportunity object even though it shows in the debug logs?
Multiple fails here I think.
Apex is case-insensitive when you do if('a' == 'A'). But when comparing Strings in collections (Lists, Sets, Map keys) it suddenly becomes case-sensitive.
List<String> fields = new List<String>{'Id', 'Name'};
System.debug(fields.contains('name')); // false
(This should have been a Set<String>, by the way, for performance and logical readability.) So I suspect something's fishy there with the casing. You didn't show your metadata, but check that too (you have only 1 row now, right? If you have more than one - well, your metaData.get(0) essentially returns a random row).
I don't like the cast from Schema.SObjectField to String either.
Next: String query = 'Select Account.'+metaData.get(0).Account_Field_Name__c+', '+metaData.get(0).Opportunity_Field_Name__c+' from Opportunity where Id IN : newOppList ';
This has a chance of working in before update, but for sure not in before insert. Nothing's in the database yet, so your query will return zero results. You have to loop through Trigger.new, collect the Account Ids, query the Accounts (directly, not via the Opportunity table) and then make a final loop that writes the data.
You passed newOppList to Show(). If you want to get the save to the database for free - you should modify the values on the originals, in newOppList. Instead you modify the in-memory results of the query (oppList). Nothing will happen to them; they'll be discarded. If you wanted to save them, you'd have to do it manually (but then you risk entering a loop of update triggers, and SF will stop you).
Are you sure this has to be code? It sounds like a job for a workflow or Process Builder. Or make them formula fields so you always display a fresh value instead of copying like this... When something changes on the Account it won't automatically cascade down to all the Opportunities unless you add another trigger/Process Builder...

how to implement this simple logic in spring batch?

I tried to make this as simple as possible. I'm new to Spring Batch, and I have a small issue understanding how to relate Spring Batch items together, especially when it comes to multi-step jobs. Below is my logic (simplified, not code). I don't know how to implement it in Spring Batch, but I thought this might be the right structure:
reader_money
reader_details
tasklet
reader_profit
tasklet_calculation
writer
However, please correct me if I'm wrong, and provide some code if possible.
Thank you very much.
LOGIC:
sql = "select * from MONEY where id= user input"; //the user will input the condition
while (records are available) {
int currency= resultset(currency column);
sql= "select * from DETAILS where D_currency = currency";
while (records are available) {
int amount= resultset(amount column);
string money_flag= resultset(money_type column);
sql= "select * from PROFIT where Mtypes = money_type";
while (records are available) {
int revenue= resultset(revenue);
if (money_type== 1) {
int net_profit= revenue * 3.75;
sql = "update PROFIT set Nprofit = net_profit";
}
else (money_type== 2) {
int net_profit = (revenue - 5 ) * 3.7 ;
sql = "update PROFIT set Nprofit = net_profit";
}
}
sql="update DETAILS set detail_falg = 001 ";
}
sql = "update MONEY set currency_flag = 009";
}
To fit this into a 'conventional' Spring Batch configuration, you would need to flatten the three loops into one if possible - perhaps a SQL statement that returns everything in one loop, similar to:
select p.revenue, d.amount from PROFIT p, DETAILS d, MONEY m where p.MTypes = d.money_type and d.D_currency = m.currency and m.id = :?
once you've "flattened" it, you then fall into the more 'conventional' read/process/write of a chunk pattern where the reader retrieves a record from the resultset, the processor performs the money_type logic, and the writer then executes the 'update' statement.
Check out the ItemReaderAdapter, with which you could place all your SQL in some kind of DAO that returns a list of aggregated objects containing all the info you need for your calculation.
Or
You could use the CompositeItemReader pattern. You basically define multiple ItemReaders inside one master ItemReader. The read() method will invoke all the inner ItemReaders before going to the processor/writer phase.
I could post you some example... but I have to leave :-(.
Leave a comment if you need an example.

Insert query in jasper reports

Is it possible to execute an "insert query" in iReport/JasperReports during report generation?
Yes, the idea you need is parameters, using this syntax: $P!{PARAM_NAME}.
So your entire SQL query (or other type of query) could be simply $P!{SQL}. Then you pass in exactly the dynamic SQL that you need.
UPDATE:
After reading Sharad's comment, I realized that my answer above is not good. What I wrote is true... but it fails to address the core question.
No, your report cannot really execute an insert statement. Strictly speaking, I'm sure it's not impossible. You could add a scriptlet or custom function in a .jar file that makes a connection and does an insert. But realistically speaking... a report will execute one or more queries. The JR framework is not intended to execute inserts or updates.
Yes you can. You can execute the query when you want to display the report. Here is a sample that works for me.
try {
    Map<String, Object> parameters = new HashMap<>();
    String connectionString = "jdbc:mysql://localhost/myDb";
    Class.forName("com.mysql.jdbc.Driver");
    Connection conn = DriverManager.getConnection(connectionString, "myUsername", "myPassword");
    PreparedStatement stmt = conn.prepareStatement(query);
    ResultSet rs = stmt.executeQuery();
    JRResultSetDataSource rsdt = new JRResultSetDataSource(rs);
    JasperPrint jp;
    jp = JasperFillManager.fillReport("sourceFileName.jasper", parameters, rsdt);
    JasperViewer jv = new JasperViewer(jp, false);
    jv.setVisible(true);
} catch (ClassNotFoundException | SQLException | JRException ex) {
    ex.printStackTrace();
}

Best way to check if object exists in Entity Framework? [closed]

What is the best way to check if an object exists in the database from a performance point of view? I'm using Entity Framework 1.0 (ASP.NET 3.5 SP1).
If you don't want to execute SQL directly, the best way is to use Any(). This is because Any() will return as soon as it finds a match. Another option is Count(), but this might need to check every row before returning.
Here's an example of how to use it:
if (context.MyEntity.Any(o => o.Id == idToMatch))
{
    // Match!
}
And in VB.NET:
If context.MyEntity.Any(Function(o) o.Id = idToMatch) Then
    ' Match!
End If
From a performance point of view, I guess that a direct SQL query using the EXISTS command would be appropriate. See here for how to execute SQL directly in Entity Framework: http://blogs.microsoft.co.il/blogs/gilf/archive/2009/11/25/execute-t-sql-statements-in-entity-framework-4.aspx
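For illustration, here is a minimal sketch of such a check with plain ADO.NET (the entity/table name and the connection string handling are assumptions, not from the question):

using System.Data.SqlClient;

public static bool Exists(string connectionString, int idToMatch)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT CASE WHEN EXISTS (SELECT 1 FROM MyEntity WHERE Id = @id) THEN 1 ELSE 0 END",
        conn))
    {
        cmd.Parameters.AddWithValue("@id", idToMatch);
        conn.Open();
        // ExecuteScalar returns the single value produced by the query
        return (int)cmd.ExecuteScalar() == 1;
    }
}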
I had to manage a scenario where the percentage of duplicates in the new data records was very high, and many thousands of database calls were being made to check for duplicates (so the CPU spent a lot of time at 100%). In the end I decided to keep the last 100,000 records cached in memory. This way I could check for duplicates against the cached records, which was extremely fast compared to a LINQ query against the SQL database, and then write any genuinely new records to the database (as well as add them to the data cache, which I also sorted and trimmed to keep its length manageable).
Note that the raw data was a CSV file that contained many individual records that had to be parsed. The records in each consecutive file (which came at a rate of about 1 every 5 minutes) overlapped considerably, hence the high percentage of duplicates.
In short, if you have timestamped raw data coming in, pretty much in order, then using a memory cache might help with the record duplication check.
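A minimal sketch of that caching idea, with hypothetical names (the string key and the cache size are assumptions, not from my original code):

using System.Collections.Generic;

public class DuplicateFilter
{
    private const int CacheLimit = 100000;
    private readonly HashSet<string> _recentKeys = new HashSet<string>();
    private readonly Queue<string> _insertionOrder = new Queue<string>();

    // Returns true if the key was seen recently; otherwise records it
    // and trims the cache so it stays at a manageable length.
    public bool IsDuplicate(string key)
    {
        if (!_recentKeys.Add(key))
            return true;                      // already cached: duplicate
        _insertionOrder.Enqueue(key);
        if (_insertionOrder.Count > CacheLimit)
            _recentKeys.Remove(_insertionOrder.Dequeue());
        return false;                         // genuinely new record
    }
}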
I know this is a very old thread, but just in case someone like myself needs this solution in VB.NET, here's what I used, based on the answers above.
Private Function ValidateUniquePayroll(PropertyToCheck As String) As Boolean
    ' Return True if the payroll number is unique
    Dim rtnValue = False
    Dim context = New CPMModel.CPMEntities
    If (context.Employees.Any()) Then ' Check if there are "any" records in the Employee table
        Dim employee = From c In context.Employees Select c.PayrollNumber ' Select just the PayrollNumber column to work with
        For Each item As Object In employee ' Loop through each employee in the Employees entity
            If (item = PropertyToCheck) Then ' Check if the PayrollNumber in the current row matches PropertyToCheck
                ' Found a match, so return False
                rtnValue = False
                Exit For
            Else
                ' No match so far, return True (unique)
                rtnValue = True
            End If
        Next
    Else
        ' There are currently no employees in the entity, so return True (unique)
        rtnValue = True
    End If
    Return rtnValue
End Function
I had some trouble with this - my EntityKey consists of three properties (a PK with 3 columns) and I didn't want to check each of the columns, because that would be ugly.
I thought about a solution that works every time with all entities.
Another reason for this is that I don't like catching UpdateExceptions every time.
A little bit of Reflection is needed to get the values of the key properties.
The code is implemented as an extension to simplify the usage as:
context.EntityExists<MyEntityType>(item);
Have a look:
public static bool EntityExists<T>(this ObjectContext context, T entity)
    where T : EntityObject
{
    object value;
    var entityKeyValues = new List<KeyValuePair<string, object>>();
    var objectSet = context.CreateObjectSet<T>().EntitySet;
    foreach (var member in objectSet.ElementType.KeyMembers)
    {
        var info = entity.GetType().GetProperty(member.Name);
        var tempValue = info.GetValue(entity, null);
        var pair = new KeyValuePair<string, object>(member.Name, tempValue);
        entityKeyValues.Add(pair);
    }
    var key = new EntityKey(objectSet.EntityContainer.Name + "." + objectSet.Name, entityKeyValues);
    if (context.TryGetObjectByKey(key, out value))
    {
        return value != null;
    }
    return false;
}
I just check if the object is null; it works 100% for me.
try
{
    var ID = Convert.ToInt32(Request.Params["ID"]);
    var Cert = (from cert in db.TblCompCertUploads where cert.CertID == ID select cert).FirstOrDefault();
    if (Cert != null)
    {
        db.TblCompCertUploads.DeleteObject(Cert);
        db.SaveChanges();
        ViewBag.Msg = "Deleted Successfully";
    }
    else
    {
        ViewBag.Msg = "Not Found !!";
    }
}
catch
{
    ViewBag.Msg = "Something Went wrong";
}
Why not do it like this?
var result = ctx.table.Where(x => x.UserName == "Value").FirstOrDefault();
if (result?.field == value)
{
    // Match!
}
Best way to do it:
Regardless of what your object is and for what table in the database, the only thing you need is the primary key in the object.
C# Code
var dbValue = EntityObject.Entry(obj).GetDatabaseValues();
if (dbValue == null)
{
    // Doesn't exist
}
VB.NET Code
Dim dbValue = EntityObject.Entry(obj).GetDatabaseValues()
If dbValue Is Nothing Then
    ' Doesn't exist
End If