I am trying to do the following:
http://msdn.microsoft.com/en-us/library/dd456857.aspx
I created the function in the .edmx file, just before the closing </Schema> element:
<Function Name="YearsSince" ReturnType="Edm.Int32">
<Parameter Name="date" Type="Edm.DateTime" />
<DefiningExpression>
Year(CurrentDateTime()) - Year(date)
</DefiningExpression>
</Function>
</Schema>
Now, I want to be able to use that in a query.
I created the following code in the ApplicantPosition partial class
[EdmFunction("HRModel", "YearsSince")]
public static int YearsSince(DateTime date)
{
throw new NotSupportedException("Direct calls are not supported.");
}
And I am trying to do the following query
public class Class1
{
public void question()
{
using (HREntities context = new HREntities())
{
// Retrieve instructors hired more than 10 years ago.
var applicantPositions = from p in context.ApplicantPositions
where YearsSince((DateTime)p.AppliedDate) > 10
select p;
foreach (var applicantPosition in applicantPositions)
{
Console.WriteLine(applicantPosition.AppliedDate);
}
}
}
}
YearsSince is not recognized. The MSDN tutorial does not show exactly where I need to put the function, so that might be my problem.
Your static YearsSince method must be defined in some class, so if that class is not Class1 you must qualify the call with the class name. Check also this answer.
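For example, since the stub above lives in the ApplicantPosition partial class, the query in Class1 has to spell that out; a minimal sketch of the corrected query:
using (HREntities context = new HREntities())
{
    // Qualify the call with the class that declares the EdmFunction stub.
    var applicantPositions = from p in context.ApplicantPositions
                             where ApplicantPosition.YearsSince((DateTime)p.AppliedDate) > 10
                             select p;
    foreach (var applicantPosition in applicantPositions)
    {
        Console.WriteLine(applicantPosition.AppliedDate);
    }
}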
Related
I've got an Entity Framework 4 entity model in my program. There's a stored function I've defined in my SQL Anywhere 12.0.1 database called GuidToPrefix:
CREATE OR REPLACE FUNCTION GuidToPrefix( ID UNIQUEIDENTIFIER ) RETURNS INT AS
BEGIN
RETURN CAST( CAST( ID AS BINARY(4) ) AS INT )
END;
Following the directions in this MSDN article, I added the function to my EDMX:
<Function Name="GuidToPrefix" ReturnType="int" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="true" ParameterTypeSemantics="AllowImplicitConversion" Schema="DBA">
<Parameter Name="ID" Type="uniqueidentifier" Mode="In" />
</Function>
To be totally honest, I updated the model from the database and checked off the function in the list on the first tab of the wizard. I don't know if that makes a difference or not, but I can't see why it would.
According to the article, I need to add a definition of the function in a C# class. My problem is it doesn't tell me what class to put that in. Do I add an entirely new class? Do I create a new .CS file and do something like this:
public static class DbFunctions {
[EdmFunction( "CarSystemModel.Store", "GuidToPrefix" )]
public static int GuidToPrefix( Guid id ) {
throw new NotSupportedException( "Direct calls to GuidToPrefix are not supported." );
}
}
or do I put that in a partial of the entities class?
public partial class MyEntities {
[EdmFunction( "CarSystemModel.Store", "GuidToPrefix" )]
public static int GuidToPrefix( Guid id ) {
throw new NotSupportedException( "Direct calls to GuidToPrefix are not supported." );
}
}
I have two projects where this entity model is used. One is a class library, and the model is defined in it. The other is another class library in another solution that just uses it. I've tried both examples above, and in both cases the query in the second class library generates this error from the compiler:
The name 'GuidToPrefix' does not exist in the current context
Obviously I'm not doing something right. Has anyone tried this and got it to work?
I found the answer to this one.
Recall that I created a file with a partial of the MyEntities class in my project where the Entity Model is defined:
public partial class MyEntities {
[EdmFunction( "CarSystemModel.Store", "GuidToPrefix" )]
public static int GuidToPrefix( Guid id ) {
throw new NotSupportedException( "Direct calls to GuidToPrefix are not supported." );
}
}
This was fine and it all compiled. My problem wasn't here but in the project where I have to use the function.
In that project, in a class called DataInterface, I have a method with code like this:
var query = from read in context.Reads
from entry in context.Entries
.Where( e => GuidToPrefix( read.ID ) == e.PrefixID && read.ID == e.ID )
.....
The problem was that I needed to add the name of the class that contained the C# declaration of the GuidToPrefix function in the Where clause. That is, I needed to write the above expression as:
var query = from read in context.Reads
from entry in context.Entries
.Where( e => MyEntities.GuidToPrefix( read.ID ) == e.PrefixID && read.ID == e.ID )
.....
This compiles and when it runs it uses the function in the database in the LEFT OUTER JOIN as I wanted.
I have a scalar function:
CREATE FUNCTION [dbo].[CheckLocation]
(
@locationId Int
)
RETURNS bit
AS
BEGIN
-- code
END
I want to use it in Entity Framework context.
I have added this in the *.edmx file:
<Function Name="CheckLocation" ReturnType="bit" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="true" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo" >
<Parameter Name="locationId" Type="int" Mode="In" />
</Function>
I have also created a partial class with a method decorated with EdmFunctionAttribute:
public partial class MainModelContainer
{
[EdmFunction("MainModel.Store", "CheckLocation")]
public bool CheckLocation(int locationId)
{
throw new NotSupportedException("Direct calls not supported");
}
}
I try to use this function like this:
Context.CheckLocation(locationId);
And I get NotSupportedException("Direct calls not supported").
It works within a Select call, but that does not suit me.
Help please!
How can I call this function without using Select?
You need to access it through a Select:
var results = context.Locations
    .Select(l => new { location = context.CheckLocation(locationId) });
Using WCF Data Services (and the latest Entity Framework), I want to return data from a stored procedure. The returned sproc fields do not match 1:1 with any entity in my db, so I create a new complex type for it in the edmx model (rather than attaching an existing entity):
Right-click the *.edmx model / Add / Function Import
Select the sproc (returns three fields) - GetData
Click Get Column Information
Add the Function Import Name: GetData
Click Create new Complex Type - GetData_Result
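Those wizard steps generate a GetData function import on the context. With the default EF 4 ObjectContext code generation it comes out roughly like this (a sketch, not the literal generated code):
// Rough shape of the generated function import on PrimaryDBContext (assumed;
// the exact signature depends on the code-generation template in use).
public ObjectResult<GetData_Result> GetData()
{
    return base.ExecuteFunction<GetData_Result>("GetData");
}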
In the service, I define:
[WebGet]
public List<GetData_Result> GetDataSproc()
{
PrimaryDBContext context = new PrimaryDBContext();
return context.GetData().ToList();
}
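For completeness (it is presumably already in place, since the browser shows data), the service class also has to expose the operation in InitializeService, roughly like this:
// Assumed access configuration for the data service (not shown in the original post).
public static void InitializeService(DataServiceConfiguration config)
{
    config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    config.SetServiceOperationAccessRule("GetDataSproc", ServiceOperationRights.All);
    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
}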
I created a quick console app to test, and added a reference to System.Data.Services and System.Data.Services.Client - this after running Install-Package EntityFramework -Pre, but the versions on the libraries are 4.0 and not 5.x.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.Services.Client;
using ConsoleApplication1.PrimaryDBService;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
DataServiceContext context = new DataServiceContext(new Uri("http://localhost:50100/PrimaryDataService1.svc/"));
IEnumerable<GetData_Result> result = context.Execute<GetData_Result>(new Uri("http://localhost:50100/PrimaryDataService1.svc/GetDataSproc"));
foreach (GetData_Result w in result)
{
Console.WriteLine(w.ID + "\t" + w.WHO_TYPE_NAME + "\t" + w.CREATED_DATE);
}
Console.Read();
}
}
}
I didn't use the UriKind.Relative or anything else to complicate this.
When I navigate in the browser to the URL, I see data, but when I consume it in my console app, I get nothing at all.
Adding tracing to the mix:
<system.diagnostics>
<sources>
<source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
<listeners>
<add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\temp\WebWCFDataService.svclog" />
</listeners>
</source>
</sources>
</system.diagnostics>
... and opening it with the Microsoft Service Trace Viewer, I see two identical warnings:
Configuration evaluation context not found.
<E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
<System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
<EventID>524312</EventID>
<Type>3</Type>
<SubType Name="Warning">0</SubType>
<Level>4</Level>
<TimeCreated SystemTime="2012-04-03T14:50:11.8355955Z" />
<Source Name="System.ServiceModel" />
<Correlation ActivityID="{66f1a241-2613-43dd-be0c-341149e37d30}" />
<Execution ProcessName="WebDev.WebServer40" ProcessID="5176" ThreadID="10" />
<Channel />
<Computer>MyComputer</Computer>
</System>
<ApplicationData>
<TraceData>
<DataItem>
<TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Warning">
<TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.EvaluationContextNotFound.aspx</TraceIdentifier>
<Description>Configuration evaluation context not found.</Description>
<AppDomain>fd28c9cc-1-129779382115645955</AppDomain>
</TraceRecord>
</DataItem>
</TraceData>
</ApplicationData>
</E2ETraceEvent>
So why am I able to see data from the browser, but not when consumed in my app?
-- UPDATE --
I downloaded the Microsoft WCF Data Services October 2011 CTP, which exposes DataServiceProtocolVersion.V3, created a new host and client, and referenced Microsoft.Data.Services.Client (v4.99.2.0). Now I am getting the following error on the client when trying to iterate in the foreach loop:
There is a type mismatch between the client and the service. Type
'ConsoleApplication1.WcfDataServiceOctCTP1.GetDataSproc_Result' is an
entity type, but the type in the response payload does not represent
an entity type. Please ensure that types defined on the client match
the data model of the service, or update the service reference on the
client.
I tried the same thing referencing an actual entity instead - that works fine, so it is the same underlying issue.
Recap: I want to create a high-performing WCF service DAL (data access layer) that returns strongly typed stored procedure results. I initially used a "WCF Data Services" project to accomplish this. It seems as though it has its limitations, and after reviewing performance metrics of different ORMs, I ended up using Dapper for the data access inside a basic WCF service.
I first created the *.edmx model and created the POCO for my sproc.
Next, I created a BaseRepository and a MiscDataRepository:
namespace WcfDataService.Repositories
{
public abstract class BaseRepository
{
protected static void SetIdentity<T>(IDbConnection connection, Action<T> setId)
{
dynamic identity = connection.Query("SELECT @@IDENTITY AS Id").Single();
T newId = (T)identity.Id;
setId(newId);
}
protected static IDbConnection OpenConnection()
{
IDbConnection connection = new SqlConnection(WebConfigurationManager.ConnectionStrings["PrimaryDBConnectionString"].ConnectionString);
connection.Open();
return connection;
}
}
}
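SetIdentity is not exercised in the snippets shown here; a hypothetical insert method in a derived repository would use it roughly like this (SomeEntity, SomeTable and the column names are made up for illustration):
// Hypothetical usage of BaseRepository.SetIdentity after an INSERT (illustrative names only).
public void Add(SomeEntity entity)
{
    using (IDbConnection connection = OpenConnection())
    {
        connection.Execute("INSERT INTO SomeTable (Name) VALUES (@Name)", new { entity.Name });
        SetIdentity<int>(connection, id => entity.Id = id);
    }
}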
namespace WcfDataService.Repositories
{
public class MiscDataRepository : BaseRepository
{
public IEnumerable<GetData_Result> SelectAllData()
{
using (IDbConnection connection = OpenConnection())
{
var theData = connection.Query<GetData_Result>("sprocs_GetData",
commandType: CommandType.StoredProcedure);
return theData;
}
}
}
}
The service class:
namespace WcfDataService
{
public class Service1 : IService1
{
private MiscDataRepository miscDataRepository;
public Service1()
: this(new MiscDataRepository())
{
}
public Service1(MiscDataRepository miscDataRepository)
{
this.miscDataRepository = miscDataRepository;
}
public IEnumerable<GetData_Result> GetData()
{
return miscDataRepository.SelectAllData();
}
}
}
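The IService1 contract itself is not shown above; it would look roughly like this (a sketch matching the operation used below):
using System.Collections.Generic;
using System.ServiceModel;
// Assumed shape of the service contract implemented by Service1.
[ServiceContract]
public interface IService1
{
    [OperationContract]
    IEnumerable<GetData_Result> GetData();
}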
... and then created a simple console application to display the data:
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
Service1Client client = new Service1Client();
IEnumerable<GetData_Result> result = client.GetData();
foreach (GetData_Result d in result)
{
Console.WriteLine(d.ID + "\t" + d.WHO_TYPE_NAME + "\t" + d.CREATED_DATE);
}
Console.Read();
}
}
}
I also accomplished this using PetaPOCO, which took much less time to set up than Dapper - a few lines of code:
namespace PetaPocoWcfDataService
{
// NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "Service1" in code, svc and config file together.
public class Service1 : IService1
{
public IEnumerable<GetData_Result> GetData()
{
var databaseContext = new PetaPoco.Database("PrimaryDBContext"); // using PetaPOCO for data access
databaseContext.EnableAutoSelect = false; // use the sproc to create the select statement
return databaseContext.Query<GetData_Result>("exec sproc_GetData");
}
}
}
I like how quick and simple it was to setup PetaPOCO, but using the repository pattern with Dapper will scale much better for an enterprise project.
It was also quite simple to create complex types directly from the EDMX for any stored procedure and then consume them.
For example, I created a complex return type called ProfileDetailsByID_Result based on the sq_mobile_profile_get_by_id sproc.
public ProfileDetailsByID_Result GetAllProfileDetailsByID(int profileID)
{
using (IDbConnection connection = OpenConnection("DatabaseConnectionString"))
{
try
{
var profile = connection.Query<ProfileDetailsByID_Result>("sq_mobile_profile_get_by_id",
new { profileid = profileID },
commandType: CommandType.StoredProcedure).FirstOrDefault();
return profile;
}
catch (Exception ex)
{
ErrorLogging.Instance.Fatal(ex); // use singleton for logging
return null;
}
}
}
So using Dapper along with some EDMX entities seems to be a nice quick way to get things going. I may be mistaken, but I'm not sure why Microsoft didn't think this all the way through - no support for complex types with OData.
--- UPDATE ---
So I finally got a response from Microsoft to the issue I raised over a month ago:
We have done research on this and we have found that the Odata client
library doesn’t support complex types. Therefore, I regret to inform
you that there is not much that we can do to solve it.
*Optional: In order to obtain a solution for this issue, you have to use a Xml to Linq kind of approach to get the complex types.
Thank you very much for your understanding in this matter. Please let
me know if you have any questions. If we can be of any further
assistance, please let us know.
Best regards,
Seems odd.
Scenario
To make it simple, let's suppose I have an ItemReader that returns me 25 rows.
The first 10 rows belong to student A
The next 5 belong to student B
and the 10 remaining belong to student C
I want to aggregate them together logically, say by studentId, and flatten them to end up with one row per student.
Problem
If I understand correctly, setting the commit interval to 5 will do the following:
Send 5 rows to the Processor (which will aggregate them or do any business logic I tell it to).
After processing, it will write those 5 rows.
Then it will do it again for the next 5 rows and so on.
If that is true, then for the next five I will have to check the already-written rows, read them back, aggregate them with the ones I am currently processing, and write them again.
I personally do not like that.
What is the best practice to handle a situation like this in Spring Batch?
Alternative
Sometimes I feel that it is much easier to write a regular Spring JDBC main program, where I have full control of what I want to do. However, I wanted to take advantage of the job repository, the state monitoring of the job, the ability to restart and skip, job and step listeners....
My Spring Batch Code
My module-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:batch="http://www.springframework.org/schema/batch"
xsi:schemaLocation="http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.1.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
<description>Example job to get you started. It provides a skeleton for a typical batch application.</description>
<batch:job id="job1">
<batch:step id="step1" >
<batch:tasklet transaction-manager="transactionManager" start-limit="100" >
<batch:chunk reader="attendanceItemReader"
processor="attendanceProcessor"
writer="attendanceItemWriter"
commit-interval="10"
/>
</batch:tasklet>
</batch:step>
</batch:job>
<bean id="attendanceItemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource">
<ref bean="sourceDataSource"/>
</property>
<property name="sql"
value="select s.student_name ,s.student_id ,fas.attendance_days ,fas.attendance_value from K12INTEL_DW.ftbl_attendance_stumonabssum fas inner join k12intel_dw.dtbl_students s on fas.student_key = s.student_key inner join K12INTEL_DW.dtbl_schools ds on fas.school_key = ds.school_key inner join k12intel_dw.dtbl_school_dates dsd on fas.school_dates_key = dsd.school_dates_key where dsd.rolling_local_school_yr_number = 0 and ds.school_code = ? and s.student_activity_indicator = 'Active' and fas.LOCAL_GRADING_PERIOD = 'G1' and s.student_current_grade_level = 'Gr 9' order by s.student_id"/>
<property name="preparedStatementSetter" ref="attendanceStatementSetter"/>
<property name="rowMapper" ref="attendanceRowMapper"/>
</bean>
<bean id="attendanceStatementSetter" class="edu.kdc.visioncards.preparedstatements.AttendanceStatementSetter"/>
<bean id="attendanceRowMapper" class="edu.kdc.visioncards.rowmapper.AttendanceRowMapper"/>
<bean id="attendanceProcessor" class="edu.kdc.visioncards.AttendanceProcessor" />
<bean id="attendanceItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
<property name="resource" value="file:target/outputs/passthrough.txt"/>
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.PassThroughLineAggregator" />
</property>
</bean>
</beans>
My supporting classes for the Reader.
A PreparedStatementSetter
package edu.kdc.visioncards.preparedstatements;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.springframework.jdbc.core.PreparedStatementSetter;
public class AttendanceStatementSetter implements PreparedStatementSetter {
public void setValues(PreparedStatement ps) throws SQLException {
ps.setInt(1, 7);
}
}
and a RowMapper
package edu.kdc.visioncards.rowmapper;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;
import edu.kdc.visioncards.dto.AttendanceDTO;
public class AttendanceRowMapper<T> implements RowMapper<AttendanceDTO> {
public static final String STUDENT_NAME = "STUDENT_NAME";
public static final String STUDENT_ID = "STUDENT_ID";
public static final String ATTENDANCE_DAYS = "ATTENDANCE_DAYS";
public static final String ATTENDANCE_VALUE = "ATTENDANCE_VALUE";
public AttendanceDTO mapRow(ResultSet rs, int rowNum) throws SQLException {
AttendanceDTO dto = new AttendanceDTO();
dto.setStudentId(rs.getString(STUDENT_ID));
dto.setStudentName(rs.getString(STUDENT_NAME));
dto.setAttDays(rs.getInt(ATTENDANCE_DAYS));
dto.setAttValue(rs.getInt(ATTENDANCE_VALUE));
return dto;
}
}
My processor
package edu.kdc.visioncards;
import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.item.ItemProcessor;
import edu.kdc.visioncards.dto.AttendanceDTO;
public class AttendanceProcessor implements ItemProcessor<AttendanceDTO, Map<Integer, AttendanceDTO>> {
private Map<Integer, AttendanceDTO> map = new HashMap<Integer, AttendanceDTO>();
public Map<Integer, AttendanceDTO> process(AttendanceDTO dto) throws Exception {
if(map.containsKey(new Integer(dto.getStudentId()))){
AttendanceDTO attDto = (AttendanceDTO)map.get(new Integer(dto.getStudentId()));
attDto.setAttDays(attDto.getAttDays() + dto.getAttDays());
attDto.setAttValue(attDto.getAttValue() + dto.getAttValue());
}else{
map.put(new Integer(dto.getStudentId()), dto);
}
return map;
}
}
My concerns about the code above
In the Processor, I create a HashMap, and as I process the rows I check whether I already have that student in the Map; if not, I add it. If it is already there, I grab it, get the values that I am interested in, and add them to the row that I am currently processing.
After that, the Spring Batch framework writes to a file according to my configuration.
My question is as follows:
I do not want it to go to the writer yet; I want to process all the remaining rows first. How do I keep the Map that I have created in memory for the next set of rows that need to go through this same Processor? Every time a row is processed through AttendanceProcessor, the Map is initialized. Should I put the Map initialization in a static block?
In my application I created a CollectingJdbcCursorItemReader that extends the standard JdbcCursorItemReader and performs exactly what you need. Internally it uses my CollectingRowMapper: an extension of the standard RowMapper that maps multiple related rows to one object.
Here is the code of the ItemReader; the code of the CollectingRowMapper interface, together with an abstract implementation of it, is available in another answer of mine.
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.batch.item.ReaderNotOpenException;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.jdbc.core.RowMapper;
/**
* A JdbcCursorItemReader that uses a {@link CollectingRowMapper}.
* Like the superclass this reader is not thread-safe.
*
* @author Pino Navato
**/
public class CollectingJdbcCursorItemReader<T> extends JdbcCursorItemReader<T> {
private CollectingRowMapper<T> rowMapper;
private boolean firstRead = true;
/**
* Accepts a {@link CollectingRowMapper} only.
**/
@Override
public void setRowMapper(RowMapper<T> rowMapper) {
this.rowMapper = (CollectingRowMapper<T>)rowMapper;
super.setRowMapper(rowMapper);
}
/**
* Read next row and map it to item.
**/
@Override
protected T doRead() throws Exception {
if (rs == null) {
throw new ReaderNotOpenException("Reader must be open before it can be read.");
}
try {
if (firstRead) {
if (!rs.next()) { //Subsequent calls to next() will be executed by rowMapper
return null;
}
firstRead = false;
} else if (!rowMapper.hasNext()) {
return null;
}
T item = readCursor(rs, getCurrentItemCount());
return item;
}
catch (SQLException se) {
throw getExceptionTranslator().translate("Attempt to process next row failed", getSql(), se);
}
}
@Override
protected T readCursor(ResultSet rs, int currentRow) throws SQLException {
T result = super.readCursor(rs, currentRow);
setCurrentItemCount(rs.getRow());
return result;
}
}
You can use it just like the classic JdbcCursorItemReader: the only requirement is that you provide it a CollectingRowMapper instead of the classic RowMapper.
I always follow this pattern:
I make my reader scope to be "step", and in @PostConstruct I fetch the results and put them in a Map
In the processor, I convert the associatedCollection into a writable list, and send the writable list
In the ItemWriter, I persist the writable item(s) depending on the case
Because you changed your question, I am adding a new answer.
If the students are ordered, then there is no need for a list/map: you can keep exactly one student object in the processor as the "current" one and aggregate onto it until a new one appears (read: the id changes).
If the students are not ordered, you will never know when a specific student is "finished", and you would have to keep all students in a map, which cannot be written until the end of the complete read sequence.
Beware:
the processor needs to know when the reader is exhausted
it is hard to get this working with an arbitrary commit-rate and an "id" concept; if you aggregate items that are somehow identical, the processor just cannot know whether the currently processed item is the last one
basically the use case is either solved completely at the reader level or at the writer level (see the other answer)
private SimpleItem currentItem;
private StepExecution stepExecution;
@Override
public SimpleItem process(SimpleItem newItem) throws Exception {
SimpleItem returnItem = null;
if (currentItem == null) {
currentItem = new SimpleItem(newItem.getId(), newItem.getValue());
} else if (currentItem.getId() == newItem.getId()) {
// aggregate somehow
String value = currentItem.getValue() + newItem.getValue();
currentItem.setValue(value);
} else {
// "clone"/copy currentItem
returnItem = new SimpleItem(currentItem.getId(), currentItem.getValue());
// replace currentItem
currentItem = newItem;
}
// reader exhausted?
if(stepExecution.getExecutionContext().containsKey("readerExhausted")
&& (Boolean)stepExecution.getExecutionContext().get("readerExhausted")
&& currentItem.getId() == stepExecution.getExecutionContext().getInt("lastItemId")) {
returnItem = new SimpleItem(currentItem.getId(), currentItem.getValue());
}
return returnItem;
}
Basically you are talking about batch processing with changing IDs (1), where the batch has to keep track of the change.
For Spring / Spring Batch we are talking about:
an ItemWriter which checks the list of items for an id change
before the change, the items are stored in a temporary datastore (2) (List, Map, whatever) and are not written out
when the id changes, the aggregating/flattening business code runs on the items in the datastore and one item should be written; the datastore can then be used for the next items with the next id
this concept needs a reader which tells the step "I'm exhausted" in order to properly flush the temporary datastore at the end of the items (file/database)
Here is a rough and simple code example:
@Override
public void write(List<? extends SimpleItem> items) throws Exception {
// setup with first sharedId at startup
if (currentId == null){
currentId = items.get(0).getSharedId();
}
// check for change of sharedId in input
// keep items in temporary dataStore until id change of input
// call delegate if there is an id change or if the reader is exhausted
for (SimpleItem item : items) {
// already known sharedId, add to tempData
if (item.getSharedId() == currentId) {
tempData.add(item);
} else {
// or new sharedId, write tempData, empty it, keep new id
// the delegate does the flattening/aggregating
delegate.write(tempData);
tempData.clear();
currentId = item.getSharedId();
tempData.add(item);
}
}
// check if reader is exhausted, flush tempData
if ((Boolean) stepExecution.getExecutionContext().get("readerExhausted")
&& tempData.size() > 0) {
delegate.write(tempData);
// optional delegate.clear();
}
}
(1) assuming the items are ordered by an ID (it can be composite too)
(2) a HashMap Spring bean, for thread safety
Use a StepExecutionListener and store the records as a map in the StepExecutionContext; you can then group them in the writer or in a writer listener and write them all at once.
We have an application where we are creating an activity (say CallA); this activity will be used in the workflow project. This activity (CallA) will call a method which is present in another class (and another namespace). I have written sample code for the method being called below:
namespace WorkflowApplication1
{
class Class1
{
public int Trial(int a, int b)
{
return 23;
}
}
}
We want to use the InvokeMethod activity provided in the toolbox and do not want to use a CodeActivity.
If anybody has used this feature of WF 4.0, please help.
Thanks in advance.
In TargetType you have to point to the class that implements the method.
In MethodName you have to write the method's name. If the method is not static, you will need to create a workflow variable of that class type, initialize it in advance, and use it in the TargetObject property. You will also need a variable in your workflow to store the result (bound through the Result property of the InvokeMethod activity).
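For a non-static method like Trial, the designer setup described above corresponds roughly to this code-only sketch (assuming the question's Class1 is public, as in the answer below; the property names are those of System.Activities.Statements.InvokeMethod):
using System;
using System.Activities;
using System.Activities.Statements;
using WorkflowApplication1;

// Rough code-only equivalent of the designer configuration described above.
public static class InvokeTrialSketch
{
    public static Activity Build()
    {
        var target = new Variable<Class1>("class1Instance");
        var result = new Variable<int>("trialResult");

        return new Sequence
        {
            Variables = { target, result },
            Activities =
            {
                // Initialize the variable used as TargetObject (the method is not static).
                new Assign<Class1>
                {
                    To = new OutArgument<Class1>(target),
                    Value = new InArgument<Class1>(ctx => new Class1())
                },
                new InvokeMethod
                {
                    TargetObject = new InArgument<Class1>(target),
                    MethodName = "Trial",
                    Parameters = { new InArgument<int>(1), new InArgument<int>(2) },
                    Result = new OutArgument<int>(result)
                },
                new WriteLine
                {
                    Text = new InArgument<string>(ctx => result.Get(ctx).ToString())
                }
            }
        };
    }
}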
Hope it helps
Here follows a suggestion for this question
1) Create a Windows Forms Application
2) Add a class called Class1 and change the namespace to WorkflowApplication1
3) Change the whole code of Class1 to:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace WorkflowApplication1
{
public class Class1
{
public int Trial(int a, int b)
{
return 23;
}
}
}
4) Add an Activity called Activity1
5) Compile the solution
6) Open Activity1 and add a Sequence
7) Click on the Sequence and create 2 variables, as shown below
8) Insert an InvokeMethod and a WriteLine activity, as shown below
9) Edit the parameters of the InvokeMethod, as shown below
10) Add a button and double-click it to create the Click event
11) Add the following piece of code inside your Form1 class and change the button1_Click event
using System;
using System.Activities;
using System.Threading;
using System.Windows.Forms;
namespace Generic
{
public partial class Form1 : Form
{
WorkflowApplication WFApp = null;
AutoResetEvent WFAppEvent = null;
public void RunWFApp()
{
WFAppEvent = new AutoResetEvent(false);
WFApp = new WorkflowApplication(new Activity1());
WFApp.Completed = delegate (WorkflowApplicationCompletedEventArgs e)
{
WFAppEvent.Set();
};
WFApp.Run();
}
private void button1_Click(object sender, EventArgs e)
{
RunWFApp();
}
...
...
}
}
12) Open the Output window (Ctrl+Alt+O), run the application, click the button, and check that the number 23 is shown in the Output window.