Dynamically update the targeted provider in iPOJO

I have a component declared as:
<ipojo>
<component classname="HelloClass" name="helloCom" immediate="true">
<requires field="delayService" id="id1">
</requires>
</component>
<instance component="helloCom" name="hello">
<property name="requires.from">
<property name="id1" value="A"/>
</property>
</instance>
</ipojo>
The jar file of this component is helloComponent.jar.
Now I want to update (value="A") to (value="AA"), so I implemented a component using ConfigurationAdmin to update this property:
import java.io.IOException;
import java.util.Properties;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class ControllerReconfiguration {
    private ConfigurationAdmin m_configAdmin;

    @SuppressWarnings({ "rawtypes", "unchecked" })
    public void reconfigure() throws IOException {
        Configuration configuration = m_configAdmin.getConfiguration("hello", "file:./helloComponent.jar");
        configuration.setBundleLocation("file:./helloComponent.jar");
        Properties props = new Properties();
        //Dictionary props = new Hashtable();
        props.put("id1", "AA");
        configuration.update(props);
        System.out.println("Update");
    }
}
However, this ControllerReconfiguration component can't update the value 'A' (to 'AA') in the 'hello' instance.
How should I modify this ControllerReconfiguration component?
Thank you for your help.

Unfortunately, you can't push a new 'from' configuration like this.
However, you can use the iPOJO introspection API directly: http://felix.apache.org/documentation/subprojects/apache-felix-ipojo/apache-felix-ipojo-userguide/ipojo-advanced-topics/using-ipojo-introspection-api.html
Retrieve the Architecture service of the instance.
Retrieve the InstanceDescription and DependencyDescription.
Call the setFilter method, as sketched below.
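A rough, untested sketch of that approach (the class name IntrospectionReconfiguration is only illustrative; it assumes iPOJO 1.8+, where DependencyDescription exposes setFilter, and the handler name "org.apache.felix.ipojo:requires" and the filter that 'from' expands to are taken from the iPOJO documentation, so double-check them against your version):

import org.apache.felix.ipojo.annotations.Component;
import org.apache.felix.ipojo.annotations.Requires;
import org.apache.felix.ipojo.architecture.Architecture;
import org.apache.felix.ipojo.architecture.InstanceDescription;
import org.apache.felix.ipojo.handlers.dependency.DependencyDescription;
import org.apache.felix.ipojo.handlers.dependency.DependencyHandlerDescription;

@Component
public class IntrospectionReconfiguration {

    // one Architecture service is published per iPOJO instance
    @Requires
    private Architecture[] architectures;

    public void reconfigure() {
        for (Architecture arch : architectures) {
            InstanceDescription instance = arch.getInstanceDescription();
            if ("hello".equals(instance.getName())) {
                DependencyHandlerDescription handler = (DependencyHandlerDescription)
                        instance.getHandlerDescription("org.apache.felix.ipojo:requires");
                for (DependencyDescription dependency : handler.getDependencies()) {
                    if ("id1".equals(dependency.getId())) {
                        // 'from="A"' is equivalent to this filter, so retarget it to "AA"
                        dependency.setFilter("(|(instance.name=AA)(service.pid=AA))");
                    }
                }
            }
        }
    }
}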

Thanks Clement,
it works fine! :) I access the InstanceManager using the Factory.
For example, to access the InstanceManager of the component "hello.call.CallHello":
@Requires
private Factory[] factories;

for (Factory factory : factories) {
    /*
     * "hello.call.CallHello" is a component name,
     * note: it is not a component instance name.
     */
    if (factory.getName().equals("hello.call.CallHello")) {
        /*
         * A component can have many instances;
         * if there is only one instance,
         * get(0) returns the first instance.
         */
        InstanceManager im = (InstanceManager) factory.getInstances().get(0);
    }
}


Create a Log4net rolling file based on date

I need to create files for 45 separate locations (for example: Boston, London, etc.), and these file names have to be based on the date. Also, can I provide a maximum file size to roll the files and a maximum number of files to keep?
Basically a file name must look like: Info_Boston_(2019.02.25).txt
So far I have come up with the code below to roll by date, but I couldn't limit the file size to 1MB: the file grows beyond 1MB and a new rolling file is not created. Please assist.
<appender name="MyAppenderInfo" type="log4net.Appender.RollingFileAppender">
<param name="File" value="C:\\ProgramData\\Service\\Org\\Info"/>
<param name="RollingStyle" value="Date"/>
<param name="DatePattern" value="_(yyyy.MM.dd).\tx\t"/>
<param name="StaticLogFileName" value="false"/>
<maxSizeRollBackups value="10" />
<maximumFileSize value="1MB" />
<appendToFile value="true" />
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date %message%n" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="DEBUG" />
<levelMax value="INFO" />
</filter>
</appender>
To address your specific post, I would not do this with a config-based approach, as I think it would get rather cumbersome to manage. A more programmatic approach would be to generate the logging instances dynamically.
EDIT: I took down the original to post this reworked example based on this SO post: log4net: different logs on different file appenders at runtime
EDIT-2: I had to rework this again, as I realized I had omitted some required parts and had some things wrong after the rework. This is tested and working. However, a few things to note: you will need to provide the using statements on the controller for the logging class you make. Next, you will need to DI your logging directories in as I have done, or come up with another method of providing the list of log file outputs.
This will allow you to very cleanly and dynamically generate as many logging instances as you need, to as many independent locations as you would like. I pulled this example from a project I did and modified it a bit to fit your needs. Let me know if you have questions.
Create a DynamicLogger class which inherits from the base Logger in the hierarchy:
using log4net;
using log4net.Repository.Hierarchy;
public sealed class DynamicLogger : Logger
{
private const string REPOSITORY_NAME = "somename";
internal DynamicLogger(string name) : base(name)
{
try
{
// try and find an existing repository
base.Hierarchy = (log4net.Repository.Hierarchy.Hierarchy)LogManager.GetRepository(REPOSITORY_NAME);
} // try
catch
{
// it doesn't exist, make it.
base.Hierarchy = (log4net.Repository.Hierarchy.Hierarchy)LogManager.CreateRepository(REPOSITORY_NAME);
} // catch
} // ctor(string)
} // DynamicLogger
then, build out a class to manage the logging instances, and build the new loggers:
using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Core;
using log4net.Filter;
using log4net.Layout;
using log4net.Repository;
using Microsoft.Extensions.Options;
using System.Collections.Generic;
using System.Linq;
public class LogFactory
{
private static List<ILog> _Loggers = new List<ILog>();
private static LoggingConfig _Settings;
private static ILoggerRepository _Repository;
// must match the repository name used by DynamicLogger
private const string REPOSITORY_NAME = "somename";
public LogFactory(IOptions<LoggingConfig> configuration)
{
_Settings = configuration.Value;
ConfigureRepository(REPOSITORY_NAME);
} // ctor(IOptions<LoggingConfig>)
/// <summary>
/// Configures the primary logging repository.
/// </summary>
/// <param name="repositoryName">The name of the repository.</param>
private void ConfigureRepository(string repositoryName)
{
if(_Repository == null)
{
try
{
_Repository = LogManager.CreateRepository(repositoryName);
}
catch
{
// repository already exists.
_Repository = LogManager.GetRepository(repositoryName);
} // catch
} // if
} // ConfigureRepository(string)
/// <summary>
/// Gets a named logging instance, if it exists, and creates it if it doesn't.
/// </summary>
/// <param name="name"></param>
/// <returns></returns>
public ILog GetLogger(string name)
{
string filePath = string.Empty;
switch (name)
{
case "core":
filePath = _Settings.CoreLoggingDirectory;
break;
case "image":
filePath = _Settings.ImageProcessorLoggingDirectory;
break;
} // switch
if (_Loggers.SingleOrDefault(a => a.Logger.Name == name) == null)
{
BuildLogger(name, filePath);
} // if
return _Loggers.SingleOrDefault(a => a.Logger.Name == name);
} // GetLogger(string)
/// <summary>
/// Dynamically build a new logging instance.
/// </summary>
/// <param name="name">The name of the logger (Not file name)</param>
/// <param name="filePath">The file path you want to log to.</param>
/// <returns></returns>
private ILog BuildLogger(string name, string filePath)
{
// Create a new filter to include all logging levels, debug, info, error, etc.
var filter = new LevelMatchFilter();
filter.LevelToMatch = Level.All;
filter.ActivateOptions();
// Create a new pattern layout to determine the format of the log entry.
var pattern = new PatternLayout("%d %-5p %c %m%n");
pattern.ActivateOptions();
// Dynamic logger inherits from the hierarchy logger object, allowing us to create dynamically generated logging instances.
var logger = new DynamicLogger(name);
logger.Level = Level.All;
// Create a new rolling file appender
var rollingAppender = new RollingFileAppender();
// ensures it will not create a new file each time it is called.
rollingAppender.AppendToFile = true;
rollingAppender.Name = name;
rollingAppender.File = filePath;
rollingAppender.Layout = pattern;
rollingAppender.AddFilter(filter);
// allows us to dynamically generate the file name, ie C:\temp\log_{date}.log
rollingAppender.StaticLogFileName = false;
// ensures that the file extension is not lost in the renaming for the rolling file
rollingAppender.PreserveLogFileNameExtension = true;
rollingAppender.DatePattern = "yyyy-MM-dd";
rollingAppender.RollingStyle = RollingFileAppender.RollingMode.Date;
// must be called on all attached objects before the logger can use it.
rollingAppender.ActivateOptions();
logger.AddAppender(rollingAppender);
// Sets the logger to not inherit old appenders, or the core appender.
logger.Additivity = false;
// sets the loggers effective level, determining what level it will catch log requests for and log them appropriately.
logger.Level = Level.Info;
// ensures the new logger does not inherit the appenders of the previous loggers.
logger.Additivity = false;
// The very last thing that we need to do is tell the repository it is configured, so it can bind the values.
_Repository.Configured = true;
// bind the values.
BasicConfigurator.Configure(_Repository, rollingAppender);
LogImpl newLog = new LogImpl(logger);
_Loggers.Add(newLog);
return newLog;
} // BuildLogger(string, string)
} // LogFactory
Then, in your Dependency Injection you can inject your log factory. You can do that with something like this:
services.AddSingleton<LogFactory>();
Then in your controller, or any constructor really, you can just do something like this:
private LogFactory _LogFactory;
public HomeController(LogFactory logFactory){
_LogFactory = logFactory;
}
public async Task<IActionResult> Index()
{
ILog logger1 = _LogFactory.GetLogger("core");
ILog logger2 = _LogFactory.GetLogger("image");
logger1.Info("SomethingHappened on logger 1");
logger2.Info("SomethingHappened on logger 2");
return View();
}
This example will output:
2019-03-07 10:41:21,338 INFO core SomethingHappened on logger 1
in its own file called Core_2019-03-07.log
and also:
2019-03-07 11:06:29,155 INFO image SomethingHappened on logger 2
in its own file called Image_2019-03-07
Hope that makes more sense!

MvvmCross passing configuration to configurable plugin loader

I am creating a portable MockGeoLocationWatcher that one can substitute in place of the concrete implementations of IMvxGeoLocationWatcher until one has an actual device. This should facilitate development and testing of applications that require geo location.
The PluginLoader class for this plugin currently looks like this:
namespace Pidac.MvvmCross.Plugins.Location
{
public class PluginLoader : IMvxConfigurablePluginLoader
{
private bool _loaded;
public static readonly PluginLoader Instance = new PluginLoader();
public void EnsureLoaded()
{
if (_loaded)
return;
_loaded = true;
var locationWatcher = new MockGeoLocationWatcher();
var data = @"<?xml version='1.0' encoding='utf-8'?>
<WindowsPhoneEmulator xmlns='http://schemas.microsoft.com/WindowsPhoneEmulator/2009/08/SensorData'>
<SensorData>
<Header version='1' />
<GpsData latitude='48.619934106826' longitude='-84.5247359841114' />
<GpsData latitude='48.6852544862377' longitude='-83.9864059059864' />
<GpsData latitude='48.8445703681025' longitude='-83.7337203591114' />
<GpsData latitude='48.8662561090809' longitude='-83.2393355934864' />
<GpsData latitude='49.0825970371386' longitude='-83.0415816872364' />
<GpsData latitude='49.2621642999055' longitude='-82.7229781716114' />
<GpsData latitude='49.2621642999055' longitude='-82.6021285622364' />
<GpsData latitude='49.2047736379815' longitude='-82.3054977028614' />
</SensorData>
</WindowsPhoneEmulator>";
locationWatcher.SensorLocationData = data;
Mvx.RegisterSingleton(typeof(IMvxGeoLocationWatcher), locationWatcher);
}
public void Configure(IMvxPluginConfiguration configuration)
{
}
}
public class MockLocationWatcherConfiguration : IMvxPluginConfiguration
{
public static readonly MockLocationWatcherConfiguration Default = new MockLocationWatcherConfiguration();
// ideally, we should use this property to point to a file or string containing location data
// this should be configurable outside of code base.
public string SensorLocationData { get; set; }
}
}
I would like to pass the sensor data, currently hardcoded into the variable called "data", through an instance of MockLocationWatcherConfiguration, but I do not know where the MvvmCross framework expects to load the configuration for this plugin before IMvxConfigurablePluginLoader.Configure(configuration) is invoked. Ideally, I should specify this through configuration.
I looked at the Json plugin's implementation of PluginLoader but still could not figure out where the configuration was retrieved before a cast was attempted in IMvxConfigurablePluginLoader.Configure.
Any ideas or pointers will be greatly appreciated.
TIA.
This is covered in the draft wiki page https://github.com/slodge/MvvmCross/wiki/MvvmCross-plugins - see "writing a configurable plugin"

How to consume a complex object from a sproc using WCF Data Services / OData?

Using WCF Data Services (and the latest Entity Framework), I want to return data from a stored procedure. The returned sproc fields do not match 1:1 any entity in my db, so I create a new complex type for it in the edmx model (rather than attaching an existing entity):
Right-click the *.edmx model / Add / Function Import
Select the sproc (returns three fields) - GetData
Click Get Column Information
Add the Function Import Name: GetData
Click Create new Complex Type - GetData_Result
In the service, I define:
[WebGet]
public List<GetData_Result> GetDataSproc()
{
PrimaryDBContext context = new PrimaryDBContext();
return context.GetData().ToList();
}
I created a quick console app to test, and added a reference to System.Data.Services and System.Data.Services.Client - this after running Install-Package EntityFramework -Pre, but the versions on the libraries are 4.0 and not 5.x.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.Services.Client;
using ConsoleApplication1.PrimaryDBService;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
DataServiceContext context = new DataServiceContext(new Uri("http://localhost:50100/PrimaryDataService1.svc/"));
IEnumerable<GetData_Result> result = context.Execute<GetData_Result>(new Uri("http://localhost:50100/PrimaryDataService1.svc/GetDataSproc"));
foreach (GetData_Result w in result)
{
Console.WriteLine(w.ID + "\t" + w.WHO_TYPE_NAME + "\t" + w.CREATED_DATE);
}
Console.Read();
}
}
}
I didn't use the UriKind.Relative or anything else to complicate this.
When I navigate in the browser to the URL, I see data, but when I consume it in my console app, I get nothing at all.
Adding tracing to the mix:
<system.diagnostics>
<sources>
<source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
<listeners>
<add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\temp\WebWCFDataService.svclog" />
</listeners>
</source>
</sources>
</system.diagnostics>
... and opening it using the Microsoft Service Trace Viewer, I see two identical warnings:
Configuration evaluation context not found.
<E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
<System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
<EventID>524312</EventID>
<Type>3</Type>
<SubType Name="Warning">0</SubType>
<Level>4</Level>
<TimeCreated SystemTime="2012-04-03T14:50:11.8355955Z" />
<Source Name="System.ServiceModel" />
<Correlation ActivityID="{66f1a241-2613-43dd-be0c-341149e37d30}" />
<Execution ProcessName="WebDev.WebServer40" ProcessID="5176" ThreadID="10" />
<Channel />
<Computer>MyComputer</Computer>
</System>
<ApplicationData>
<TraceData>
<DataItem>
<TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Warning">
<TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.EvaluationContextNotFound.aspx</TraceIdentifier>
<Description>Configuration evaluation context not found.</Description>
<AppDomain>fd28c9cc-1-129779382115645955</AppDomain>
</TraceRecord>
</DataItem>
</TraceData>
</ApplicationData>
</E2ETraceEvent>
So why am I able to see data from the browser, but not when consumed in my app?
-- UPDATE --
I downloaded the Microsoft WCF Data Services October 2011 CTP which exposed DataServiceProtocolVersion.V3, created a new host and client and referenced Microsoft.Data.Services.Client (v4.99.2.0). Now I am getting the following error on the client when trying to iterate in the foreach loop:
There is a type mismatch between the client and the service. Type
'ConsoleApplication1.WcfDataServiceOctCTP1.GetDataSproc_Result' is an
entity type, but the type in the response payload does not represent
an entity type. Please ensure that types defined on the client match
the data model of the service, or update the service reference on the
client.
I tried the same thing by referencing the actual entity - works fine, so same issue.
Recap: I want to create a high-performing WCF service DAL (data access layer) that returns strongly-typed stored procedures. I initially used a "WCF Data Services" project to accomplish this. It seems as though it has its limitations, and after reviewing performance metrics of different ORM's, I ended up using Dapper for the data access inside a basic WCF Service.
I first created the *.edmx model and created the POCO for my sproc.
Next, I created a base BaseRepository and MiscDataRepository:
namespace WcfDataService.Repositories
{
public abstract class BaseRepository
{
protected static void SetIdentity<T>(IDbConnection connection, Action<T> setId)
{
dynamic identity = connection.Query("SELECT @@IDENTITY AS Id").Single();
T newId = (T)identity.Id;
setId(newId);
}
protected static IDbConnection OpenConnection()
{
IDbConnection connection = new SqlConnection(WebConfigurationManager.ConnectionStrings["PrimaryDBConnectionString"].ConnectionString);
connection.Open();
return connection;
}
}
}
namespace WcfDataService.Repositories
{
public class MiscDataRepository : BaseRepository
{
public IEnumerable<GetData_Result> SelectAllData()
{
using (IDbConnection connection = OpenConnection())
{
var theData = connection.Query<GetData_Result>("sprocs_GetData",
commandType: CommandType.StoredProcedure);
return theData;
}
}
}
}
The service class:
namespace WcfDataService
{
public class Service1 : IService1
{
private MiscDataRepository miscDataRepository;
public Service1()
: this(new MiscDataRepository())
{
}
public Service1(MiscDataRepository miscDataRepository)
{
this.miscDataRepository = miscDataRepository;
}
public IEnumerable<GetData_Result> GetData()
{
return miscDataRepository.SelectAllData();
}
}
}
... and then created a simple console application to display the data:
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
Service1Client client = new Service1Client();
IEnumerable<GetData_Result> result = client.GetData();
foreach (GetData_Result d in result)
{
Console.WriteLine(d.ID + "\t" + d.WHO_TYPE_NAME + "\t" + d.CREATED_DATE);
}
Console.Read();
}
}
}
I also accomplished this using PetaPOCO, which took much less time to set up than Dapper - a few lines of code:
namespace PetaPocoWcfDataService
{
// NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "Service1" in code, svc and config file together.
public class Service1 : IService1
{
public IEnumerable<GetData_Result> GetData()
{
var databaseContext = new PetaPoco.Database("PrimaryDBContext"); // using PetaPOCO for data access
databaseContext.EnableAutoSelect = false; // use the sproc to create the select statement
return databaseContext.Query<GetData_Result>("exec sproc_GetData");
}
}
}
I like how quick and simple it was to set up PetaPOCO, but using the repository pattern with Dapper will scale much better for an enterprise project.
It was also quite simple to create complex objects directly from the EDMX - for any stored procedure, then consume them.
For example, I created a complex return type called ProfileDetailsByID_Result based on the sq_mobile_profile_get_by_id sproc.
public ProfileDetailsByID_Result GetAllProfileDetailsByID(int profileID)
{
using (IDbConnection connection = OpenConnection("DatabaseConnectionString"))
{
try
{
var profile = connection.Query<ProfileDetailsByID_Result>("sq_mobile_profile_get_by_id",
new { profileid = profileID },
commandType: CommandType.StoredProcedure).FirstOrDefault();
return profile;
}
catch (Exception ex)
{
ErrorLogging.Instance.Fatal(ex); // use singleton for logging
return null;
}
}
}
So using Dapper along with some EDMX entities seems to be a nice quick way to get things going. I may be mistaken, but I'm not sure why Microsoft didn't think this all the way through - no support for complex types with OData.
--- UPDATE ---
So I finally got a response from Microsoft, when I raised the issue over a month ago:
We have done research on this and we have found that the Odata client
library doesn’t support complex types. Therefore, I regret to inform
you that there is not much that we can do to solve it.
*Optional: In order to obtain a solution for this issue, you have to use a Xml to Linq kind of approach to get the complex types.
Thank you very much for your understanding in this matter. Please let
me know if you have any questions. If we can be of any further
assistance, please let us know.
Best regards,
Seems odd.

How to process logically related rows after ItemReader in SpringBatch?

Scenario
To make it simple, let's suppose I have an ItemReader that returns me 25 rows.
The first 10 rows belong to student A
The next 5 belong to student B
and the 10 remaining belong to student C
I want to aggregate them together logically, say by studentId, and flatten them to end up with one row per student.
Problem
If I understand correctly, setting the commit interval to 5 will do the following:
Send 5 rows to the Processor (which will aggregate them or do any business logic I tell it to).
After processing, it will write those 5 rows.
Then it will do it again for the next 5 rows, and so on.
If that is true, then for the next five I will have to check the already written ones, get them out, aggregate them with the ones that I am currently processing, and write them again.
I personally do not like that.
What is the best practice to handle a situation like this in Spring Batch?
Alternative
Sometimes I feel that it would be much easier to write a regular Spring JDBC main program, so that I have full control over what I want to do. However, I wanted to take advantage of the job repository's state monitoring of the job, the ability to restart and skip, and the job and step listeners....
My Spring Batch Code
My module-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:batch="http://www.springframework.org/schema/batch"
xsi:schemaLocation="http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.1.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
<description>Example job to get you started. It provides a skeleton for a typical batch application.</description>
<batch:job id="job1">
<batch:step id="step1" >
<batch:tasklet transaction-manager="transactionManager" start-limit="100" >
<batch:chunk reader="attendanceItemReader"
processor="attendanceProcessor"
writer="attendanceItemWriter"
commit-interval="10"
/>
</batch:tasklet>
</batch:step>
</batch:job>
<bean id="attendanceItemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource">
<ref bean="sourceDataSource"/>
</property>
<property name="sql"
value="select s.student_name ,s.student_id ,fas.attendance_days ,fas.attendance_value from K12INTEL_DW.ftbl_attendance_stumonabssum fas inner join k12intel_dw.dtbl_students s on fas.student_key = s.student_key inner join K12INTEL_DW.dtbl_schools ds on fas.school_key = ds.school_key inner join k12intel_dw.dtbl_school_dates dsd on fas.school_dates_key = dsd.school_dates_key where dsd.rolling_local_school_yr_number = 0 and ds.school_code = ? and s.student_activity_indicator = 'Active' and fas.LOCAL_GRADING_PERIOD = 'G1' and s.student_current_grade_level = 'Gr 9' order by s.student_id"/>
<property name="preparedStatementSetter" ref="attendanceStatementSetter"/>
<property name="rowMapper" ref="attendanceRowMapper"/>
</bean>
<bean id="attendanceStatementSetter" class="edu.kdc.visioncards.preparedstatements.AttendanceStatementSetter"/>
<bean id="attendanceRowMapper" class="edu.kdc.visioncards.rowmapper.AttendanceRowMapper"/>
<bean id="attendanceProcessor" class="edu.kdc.visioncards.AttendanceProcessor" />
<bean id="attendanceItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
<property name="resource" value="file:target/outputs/passthrough.txt"/>
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.PassThroughLineAggregator" />
</property>
</bean>
</beans>
My supporting classes for the Reader.
A PreparedStatementSetter
package edu.kdc.visioncards.preparedstatements;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.springframework.jdbc.core.PreparedStatementSetter;
public class AttendanceStatementSetter implements PreparedStatementSetter {
public void setValues(PreparedStatement ps) throws SQLException {
ps.setInt(1, 7);
}
}
and a RowMapper
package edu.kdc.visioncards.rowmapper;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;
import edu.kdc.visioncards.dto.AttendanceDTO;
public class AttendanceRowMapper<T> implements RowMapper<AttendanceDTO> {
public static final String STUDENT_NAME = "STUDENT_NAME";
public static final String STUDENT_ID = "STUDENT_ID";
public static final String ATTENDANCE_DAYS = "ATTENDANCE_DAYS";
public static final String ATTENDANCE_VALUE = "ATTENDANCE_VALUE";
public AttendanceDTO mapRow(ResultSet rs, int rowNum) throws SQLException {
AttendanceDTO dto = new AttendanceDTO();
dto.setStudentId(rs.getString(STUDENT_ID));
dto.setStudentName(rs.getString(STUDENT_NAME));
dto.setAttDays(rs.getInt(ATTENDANCE_DAYS));
dto.setAttValue(rs.getInt(ATTENDANCE_VALUE));
return dto;
}
}
My processor
package edu.kdc.visioncards;
import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.item.ItemProcessor;
import edu.kdc.visioncards.dto.AttendanceDTO;
public class AttendanceProcessor implements ItemProcessor<AttendanceDTO, Map<Integer, AttendanceDTO>> {
private Map<Integer, AttendanceDTO> map = new HashMap<Integer, AttendanceDTO>();
public Map<Integer, AttendanceDTO> process(AttendanceDTO dto) throws Exception {
if(map.containsKey(new Integer(dto.getStudentId()))){
AttendanceDTO attDto = (AttendanceDTO)map.get(new Integer(dto.getStudentId()));
attDto.setAttDays(attDto.getAttDays() + dto.getAttDays());
attDto.setAttValue(attDto.getAttValue() + dto.getAttValue());
}else{
map.put(new Integer(dto.getStudentId()), dto);
}
return map;
}
}
My concerns about the code above
In the Processor, I create a HashMap, and as I process the rows I check whether I already have that student in the Map; if it's not there, I add it. If it's already there, I grab it, get the values that I am interested in, and add them to the row that I am currently processing.
After that, the Spring Batch framework writes to a file according to my configuration.
My question is as follows:
I do not want it to go to the writer; I want to process all the remaining rows. How do I keep this Map that I have created in memory for the next set of rows that need to go through this same Processor? Every time a row is processed through AttendanceProcessor, the Map is initialized. Should I put the Map initialization in a static block?
In my application I created a CollectingJdbcCursorItemReader that extends the standard JdbcCursorItemReader and performs exactly what you need. Internally it uses my CollectingRowMapper: an extension of the standard RowMapper that maps multiple related rows to one object.
Here is the code of the ItemReader; the code of the CollectingRowMapper interface, and an abstract implementation of it, is available in another answer of mine.
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.batch.item.ReaderNotOpenException;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.jdbc.core.RowMapper;
/**
 * A JdbcCursorItemReader that uses a {@link CollectingRowMapper}.
 * Like the superclass, this reader is not thread-safe.
 *
 * @author Pino Navato
 **/
public class CollectingJdbcCursorItemReader<T> extends JdbcCursorItemReader<T> {
private CollectingRowMapper<T> rowMapper;
private boolean firstRead = true;
/**
 * Accepts a {@link CollectingRowMapper} only.
 **/
@Override
public void setRowMapper(RowMapper<T> rowMapper) {
this.rowMapper = (CollectingRowMapper<T>)rowMapper;
super.setRowMapper(rowMapper);
}
/**
* Read next row and map it to item.
**/
@Override
protected T doRead() throws Exception {
if (rs == null) {
throw new ReaderNotOpenException("Reader must be open before it can be read.");
}
try {
if (firstRead) {
if (!rs.next()) { //Subsequent calls to next() will be executed by rowMapper
return null;
}
firstRead = false;
} else if (!rowMapper.hasNext()) {
return null;
}
T item = readCursor(rs, getCurrentItemCount());
return item;
}
catch (SQLException se) {
throw getExceptionTranslator().translate("Attempt to process next row failed", getSql(), se);
}
}
@Override
protected T readCursor(ResultSet rs, int currentRow) throws SQLException {
T result = super.readCursor(rs, currentRow);
setCurrentItemCount(rs.getRow());
return result;
}
}
You can use it just like the classic JdbcCursorItemReader: the only requirement is that you provide it a CollectingRowMapper instead of the classic RowMapper.
I always follow this pattern:
I make my reader scope "step", and in @PostConstruct I fetch the results and put them in a Map.
In the processor, I convert the associated collection into a writable list and send the writable list.
In the ItemWriter, I persist the writable item(s), depending on the case.
A rough sketch of this pattern is shown below.
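A minimal sketch of that pattern, reusing AttendanceDTO and AttendanceRowMapper from the question (the class name StudentAggregateReader, the constructor parameters, and the assumption that the bean is declared with scope="step" are mine, for illustration only):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import javax.annotation.PostConstruct;
import javax.sql.DataSource;
import org.springframework.batch.item.ItemReader;
import org.springframework.jdbc.core.JdbcTemplate;
import edu.kdc.visioncards.dto.AttendanceDTO;
import edu.kdc.visioncards.rowmapper.AttendanceRowMapper;

// Hypothetical step-scoped reader: loads all rows once in @PostConstruct, groups them
// by student id, and then returns one List<AttendanceDTO> per student to the processor.
public class StudentAggregateReader implements ItemReader<List<AttendanceDTO>> {

    private final JdbcTemplate jdbcTemplate;
    private final String sql;
    private Iterator<List<AttendanceDTO>> groups;

    public StudentAggregateReader(DataSource dataSource, String sql) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
        this.sql = sql; // e.g. the attendance query from the XML configuration above
    }

    @PostConstruct
    public void loadAndGroup() {
        Map<String, List<AttendanceDTO>> byStudent = new LinkedHashMap<String, List<AttendanceDTO>>();
        for (AttendanceDTO dto : jdbcTemplate.query(sql, new AttendanceRowMapper<AttendanceDTO>())) {
            List<AttendanceDTO> rows = byStudent.get(dto.getStudentId());
            if (rows == null) {
                rows = new ArrayList<AttendanceDTO>();
                byStudent.put(dto.getStudentId(), rows);
            }
            rows.add(dto);
        }
        this.groups = byStudent.values().iterator();
    }

    public List<AttendanceDTO> read() {
        // returning null tells Spring Batch that the input is exhausted
        return (groups != null && groups.hasNext()) ? groups.next() : null;
    }
}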
Because you changed your question, I add a new answer.
If the students are ordered, then there is no need for a list/map: you could keep exactly one student object in the processor as the "current" one and aggregate on it until a new one arrives (read: the id changes).
If the students are not ordered, you will never know when a specific student is "finished", and you would have to keep all students in a map, which can't be written until the end of the complete read sequence.
Beware:
the processor needs to know when the reader is exhausted
it's hard to get it working with any commit-rate and "id" concept; if you aggregate items that are somehow identical, the processor just can't know whether the currently processed item is the last one
Basically, the use case is either solved completely at the reader level or at the writer level (see the other answer).
private SimpleItem currentItem;
private StepExecution stepExecution;
@Override
public SimpleItem process(SimpleItem newItem) throws Exception {
SimpleItem returnItem = null;
if (currentItem == null) {
currentItem = new SimpleItem(newItem.getId(), newItem.getValue());
} else if (currentItem.getId() == newItem.getId()) {
// aggregate somehow
String value = currentItem.getValue() + newItem.getValue();
currentItem.setValue(value);
} else {
// "clone"/copy currentItem
returnItem = new SimpleItem(currentItem.getId(), currentItem.getValue());
// replace currentItem
currentItem = newItem;
}
// reader exhausted?
if(stepExecution.getExecutionContext().containsKey("readerExhausted")
&& (Boolean)stepExecution.getExecutionContext().get("readerExhausted")
&& currentItem.getId() == stepExecution.getExecutionContext().getInt("lastItemId")) {
returnItem = new SimpleItem(currentItem.getId(), currentItem.getValue());
}
return returnItem;
}
Basically, you are talking about batch processing with changing IDs (1), where the batch has to keep track of the change.
For Spring/Spring Batch we talk about:
an ItemWriter which checks the list of items for an id change
before the change, the items are stored in a temporary datastore (2) (List, Map, whatever) and are not written out
when the id changes, the aggregating/flattening business code runs on the items in the datastore and one item should be written; the datastore can then be used for the next items with the next id
this concept needs a reader which tells the step "I'm exhausted" to properly flush the temporary datastore at the end of the items (file/database)
Here is a rough and simple code example:
@Override
public void write(List<? extends SimpleItem> items) throws Exception {
// setup with first sharedId at startup
if (currentId == null){
currentId = items.get(0).getSharedId();
}
// check for change of sharedId in input
// keep items in temporary dataStore until id change of input
// call delegate if there is an id change or if the reader is exhausted
for (SimpleItem item : items) {
// already known sharedId, add to tempData
if (item.getSharedId() == currentId) {
tempData.add(item);
} else {
// or new sharedId, write tempData, empty it, keep new id
// the delegate does the flattening/aggregating
delegate.write(tempData);
tempData.clear();
currentId = item.getSharedId();
tempData.add(item);
}
}
// check if reader is exhausted, flush tempData
if ((Boolean) stepExecution.getExecutionContext().get("readerExhausted")
&& tempData.size() > 0) {
delegate.write(tempData);
// optional delegate.clear();
}
}
(1) assuming the items are ordered by an ID (can be composite too)
(2) a HashMap Spring bean for thread safety
Use a StepExecutionListener and store the records as a map in the StepExecutionContext; you can then group them in the writer or a writer listener and write them all at once. A rough sketch follows.
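A minimal sketch of that idea (the class name and the "aggregates" key are made up for illustration; the processor must also be registered as a listener on the step so that beforeStep/afterStep are called):

import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.item.ItemProcessor;
import edu.kdc.visioncards.dto.AttendanceDTO;

// Hypothetical processor/listener: aggregates rows per student into a map that is shared
// through the step's ExecutionContext under the key "aggregates".
public class AccumulatingAttendanceProcessor
        implements ItemProcessor<AttendanceDTO, AttendanceDTO>, StepExecutionListener {

    private Map<String, AttendanceDTO> aggregates;

    public void beforeStep(StepExecution stepExecution) {
        aggregates = new HashMap<String, AttendanceDTO>();
        stepExecution.getExecutionContext().put("aggregates", aggregates);
    }

    public AttendanceDTO process(AttendanceDTO dto) {
        AttendanceDTO existing = aggregates.get(dto.getStudentId());
        if (existing == null) {
            aggregates.put(dto.getStudentId(), dto);
        } else {
            existing.setAttDays(existing.getAttDays() + dto.getAttDays());
            existing.setAttValue(existing.getAttValue() + dto.getAttValue());
        }
        // returning null filters the row out, so nothing reaches the writer during the chunks
        return null;
    }

    public ExitStatus afterStep(StepExecution stepExecution) {
        // a writer-side listener (or this afterStep) can now read "aggregates" from the
        // ExecutionContext and write one flattened row per student in a single pass
        return stepExecution.getExitStatus();
    }
}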

Get a list of service implementations with OSGi declarative services

I have a very simple example of declarative services. I'm following this tutorial http://www.eclipsezone.com/eclipse/forums/t97690.html?start=0. Everything is working as expected. However, I cannot figure out how I can make the "SampleImporter" (the bundle that is expected to use other bundles' services) aware of the list of "SampleExporter" (the bundle providing a service). In other words, I want the "SampleImporter" to see the ID of the bundle(s) that it is eventually using. This information is very useful for my application.
Here is the XML file for the SampleExporter:
<?xml version="1.0"?>
<component name="samplerunnable">
<implementation class="org.example.ds.SampleRunnable"/>
<property name="ID" value="expoter" />
<service>
<provide interface="java.lang.Runnable"/>
</service>
</component>
while for the SampleImporter:
<?xml version="1.0"?>
<component name="commandprovider1">
<implementation class="org.example.ds.SampleCommandProvider1"/>
<service>
<provide interface="org.eclipse.osgi.framework.console.CommandProvider"/>
</service>
<reference name="RUNNABLE"
interface="java.lang.Runnable"
bind="setRunnable"
unbind="unsetRunnable"
cardinality="0..1"
policy="dynamic"/>
</component>
In the Importer side, I have the following function:
public class SampleCommandProvider1 implements CommandProvider {
private Runnable runnable;
public synchronized void setRunnable(Runnable r) {
runnable = r;
}
public synchronized void unsetRunnable(Runnable r) {
runnable = null;
}
public synchronized void _run(CommandInterpreter ci) {
if(runnable != null) {
runnable.run();
} else {
ci.println("Error, no Runnable available");
}
}
public String getHelp() {
return "\trun - execute a Runnable service";
}
}
This works fine but then if I want to get the value of the property, using
public synchronized void setRunnable(Runnable r, Map properties)
or
public synchronized void setRunnable(Runnable r, ServiceReference reference)
the run method of the exporter is never called, which means that the bind function (setRunnable) is not called. However, using the console command "services" I see that the exporter bundle is used by the importer one. Also, using ss and ls I can see that the exporter component is "satisfied".
What is wrong with my implementation?
Thanks in advance
Cheers
Marie
The following bind signature is not supported by any version of DS:
public void setRunnable(Runnable r, ServiceReference ref)
Instead you will have to take only the ServiceReference and use either the ComponentContext or BundleContext to access the service instance object.
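For illustration, a minimal untested sketch of the importer using a ServiceReference-only bind method and looking the service (and the exporter's "ID" property) up through the ComponentContext:

import org.eclipse.osgi.framework.console.CommandInterpreter;
import org.eclipse.osgi.framework.console.CommandProvider;
import org.osgi.framework.ServiceReference;
import org.osgi.service.component.ComponentContext;

public class SampleCommandProvider1 implements CommandProvider {

    private ComponentContext context;
    private ServiceReference runnableRef;

    // standard DS activate method
    protected void activate(ComponentContext context) {
        this.context = context;
    }

    // declared in the XML as bind="setRunnable"
    public synchronized void setRunnable(ServiceReference ref) {
        this.runnableRef = ref;
    }

    public synchronized void unsetRunnable(ServiceReference ref) {
        this.runnableRef = null;
    }

    public synchronized void _run(CommandInterpreter ci) {
        if (runnableRef != null) {
            // resolve the service object via the reference name used in the XML ("RUNNABLE")
            Runnable runnable = (Runnable) context.locateService("RUNNABLE", runnableRef);
            ci.println("Provider ID=" + runnableRef.getProperty("ID")
                    + ", bundle=" + runnableRef.getBundle().getBundleId());
            runnable.run();
        } else {
            ci.println("Error, no Runnable available");
        }
    }

    public String getHelp() {
        return "\trun - execute a Runnable service";
    }
}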
Alternatively if you want a more POJO-style way of accessing service properties, the following bind signature is allowed in DS 1.1 (but not in DS 1.0):
public void setRunnable(Runnable r, Map properties)
To access DS 1.1 features, you need to add the correct namespace to your XML as follows:
<component xmlns='http://www.osgi.org/xmlns/scr/v1.1.0' name='...'>
By the way, I wrote this original article a very long time ago! These days I would use bnd annotations to avoid having to write the XML document by hand.
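For reference, a roughly equivalent importer written with the standard OSGi DS annotations (org.osgi.service.component.annotations, which serve the same purpose as the bnd annotations mentioned above) might look like the sketch below; it only shows the reference binding, and the exact attributes should be treated as illustrative:

import java.util.Map;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

@Component(name = "commandprovider1")
public class AnnotatedCommandProvider {

    private volatile Runnable runnable;

    // DS 1.1+ style bind method: the service object plus its properties
    @Reference(cardinality = ReferenceCardinality.OPTIONAL, policy = ReferencePolicy.DYNAMIC)
    protected void setRunnable(Runnable r, Map<String, Object> properties) {
        this.runnable = r;
        System.out.println("Bound Runnable published with ID=" + properties.get("ID"));
    }

    protected void unsetRunnable(Runnable r) {
        this.runnable = null;
    }
}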